Security Forward Workshop & Virtual Cocktails - 8 December 2020 - Virtual Meeting

Tuesday, 08 December 2020
By Christopher Smith

A virtual meeting took place between the members of the forum, chaired by Chris Smith, Programme Co-Chairman, and supported by Keith Holland, Programme Chairman Emeritus.

The chairman made the usual introductions, thanking Alderman Sheriff Professor Michael Mainelli and Linda Cook for facilitating the day's meeting. The agenda for the day comprised the usual sharing of current in-trays and a question posed by Alderman Sheriff Professor Michael Mainelli: 'What is disinformation?'

In-trays

Forum members discussed and debated the following:

  • The cyber security challenges of working from home. The member averred that “your home is your office and your technology is a front door into your home.” The discussion centred on changing the business model, and the members asked Jon Cosson to give a presentation at a future meeting.
  • The importance of security as a function during the Covid-19 pandemic. The member asked how we could and should sustain security's relevance, and the discussion centred on the Forum members' main efforts for 2021.
  • The rise in credit card fraud driven by the increase in online purchasing by vulnerable members of the community.
  • The future of the City of London and some of the infrastructure improvements planned and underway there.
  • The mental health issues arising from working from home. The member highlighted the importance of connectivity and the concerns he had seen raised about working in isolation.
  • The insider threat, magnified by the disenfranchisement of furloughed staff on 80% of their wages. This linked into a discussion about fraud prevention, i.e. promoting education in security, and how to get teams working effectively without meeting face to face.
  • The issues around British businesses working in foreign countries, particularly in North Africa, where bad debt caused a company to close.

Today's question, posed by Professor Michael Mainelli: What is disinformation?

After much confusion and several proposed answers, Michael provided the answer: two-thirds of the average of all the answers we gave, recurring down to zero. One member was seen reaching for the gin bottle.
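
For readers unfamiliar with the puzzle, this is the classic "guess two-thirds of the average" game; a minimal sketch of the iterated reasoning follows, with invented first-round guesses for illustration:

    # Iterated "guess two-thirds of the average" game: if everyone reasons
    # that the winning guess is 2/3 of the average guess, and everyone knows
    # that everyone reasons this way, the target recurs down towards zero.
    guesses = [50, 33, 66, 22, 10]  # hypothetical first-round answers

    average = sum(guesses) / len(guesses)
    for round_number in range(1, 11):
        target = (2 / 3) * average
        print(f"Round {round_number:2d}: two-thirds of average = {target:.4f}")
        average = target  # next round, everyone guesses the previous target
    # The only stable answer (Nash equilibrium) is zero.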

The Forum heard about Z/ealous, a general problem-solving tool used to help make better decisions through six stages: establish endeavour; assess and appraise; lookaheads and likelihoods; options and outcomes; understanding and undertaking; and securing and scoring. These lead to the concept of Enchancing, which helps to strategise good or bad outcomes in relation to their probability. The aim is to reduce volatility or increase certainty; disinformation (or misinformation) may increase volatility. The debate moved on to the question of whether we spend too much time analysing information. Should we not be thinking more? The Forum heard that misinformation is defined as false information that is spread regardless of any intent to mislead, whereas disinformation is misinformation spread deliberately. The term has its origins in Soviet propaganda of the early 1970s. Areas of disinformation are seen in public opinion polls, climate change, economic recovery, commercial occupancy rates and, of course, COVID-19 and the anti-vax question.
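
As a toy numerical illustration only (invented numbers, and not part of the Z/ealous methodology itself), comparing the spread of two hypothetical outcome distributions shows how disinformation can raise volatility without changing the expected outcome:

    import statistics

    # Hypothetical payoff distributions for some endeavour, invented purely
    # for illustration: disinformation widens the range of plausible
    # outcomes, i.e. it increases volatility without changing the mean.
    outcomes_clean = [8, 9, 10, 11, 12]   # well-informed environment
    outcomes_noisy = [2, 6, 10, 14, 18]   # disinformation-laden environment

    for label, outcomes in [("clean", outcomes_clean), ("noisy", outcomes_noisy)]:
        mean = statistics.mean(outcomes)
        volatility = statistics.stdev(outcomes)
        print(f"{label}: expected outcome = {mean:.1f}, volatility = {volatility:.2f}")
    # Same expected outcome (10.0) in both cases, but far higher volatility
    # in the noisy case, so decisions taken under it carry less certainty.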

The discussion then moved on to another domain, that of regulation, where regulators sought to out-model the banks they were regulating. Could that have been a mistake? Was Big Data being used as well as it could have been? In thinking about novel ways to regulate, we discussed the idea that economic big data must be understood in the context of real humans making real decisions in radically uncertain contexts. Emotional finance research suggests that, when faced with radical uncertainty, financial actors construct conviction narratives that provide them with the competence to act. These sometimes become focused on a 'phantastic' object, which leads to divided states of mind precluding rational decisions. Sometimes these emotional states propagate through social networks as "groupfeel" and increase the uncertainty of outcomes. Searching and digging into dissonance is likely to reveal more useful information (truths?).

The discussion moved on to: conviction narratives – people believe things and then look for facts to reinforce their beliefs; prediction markets – where conflicts are likely to arise; and the use of 'Sherlock' in assembling mental maps.

We heard that Z/Yen pushes out 500 bulletins every week, including one on Security Forward! A whole variety of subject areas is available.

In summarising the discussion, we heard that there is a whole array of techniques, growth in information and growth in disinformation. So how to pull it all together? The answer seems to stymie most people. We heard about a large multinational organisation which runs a very high-profile, high-speed horizon-scanning approach; what is extraordinary is that its product is produced every six months, on paper. We heard that a more appropriate approach would combine narrative analysis with data-capturing techniques and technology, e.g. tagmaps for context and tone; tagging and taxonomy; various analytical techniques, including synchronicity and dissonance mapping; visualisation; snapshots, time stamps and storage; and bulletins/alerts. The methodology sees an entire area of Twitter fed into the assembly machine (Sherlock), where a taxonomy of the meme is mapped. This in turn allows phantastic objects to be identified; outcomes such as loss of virility from taking a vaccine. In assembling a suite of such phantastic objects, a counter-narrative (anti-messages) can be created, such as “if men don't take vaccines, they all seem to be less fit”. Further analysis is required to make sure that the right sort of message is reaching the right sort of person in the right sort of medium.
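
As a rough sketch of the tagging-and-taxonomy step alone (the taxonomy, trigger phrases and posts below are invented, and this is not Z/Yen's actual Sherlock machinery), one might count how often each meme in a taxonomy surfaces in a stream of posts:

    from collections import Counter

    # Hypothetical meme taxonomy (meme label -> trigger phrases), invented
    # for illustration; a real taxonomy would be far richer and hand-curated.
    TAXONOMY = {
        "vaccine-virility": ["virility", "fertility", "makes men weak"],
        "vaccine-microchip": ["microchip", "tracking chip"],
    }

    def tag_post(text):
        """Return the meme labels whose trigger phrases appear in the post."""
        lowered = text.lower()
        return [meme for meme, phrases in TAXONOMY.items()
                if any(phrase in lowered for phrase in phrases)]

    # Hypothetical stream of posts standing in for the Twitter feed.
    posts = [
        "Heard the jab affects virility?!",
        "They put a tracking chip in every dose",
        "Got my second dose today, feeling fine",
    ]

    meme_counts = Counter(meme for post in posts for meme in tag_post(post))
    print(meme_counts)  # a snapshot to time-stamp, store, visualise and alert on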

It was suggested that we are on the cusp of building gigantic disinformation machines, perhaps on the ethical side of this, or perhaps not. This is exacerbated by the massive increase in (recorded) video-conferencing activity, which provides masses of data. We could even see a new bot acting as a virtual meeting note-taker (would the bot really not record any “not for the minutes” comments?).

This part of the discussion ended with: we're seeing an explosion of automation. Is there any way to stop the disinformation arms race? Could large amounts of disinformation lead to splinterisation of the net? Has the time come for corporates to develop active anti-disinformation campaigns? Is this reputational risk a job for security professionals? Regardless, what part should security professionals play?

The Forum then heard from a member about his views on disinformation from the perspective of political warfare, focussing on a nuts-and-bolts approach to what disinformation is and what disinformation isn't.

The Forum was invited to accept that misinformation is the process of misunderstanding things, whereas malinformation is the use of correct information for malign purposes (e.g. political leaks). Disinformation, promulgated by technology (computational propaganda), deliberately and purposely generates misleading misinformation and comprises three elements: the use of algorithms, the use of automation, and some degree of malign human curation to spread false information. The human agent, not the machine, is the greatest spreader of malinformation, disinformation and misinformation.

We heard that disinformation is a multifaceted phenomenon; it is not just one thing. It may include: “astroturfing”, where grassroots organisations are created with no substance or depth; “fake accounts” – it was suggested that two-thirds of Trump's followers on Twitter are fake accounts; and “surfacing”, where you take false information and push it through a third party for it to be surfaced, in order to gain some validation or credibility for that false information. The speed and rapidity with which false information spreads is akin to a virus, and epidemiological terms are valid.
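
To make the epidemiological analogy concrete, here is a minimal SIR-style sketch (all parameters invented for illustration), treating exposure to a false story as 'infection' and debunking as 'recovery':

    # Minimal SIR-style model of a false story spreading: S = not yet
    # exposed, I = actively sharing, R = debunked / no longer sharing.
    # All parameters are invented for illustration.
    population = 1_000_000
    s, i, r = population - 10.0, 10.0, 0.0
    beta = 0.5    # onward shares per sharing account per day
    gamma = 0.2   # rate at which sharers stop or are debunked per day

    for day in range(1, 31):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        if day % 5 == 0:
            print(f"Day {day:2d}: sharing={i:10,.0f}, debunked={r:10,.0f}")
    # With beta > gamma the story spreads exponentially at first, which is
    # why getting in early (pre-bunking) beats getting in late (debunking).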

Countering this spread of misinformation is enormously difficult. Propaganda can be seen as white, grey or black propaganda. How do we counter black propaganda or disinformation? We need to think about information manoeuvres – a battle. Do we need a culture in which media literacy is taught to young people? How do we analyse and assess information at the rate of, for example, 3,000 tweets a second in the United Kingdom? How do we respond in real time to disinformation or malign misinformation? There are two ways of doing this, and their effectiveness is questionable: the art of pre-bunking, where you get in early, and debunking, where you get in late. It was suggested that the only way to counter the problem over the medium to long term is to create an alternative narrative and to clearly understand the impact of this strategy.