Imagine a war fought in cyberspace. A digital conflict designed to behave like a computer virus, one which attacks the mind.

Malware for humans.

This is not fiction. Not a future dystopia. This is reality. Our reality of today…

This is part one of #LoveBomb, a special investigation* into weaponised data and disinformation, social media, and counter-offensives.

*The format is, in and of itself, experimental and draws on multiple formats which will be refined over the course of the investigation.

An Introduction to #LoveBomb

https://youtu.be/f28fUocoSeY

We have known for years we are targeted in every aspect of our daily lives. How we choose what to eat, what to wear. Even my eight-year-old knows certain colours are better at attracting us to food outlets.

The science behind marketing is incredibly powerful, and the industry has been expanding for decades. Then, along came social media and a new playing field opened up. A world where colour’s meaning has been taken to the next level.

That little white ‘F’ on the blue background? It’s designed that way so it’s visible even to colour-blind people, like Mark Zuckerberg.

But colour and this marketing science have spread, and now impact the way we vote and how nations behave. This is how an understanding of human psychology has resulted in the development of military technology: a variety of weapons trained on our minds, which spread through our networks in the same way as a computer virus.

Malware for people, which can’t be treated with traditional medicine.

So, how do we deal with this emergency? I’ve been investigating and found the building blocks of the answer within the DNA of the problem itself.

Part 1:

The world has changed over the last two years, at a pace so rapid many of us are still trying to catch up with what happened. However, even in the last six months we have learned a great deal.

We know Brexit was tainted by the illegal use of big data, and those same people were directly involved in Trump’s election. We also know Russia was heavily involved in what amounts to a grand scheme of psychological manipulation in partnership with the far-right, the Alternative War.

What we have never really been able to explain, despite a number of efforts, is why the technique deployed was so successful. We haven’t been able to pinpoint the mechanics of how it all works, or why our own actions have helped to feed the problem even as we shout about it as loudly as possible across our own global social networks. Until now.

The way social networks (and our personal data) have been weaponised has been stripped back to components and reverse engineered to show not only the scale of the problem we face, but how the same principles can be applied in creating ethical counter-offensives to be deployed in a digital resistance.

The key to this investigation, to exposing how we have been left vulnerable and used, is network centrality.

Network Centrality:

It is, in effect, the analysis of a social network to “cut through noise and hone in on the important parts of a network,” according to Cambridge Intelligence** – an international company specialising in this type of analytical work.

By analysing social media, it is possible to create a full understanding of the dynamic of interaction between users.
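To make that concrete, here is a minimal, hypothetical sketch (using Python and the open-source networkx library, with invented account names) of how ‘who follows whom’ is modelled as a graph, the raw material every centrality measure below is calculated from.

# Hypothetical sketch: modelling 'who follows whom' as a directed graph.
# Account names are invented; a real analysis would ingest platform API data.
import networkx as nx

follows = [
    ("alice", "leave_hq"), ("bob", "leave_hq"), ("carol", "leave_hq"),
    ("leave_hq", "broadcaster"), ("carol", "dave"), ("dave", "broadcaster"),
    ("broadcaster", "alice"), ("eve", "carol"),
]

G = nx.DiGraph()
G.add_edges_from(follows)  # each pair is (follower, account they follow)

print(G.number_of_nodes(), "accounts,", G.number_of_edges(), "follow relationships")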

Measuring the aspect defined as centrality allows you to identify the key people within a group – say, key Leave figures, important Remain advocates, or high-profile Alt-right activists. It also allows you to spread your message, whatever that may be, as efficiently as possible.

Within this discipline are three core types of centrality analysis, each of which can be used in different ways for slightly different purposes.

Degree Centrality, in essence, counts the number of direct links each person has (followers and following, for example) and is used to identify how many people sit just one jump away from them.

Using this method, you could quickly identify popular people who can rapidly spread a message across a broad network.

Taking Donald Trump as an example, an account inflated even with fake followers provides broader access to a human network, which is essential if you wish to ensure your message is heard widely.

This is the most basic of the three measures, though you can break it down further: you could use it to specifically identify people with a limited following but who follow many, or to establish how much they interact and when.
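As a rough illustration, the sketch below (a hypothetical networkx example with invented accounts) computes degree centrality on a small follow-graph; splitting it into in-degree (followers) and out-degree (following) is how you would isolate those follow-many, followed-by-few accounts.

# Hypothetical sketch: degree centrality on a small invented follow-graph.
import networkx as nx

G = nx.DiGraph([
    ("alice", "leave_hq"), ("bob", "leave_hq"), ("carol", "leave_hq"),
    ("leave_hq", "broadcaster"), ("dave", "broadcaster"), ("eve", "carol"),
])

followers = nx.in_degree_centrality(G)    # normalised count of accounts following each user
following = nx.out_degree_centrality(G)   # normalised count of accounts each user follows

# Rank by follower-based reach: the popular nodes that can spread a message quickly.
for account, score in sorted(followers.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{account:12s} in {score:.2f}  out {following[account]:.2f}")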

Betweenness Centrality, in simple terms, measures how often a person sits on the shortest route between other people. It is used to identify the people in a social network who are the best bridges to others, with a view to ensuring a message is spread – basically, highlighting the most effective people to target in order to pass information around as swiftly as possible.

Examining the bot networks, it’s clear the accounts not only act as bridges themselves; their often random-seeming following and engagement activity is also specifically targeted. For example, consider the “Leave-Bots” which have actively pursued Remain advocates.

Betweenness can be hit and miss, according to the experts, and can produce false results – meaning you might end up targeting a useless peripheral user by mistake.
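A comparable sketch, under the same assumptions (Python, networkx, invented accounts), shows why the bridging accounts score highest on betweenness even when their raw follower counts are modest.

# Hypothetical sketch: betweenness centrality highlights bridging accounts.
import networkx as nx

# Two clusters of accounts joined only through "bridge_bot".
G = nx.Graph([
    ("remainer_1", "remainer_2"), ("remainer_2", "remainer_3"),
    ("leaver_1", "leaver_2"), ("leaver_2", "leaver_3"),
    ("remainer_2", "bridge_bot"), ("bridge_bot", "leaver_2"),
])

scores = nx.betweenness_centrality(G)  # fraction of shortest paths passing through each node
for account, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{account:12s} {score:.2f}")
# "bridge_bot" tops the ranking: target (or remove) it and the two groups disconnect.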

The third type of analysis is Closeness Centrality. Unsurprisingly, this looks specifically at how closely connected users are within a group and picks out the best way of targeting people in the shortest possible time, almost like passing messages through familial relationships.

Closeness identifies influencers within a social network, people whose message matters to others and who are listened to with some immediacy.

For example, think of a figure like Julian Assange, who has the ear of a large number of people even after his message and activity changed completely. Closeness Centrality puts a saddle on trust and gives you access to ‘broadcasters.’
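The same kind of hypothetical networkx sketch for closeness picks out the account that can reach everyone else in the fewest hops, the ‘broadcaster’ role described above.

# Hypothetical sketch: closeness centrality finds the account nearest to everyone else.
import networkx as nx

G = nx.Graph([
    ("broadcaster", "alice"), ("broadcaster", "bob"),
    ("broadcaster", "carol"), ("broadcaster", "dave"), ("dave", "eve"),
])

scores = nx.closeness_centrality(G)  # inverse of the average shortest-path distance to all others
for account, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{account:12s} {score:.2f}")
# A message seeded with "broadcaster" reaches the whole group in the fewest steps.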

Key Finding:

Understanding Network Centrality and the three main measures is the first, essential step in understanding the true importance of the troll farms, fake followers, and the use of social media by influential figures.

It also explains how we have been mapped to ensure we are targeted in the right way, by the right people, at the right time, to help spread fake news and disinformation, even where our intention in sharing it has been to call it out or decry it.

This is the first building block in the DNA of militarised technology and, equally, the first white cell in the experimental immune system this investigation has been developing.

Next In Series:

The next two articles in this investigative series will cover the psychological principles and impacts of marketing, and explore network centrality in action – using Buzzfeed and post-terror incident reactions as real-world examples of how it is deployed differently, to the same ends, across our networks.

You can support this series and the ongoing work behind it here.

**Though a full white paper is available, Cambridge Intelligence requests personal information, so the link will not appear in this article.