Using big data, companies such as Cambridge Analytica can conduct what is called an ‘OCEAN’ personality assessment – a five-factor model normally used in psychology – and the more expansive the data held, the more intricate your individual profile becomes. With the ‘right’ data and an app like uCampaign, that profile can then be used to target the people you know too.
A basic profile, as the academic Michal Kosinski found in his research, can predict your behaviours from social media likes alone. An advanced profile – built from the websites you visit, the news you read, your job, your politics, your purchases, your medical records – would mean such a company knows you much better than you know yourself. This allows the people who pay for such services to target you at an individual level with news, information, or social media posts, each tweaked to have the biggest possible psychological impact on you.
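To make the mechanics concrete, here is a minimal, purely illustrative sketch of the like-based approach – synthetic data, an invented trait score, and off-the-shelf tools, not Kosinski’s actual models – showing how a matrix of likes can be turned into a psychometric prediction:

```python
# Illustrative sketch only: a toy version of like-based trait prediction,
# using synthetic data and scikit-learn. The real research used millions
# of Facebook likes; every value here is invented.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Rows = users, columns = pages; 1 means the user "liked" that page.
likes = rng.integers(0, 2, size=(1000, 500))

# Hypothetical questionnaire scores for one OCEAN trait (say, openness),
# generated so they correlate weakly with a handful of the likes.
openness = likes[:, :10].sum(axis=1) * 0.3 + rng.normal(size=1000)

# Compress the sparse like matrix into latent dimensions, then regress
# the trait score on them - the core of like-based psychometrics.
model = make_pipeline(TruncatedSVD(n_components=50, random_state=0),
                      Ridge(alpha=1.0))
model.fit(likes[:800], openness[:800])

# Predict the trait for unseen users from their likes alone.
print(model.predict(likes[800:805]))
```

The point of the sketch is how little is needed: once likes are in a matrix, standard dimensionality reduction plus regression does the rest.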
Fake news and alternative facts are a central part of this, and that includes hacked data dumps which can be used to discredit targets. Two Russian terms are relevant in the broader context: ‘pokazukha’, which means something like a staged stunt, and ‘zakazukha’, which refers to the widespread practice of planting paid-for puff pieces or hatchet jobs. Fake news had to come from somewhere, and there it was, all along.
Further, using such psychometric profiles, the simple creation of AI-driven ‘bots’ on social media can push selected messages into more common public view – with the added bonus of the Social Media Echo Chamber ensuring the activity is seen by the appropriate recipients. This also keeps much of the activity out of sight – because it only hits certain groups – and is the core reason the authorities were so late in responding to the threats during elections.
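As a purely conceptual illustration of that routing – every profile, threshold and message below is invented – consider how little logic it takes to ensure tailored content only ever reaches its chosen segment, and is invisible to everyone else:

```python
# Conceptual sketch of segment-based message routing - the logic that
# keeps tailored content inside an echo chamber. All profiles,
# thresholds and messages are hypothetical.
from dataclasses import dataclass

@dataclass
class Profile:
    user_id: str
    neuroticism: float   # 0-1 score from a psychometric model
    partisanship: float  # -1 (left) to +1 (right)

MESSAGES = {
    "fear":    "They are coming for what's yours...",
    "outrage": "You won't believe what they just did...",
}

def route(profiles):
    """Yield (user_id, message) pairs only for users whose predicted
    traits match a variant; everyone else sees nothing at all."""
    for p in profiles:
        if p.neuroticism > 0.7:
            yield p.user_id, MESSAGES["fear"]
        elif abs(p.partisanship) > 0.6:
            yield p.user_id, MESSAGES["outrage"]
        # Users outside the target segments never see the content -
        # which is why the wider public, and regulators, miss it.

audience = [Profile("u1", 0.9, 0.1), Profile("u2", 0.2, 0.8),
            Profile("u3", 0.3, 0.0)]
for user, msg in route(audience):
    print(user, "->", msg)
```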
It was only in March 2017, after it was too late, that the ranking Democratic member of the House Intelligence Committee, Adam Schiff, told CNN the committee was investigating whether the Donald Trump campaign coordinated with the Russians to spread “fake news” through trolls and “bots” online and sway the election.
“We are certainly investigating how the Russians used paid media trolls and bots, how they used their RT propaganda platform to disseminate information, to potentially raise stories, some real some not so real, to the top of people’s social media,” Schiff said.
In many ways, a little historical digging makes sense not only of the bots, but of a lot of the alternative outlets spewing conspiracy theories.
The Russian state was sponsoring ‘web brigades’ as far back as the 1990s, paying around 80 rubles a comment for people to spam the internet with false information – not to convince people, but to confuse them and create distrust in all media. It was also paying high-profile bloggers, which has made me think about sites like InfoWars and Prison Planet in an even darker light.
If you set this against Trump’s decrying of the mainstream media as Fake News while promoting certain outlets, it’s not hard to see the apple hasn’t fallen too far from the tree.
In 2013, Russian reporters investigated the St. Petersburg Internet Research Agency, which employed around four hundred people at the time, and found the agency covertly hired young Russians as “Internet operators” paid to write pro-Kremlin postings and comments. Twitter bot armies of over twenty thousand artificial accounts were also uncovered.
The group’s office in Olgino, a historical district of Saint Petersburg, gave rise to the now well-known terms “Trolls from Olgino” and “Olgino’s trolls”, both of which have become synonymous with the bots and human accounts which spread propaganda. Internet Research Limited, the company behind the Olgino operation, is considered to be linked to Yevgeniy Prigozhin, head of the holding company Concord and a “chef” working for Vladimir Putin.
Documents published by the broadly benign hackers of Anonymous International appear to show Concord is directly involved in tasking the trolls, and researchers have cited e-mail correspondence in which specific orders are given to the troll army and, in turn, reports are returned on completed missions. According to journalists, Concord organised banquets in the Kremlin and “cooperates” with the Russian Ministry of Defence.
There are also things called “Dark Posts”, predominantly used on Facebook, which are only ever seen by the intended recipients and which disappear straight afterwards.
According to reports as far back as 2015, these dark posts – known generally as unpublished posts – are not the same as targeted posts, but they do share common properties. For example, both allow you to promote posts to specific people.
While targeted posts only let you aim at an audience based on parameters such as gender, relationship status and education, dark posts also allow you to target by keyword. The key difference is that a dark post publishes without showing up on your own wall: only the target sees it.
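For the technically minded, here is a hedged sketch of what creating such a post looked like through Facebook’s Graph API of that era. The page ID, access token and targeting keys below are placeholders, and the exact parameter names may have changed since; the essential point is the `published=false` flag, which is what keeps the post off the page’s own timeline:

```python
# Hedged, illustrative sketch of an unpublished ("dark") page post via
# Facebook's Graph API as documented around 2015. PAGE_ID, the token
# and the targeting values are placeholders, not working credentials.
import requests

PAGE_ID = "1234567890"      # hypothetical page
ACCESS_TOKEN = "EAAB..."    # placeholder token

resp = requests.post(
    f"https://graph.facebook.com/v2.5/{PAGE_ID}/feed",
    data={
        "message": "Content only the targeted audience will ever see",
        "published": "false",  # the 'dark' part: never shown on the page
        # Narrow delivery to a specific audience (illustrative keys only)
        "feed_targeting": '{"age_min": 25, "interests": [6003139266461]}',
        "access_token": ACCESS_TOKEN,
    },
)
print(resp.json())  # returns the new post's id on success
```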
It’s not hard to see how this is deployed so effectively by groups using big data to narrow down exactly who they are aiming at. Even the most basic advertisers understand this. One I found wrote of dark posts that, “using text that highlights their interests, your community members will feel like you’re speaking directly to them.”
The thought of this amount of power in the wrong hands, well, it doesn’t take a lot of imagination to see what has happened.
The additional benefit of using dark posts, particularly in regulated election campaigning, is clear: no one will really know, so there’s no accountability. I suspect spending on dark posts is in no way declared and, consequently, the Electoral Commission is not only outgunned in terms of powers, but clueless.
Explaining bots while giving evidence to the Senate Intelligence Committee in March 2017, former FBI agent Clint Watts highlighted the reason bot accounts are so effective as a delivery mechanism, explaining: “whenever you’re trying to socially engineer them [voters] and convince them that the information is true, it’s much more simple because you see somebody and they look exactly like you, even down to the pictures.”
Watts went on to say the bot campaign came via a “very diffuse network” which often competes with its own efforts, “even amongst hackers, between different parts of Russian intelligence, and propagandists — all with general guidelines about what to pursue, but doing it at different times and paces and rhythms.” This makes a great deal more sense when set against the Concord investigation.
Artificial intelligence – a field in which Robert Mercer did pioneering work – drives these bots, which were originally thought to be primarily a Twitter issue. But Facebook has now recognised that the creation of such bots – false profiles – has infected its platform too, going as far as acknowledging how this impacted both the US presidential election and the UK’s Brexit referendum.
As of late spring 2017, Facebook directly attributes the growth of its false accounts problem to government interference.
“We recognize that, in today’s information environment, social media plays a sizable role in facilitating communications – not only in times of civic events, such as elections, but in everyday expression,” they said in their latest security report. “In some circumstances, however, we recognize that the risk of malicious actors seeking to use Facebook to mislead people or otherwise promote inauthentic communications can be higher,” the report added.
In advance of France’s election campaign, the company shut down around 30,000 suspicious accounts posting high volumes of material to large audiences, saying: “we have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people.”
So, don’t be surprised if the General Election doesn’t go the way you might be expecting on the basis of the polls.
Don’t have nightmares.