
This is how you can leverage social media to uncover wrongdoing

Investigative platform Exposing the Invisible has released a pack of resources for citizen journalists and activists to learn how to use social media information for their investigations.

Exposing the Invisible
5 May 2017

Leveraging information posted to social media platforms is an essential part of an investigator's toolbox when uncovering wrongdoing. At Exposing the Invisible, an investigative platform, we recently released a series of resources on finding, collecting and using information from social media platforms.

Exposing the Invisible has been operating for the past three years with the aim of exploring and investigating the data traces left behind by those in positions of power suspected of wrongdoing. This approach to finding and leveraging publicly available information is mirrored by the wider organisation, Tactical Tech, which works with activists and interested individuals to minimise their digital shadow and provides practical advice on tools and tactics to increase their digital and physical security. The organisation takes various approaches to this work, including producing guides, resources, documentaries, exhibitions, trainings and interactive art spaces. By publishing all content under Creative Commons licences, we have had some of our resources translated into 16 languages, and we promote non-commercial tools over their commercial, closed-source equivalents.

These resources on social media are essential not only for exposing wrongdoing but also for highlighting the ease with which individuals can themselves be exposed, something we address in each resource. Exposing the Invisible sets out to demonstrate not only how these techniques work against, for example, corrupt officials, but also how the same techniques are used against activists and campaigners themselves.

The methods, tools and techniques featured below are designed to be replicated by anyone, from activists and campaigners to citizen journalists, artists and armchair investigators, with questions such as: Who is at the head of a company? Has your government official been going on too many expensive holidays relative to their salary? Is your government commissioning an army of bots to distort public opinion? Can you monitor the movement of weapons through YouTube videos? These are some of the questions that those featured on Exposing the Invisible have looked at, but the techniques and tools can be used by anyone who has a question to which they seek an answer.

Pushing boundaries

With all technologies, there are those practitioners who take the services or hardware offered, and push the boundaries beyond what was originally intended, stretching the original purpose and sometimes re-appropriating it so that it morphs into something else entirely.

Take, for example, the case of Marc Owen Jones, who successfully lobbied Twitter to take down 1,800 'fake' profiles that were showing spam-like activity in the Persian Gulf. Many of these accounts were promoting content that lionised the Saudi government or Saudi foreign policy. Twitter, once heralded for enabling social movements, is being used in this context to silence real conversations, dilute marginalised voices and distort public opinion. Jones describes for us how he identified these 'fake' accounts, collected their tweets, analysed the patterns between them to pinpoint the suspicious accounts and, lastly, visualised the data to highlight the polluting nature of these fake profiles. Through a simple word-cloud, he presented a visual analysis of the Twitter bios of accounts spreading sectarian propaganda: visualising the bios of the fake accounts showed which words they used to describe themselves and their views, and which personal characteristics they adopted as bots to convince others of their sectarian values. In another visualisation, Jones shows the difference between bots and genuine accounts by demonstrating the non-interactive nature of bots and their repetitive content on a targeted hashtag.
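Jones's exact pipeline is not published here, but the basic idea behind such a word-cloud, counting which words dominate the bios of suspected bot accounts, can be sketched in a few lines of Python. The sample bios and stopword list below are invented for illustration:

```python
from collections import Counter
import re

# Hypothetical bios of suspected bot accounts; the real data came from
# Twitter profiles Jones identified as showing spam-like behaviour.
bios = [
    "Proud patriot. Defender of the homeland. #loyal",
    "Patriot and proud citizen, homeland first",
    "Defender of truth, proud patriot",
]

STOPWORDS = {"of", "the", "and", "a", "first"}

def top_bio_words(bios, n=3):
    """Count the most frequent words across account bios --
    the raw material behind a word-cloud visualisation."""
    words = []
    for bio in bios:
        for word in re.findall(r"[a-z#]+", bio.lower()):
            if word not in STOPWORDS:
                words.append(word)
    return Counter(words).most_common(n)

print(top_bio_words(bios))
```

With real data, the resulting frequency table is what a word-cloud generator turns into a picture: the more often bots repeat a self-description, the larger it appears.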

Continuing that train of thought, the how-to resource Disclosures of a #Hashtag investigates alternative uses for Twitter's hashtag. The hashtag is an efficient feature for filing, pooling and finding information, as well as for promoting events and relaying information in real time, from conferences to demonstrations. We aim to showcase the possibilities of using hashtags for investigations, and also to raise awareness of possible threats so users can make informed choices about how to use Twitter. We focused on two security conferences in early August 2016. By extracting a sample of tweets posted with the conference hashtags, we analysed the accounts of the users who attended, mapping their locations and networks using Twitter's API, the mapping tool Carto and the visualisation platform Gephi. From the 550 tweets collected we could infer which representatives from which companies were present at the conference, who was following and talking to whom on Twitter, and which countries the participants were likely based in. The focus of this investigation was to collect the data; the next step is to filter and analyse it to better understand this network of individuals.
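The investigation itself relied on Twitter's API, Carto and Gephi. As a rough illustration of one intermediate step, turning collected tweets into a directed edge list that Gephi's CSV importer can read, here is a minimal Python sketch; the hashtag, usernames and tweet texts are placeholders invented for the example:

```python
import csv
import io
import re

# Invented sample tweets; in the real investigation, a sample of ~550
# tweets was pulled from Twitter's API for the conference hashtags.
tweets = [
    {"user": "alice", "text": "Great keynote at #seccon with @bob"},
    {"user": "bob", "text": "Agreed @alice, see you at the workshop #seccon"},
    {"user": "carol", "text": "Slides from my #seccon talk, thanks @alice @bob"},
]

def mention_edges(tweets):
    """Turn tweets into (source, target) pairs: who mentions whom.
    This is the edge list behind a network visualisation."""
    edges = []
    for t in tweets:
        for target in re.findall(r"@(\w+)", t["text"]):
            edges.append((t["user"], target))
    return edges

# Write a Gephi-compatible CSV edge list with Source/Target columns.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Source", "Target"])
writer.writerows(mention_edges(tweets))
print(buf.getvalue())
```

Loaded into Gephi, an edge list like this is what reveals clusters of participants who follow and talk to one another.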

What is considered closed is, in fact, open

Often, social media platforms are considered by their users to be closed spaces in which everything they share is accessible only to users they have granted viewing permission. For example, the project IC Watch found and copied 409,820 LinkedIn resumes of people working in the US intelligence sector and placed them in a searchable database. This database can then be used to find information about the intelligence community, surveillance programmes and other information that is presumed private but has in fact been posted publicly via the professional networking platform LinkedIn. We interviewed the project's founder, MC McGrath, earlier this year.

This investigation is one of the most controversial we feature on Exposing the Invisible as it goes to the outer reaches of what is considered 'public data' and raises various ethical considerations. McGrath told us: “All the data that we use is public data. Some people have sent us emails saying, ‘Posting this personal information online threatens my family’, but really mostly we're using data about people in their professional context. I stick to the people themselves who have outed themselves online already. There's even more information you can get publicly. You can get most personal data publicly if people post about it freely enough online.”

Joana Moll and Cédric Parizot took a similar approach when they gained access to a Facebook group and monitored conversations between people who were surveilling the US/Mexico border as a hobby. The project draws its content from a public-private partnership, launched in 2008, to deploy participatory surveillance. The initiative consisted of a 24/7 online platform called RedServant and a network of 200 cameras and sensors located in strategic areas along the US/Mexico border, which allowed users to report anonymously if they noticed any suspicious activity on the border. Since its launch in 2008, RedServant had 203,633 volunteer users, whose reports resulted in 5,331 interdictions and overall "represents almost one million hours of free labour for the Sheriff." The programme ended in 2012 due to lack of financial support.

The Virtual Watchers, the research project set up by Moll and Parizot, focuses on the exchanges that occurred within the RedServant Facebook group between 2010 and 2013. Moll joined the Facebook group of those participating in the initiative and recorded all the interactions held in it since 2010. She tells us that “the watchers were considerably more identifiable and trackable than any of the individuals that they were watching over. During the course of our investigation inside Facebook, we didn't need to ask for any “friend requests” in order to access personal information kept within most of the profiles of the members of the RedServant group, as most of their profiles were completely unprotected.”

We also wanted to delve into website ownership. You might expect things like payment contact details or registration information containing names, addresses and phone numbers to be well hidden or protected, but they are freely available through many online and offline tools. Corporate structures are often confusing or intentionally obfuscated, which can make it difficult to understand who might own a particular company, how long it might have existed or where it might be based. Research into website registrant details is often a good first step towards an indication of who owns a particular service or company and where they are based. For those interested in fake companies set up by criminal organisations, WHOIS searches can also be used to uncover fake online identities and company representation. In Who is WHOIS? we reviewed existing tools for finding out who has registered a particular website, and how this can be useful knowledge to gain.
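WHOIS records come back as plain "Key: Value" text, which makes them easy to process once retrieved, for example with the `whois` command-line tool or a query to a registry server on port 43. The sketch below parses a fabricated response into a dictionary; the domain name and registrant details are entirely invented:

```python
# A fabricated WHOIS response for illustration; real records come from
# a registry query or one of the lookup tools reviewed in Who is WHOIS?.
raw = """\
Domain Name: EXAMPLE-SHELL-CO.COM
Registrant Name: J. Doe
Registrant Organization: Example Shell Co Ltd
Registrant Country: PA
Creation Date: 2014-03-01T00:00:00Z
"""

def parse_whois(text):
    """Split the 'Key: Value' lines of a WHOIS record into a dict,
    so registrant fields can be compared across many domains."""
    record = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            record[key.strip()] = value.strip()
    return record

record = parse_whois(raw)
print(record["Registrant Organization"], "/", record["Registrant Country"])
```

Parsing records into a uniform structure like this is what lets an investigator spot the same registrant name or address recurring across a cluster of suspicious domains.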

The consequences of living in quantified society

With each of these case studies and how-tos comes a cautionary tale. Where investigators can look into hashtags, so can law enforcement or any other third party. Where an interested individual can look up who owns a particular website, so can a state or a competitor. Not only does living in a data society affect our autonomy; it also means that incorrect information can spread like wildfire.

In times when news moves faster than ever in an increasingly polarised world, where the 'viral' rules over facts (state propaganda, news blown out of proportion, fabricated statistics, misplaced images, wrongly attributed videos and twisted facts masquerading as studies), accuracy is key for many facing life-or-death situations.

In our piece Busting the Viral we interviewed three organisations working on fact-checking: Africa Check (South Africa), Verify Syria and Stop Fake (Ukraine). The three highlighted the challenges faced within the fact-checking sphere: in some cases a lack of sufficient funding, for some groups a lack of access to tools and technical resources, and for all of them the pressure of time, a race against the viral spread the unchecked original has already achieved.

In this race against the viral, the three organisations run active websites where the public can fact-check news and submit leads to be investigated. One of the many cases Africa Check has worked on is one in which two people died and 20 were hospitalised because of unfounded yet widely spread information about a supposed Ebola cure, circulated via BlackBerry Messenger (BBM) in Nigeria. Among other tasks, Verify Syria keeps an eye on images used in news pieces as visual evidence. In one instance, it revealed that an image presented as evidence of Turkish soldiers helping Syrian women and children in Jarabulus, a town in Syria, was in fact taken at an event in Turkey two years earlier. Stop Fake has accumulated a list of websites known for spreading fake news and monitors them regularly. This shifts the focus from random fabricated links spread here and there to a premeditated plan and a set structure for disseminating fake and misleading information.

It is estimated that 300 hours of video are uploaded to YouTube every minute. While we live in public and are surrounded by data, that data also has a certain fragility. The Syrian Archive project underlines how fragile it is and how easily it can disappear. Hadi Al Khatib, the founder of The Syrian Archive, became aware that thousands of hours of YouTube videos depicting violations by all parties to the Syrian conflict were not only available on YouTube but also being taken offline, both by YouTube for violating the platform's terms of service and by users who feared possible retaliation. The archive takes these videos from YouTube and uploads and categorises them on an open-source platform. The project preserves this vital content for future investigations and legal proceedings by storing, indexing and analysing it, as well as preserving the authorship of the footage, while putting an enormous effort into the verification and classification of these videos.
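The Syrian Archive's actual tooling is far more elaborate than anything shown here, but one basic building block of this kind of preservation, recording a cryptographic fingerprint of each file so that later copies can be verified as unaltered, can be sketched as follows (the video bytes are a stand-in):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of a file's bytes. If a later copy produces the
    same digest, the footage has not been altered since archiving."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for the contents of a downloaded video file.
original = b"\x00\x01fake-video-bytes\x02"
archived_digest = fingerprint(original)

# Verification later: a byte-identical copy matches the stored digest,
# while a file altered by even a single byte does not.
print(fingerprint(original) == archived_digest)            # True
print(fingerprint(original + b"\x00") == archived_digest)  # False
```

Storing such digests alongside the footage is one common way archives make content defensible as evidence: any tampering after collection becomes detectable.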

We need to move on from the debate about whether social media is good or evil. Instead, we should accept that these platforms and apps are tools, to which users and owners attribute certain values based on their intentions and their actual use of them. A hashtag is a great tool both for advocating a cause and for hijacking a debate. It can be as helpful for networking and outreach as it is for surveillance and social mapping. Even social mapping can be a beneficial tool, used to map questionable networks of the status quo or to expose activists. In the end, it all boils down to understanding the platforms and tools at hand. As exciting as it often is to explore their possibilities, it is important to look at the threats they might bring, to make not only an informed choice but also the best of the technology available to us.

