Your Soul for Data: Fair Trade?



In an increasingly data-driven world, are we just walking data sources for the benefit of giant multinational corporations? Every single minute, there are 3.8 million search queries on Google, 4.5 million videos watched on YouTube, almost $1 million spent online, and 41.6 million messages sent via WhatsApp and Facebook Messenger — and these are a fraction of the interactions that currently happen online.

As we go about our daily lives — sharing our personal experiences on social media, asking Siri to set our alarms, and counting how many steps we walk on our wearables — we are essentially becoming walking data points, where our information is collected and analyzed to predict behavior. Where will it end?


Professor Turgay Celik, Director of the National e-Science Postgraduate Teaching and Training Platform (NEPTTP) and the Wits Institute of Data Science (WIDS), predicts that in the next 10 to 15 years, humans will be:

Wits Biomedical engineers have already connected a human brain to the Internet in real time. This Brainternet project essentially turned the brain into an Internet of Things node on the World Wide Web.

In 2019, the same team connected two computers through the human brain and transmitted words like “hello” and “apple” passively, without the user being aware that a message was present. Celik asks:

Android rights and the Big Other

Dr. Christopher Wareham, a Senior Lecturer in the Steve Biko Centre for Bioethics at Wits, argues that we need to think about the implications of such technological developments from the perspective of artificial agents. These “digital beings” will potentially have lives — and rights — of their own, adding:

The developments in machine learning and artificial intelligence (AI) already significantly affect how we live our lives today. American academic Shoshana Zuboff coined the term “surveillance capitalism” in 2014. Surveillance capitalism depends on “the global architecture of computer mediation… [which] produces a distributed and largely uncontested new expression of power.” Zuboff christens this the “Big Other”.

Currently, the “Big Other” includes Facebook, Google, Microsoft, and Amazon.

Surveillance capitalism

Writing in The Guardian, Zuboff explains:

Surveillance capitalism is a “real issue,” says Professor Brian Armstrong, Chair in Digital Business at the Wits Business School. “In my view, a very big concern is around the whole idea of social scoring.” This refers to the practice of developing a social rating system to establish if a person is a fit and proper member of society, in terms of their “social score.”

In China, private companies are already operating social credit systems, as are local governments in pilot projects. The plan is a nationwide system that scores each citizen's behavior, attaching rewards and penalties to specific actions. For example, if you donate to charity, you score points, but you lose points for traffic violations.
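The rewards-and-penalties mechanism described above can be sketched as a simple rule table. This is only an illustration: the action names and point values are invented here, not taken from any actual social credit system.

```python
# Illustrative sketch of a points-based social scoring rule set,
# loosely modeled on the rewards/penalties described above.
# All action names and point values are hypothetical.

SCORE_RULES = {
    "charity_donation": +5,    # rewarded action
    "traffic_violation": -10,  # penalized action
}

def apply_actions(base_score: int, actions: list[str]) -> int:
    """Apply a sequence of scored actions to a citizen's base score."""
    return base_score + sum(SCORE_RULES.get(a, 0) for a in actions)

print(apply_actions(100, ["charity_donation", "traffic_violation"]))  # 95
```

Even this toy version makes the policy question concrete: whoever writes the rule table decides which behaviors count, and by how much.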

But one need not look as far as China for Big Brother-style surveillance. In Johannesburg, thousands of surveillance cameras already monitor motorists and pedestrians 24/7. In June, the Financial Mail reported that Vumacam — a subsidiary of the Internet fiber company Vumatel — had installed more than 1,200 surveillance cameras to combat crime. By 2020, the number of cameras will increase to over 10,000.

Local security companies can access the Vumacam live feed and, as the artificial intelligence system learns what a typical day in a neighborhood looks like, it will flag behavior that is out of the ordinary for that area.
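"Learning what a typical day looks like" can be sketched as a baseline-and-deviation check: record normal activity levels per hour, then flag hours that deviate sharply. A production system like Vumacam's would use far richer video analytics; this minimal, assumption-laden sketch only shows the statistical idea.

```python
# Illustrative sketch: learn a per-hour baseline of event counts from
# past days, then flag hours whose activity deviates strongly from it.
import statistics

def build_baseline(history: list[list[int]]) -> list[tuple[float, float]]:
    """history: one list of 24 hourly event counts per past day.
    Returns a (mean, stdev) baseline for each hour of the day."""
    return [
        (statistics.mean(counts), statistics.pstdev(counts))
        for counts in zip(*history)  # regroup by hour across days
    ]

def flag_anomalies(today: list[int], baseline, threshold: float = 3.0):
    """Return the hours whose count deviates more than `threshold`
    standard deviations from the learned baseline."""
    flagged = []
    for hour, (count, (mean, stdev)) in enumerate(zip(today, baseline)):
        if stdev > 0 and abs(count - mean) / stdev > threshold:
            flagged.append(hour)
    return flagged
```

With quiet nights in the learned history, a sudden burst of night-time activity would be flagged for a human operator; what counts as "out of the ordinary" is entirely a product of the neighborhood data the system was trained on.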

Dr. Helen Robertson, who lectures Data Privacy and Ethics in the School of Computer Science and Applied Mathematics, refers to the battle between our right to safety and our right to privacy that such forms of surveillance bring to the fore, saying:

Safety vs privacy

Our views on privacy have not only been impacted by safety concerns. The pervasiveness of social media has also played a role. Robertson says that the average person is willing to share a lot more about their private lives today compared to a few decades ago, adding that these evolving views are not necessarily problematic:

Celik believes that privacy will become personalized, with individuals being able to define how much privacy they want for themselves. Our autonomy is another area influenced by the online world.

Wareham argues that a lot of micro-targeted advertising and political messaging is designed specifically to degrade our autonomy, adding:

The question then becomes who decides what you read, listen to, or watch, and who makes the decisions on what content is "appropriate" for a specific digital platform, and what is not.

Toward tech that teaches

Data-driven advancements are, however, not all doom and gloom. Armstrong argues:

He adds that education is one area in which South Africa could benefit immensely:

In China, AI-enabled education has already blossomed with tens of millions of students using some form of AI to learn. This includes tutoring platforms where algorithms curate lessons and adapt the curriculum based on an individual’s understanding of specific concepts, reports MIT Technology Review.
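The curriculum-adaptation idea reported above — steering each student toward the concepts they understand least — can be sketched in a few lines. The concept names and mastery scores here are invented for illustration, not drawn from any actual tutoring platform.

```python
# Hypothetical sketch of adaptive lesson selection: serve the next
# lesson on the concept with the lowest mastery score (0.0 to 1.0).
# Concept names and scores are invented for illustration.

def next_lesson(mastery: dict[str, float]) -> str:
    """Return the concept the student understands least."""
    return min(mastery, key=mastery.get)

print(next_lesson({"fractions": 0.9, "decimals": 0.4, "ratios": 0.7}))
# → decimals
```

Real platforms layer far more on top — spaced repetition, difficulty calibration, engagement signals — but the core loop is this: measure understanding per concept, then route attention to the weakest one.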

Protecting personal data

Staggering amounts of data are generated daily, but who owns all this data? Robertson points out that there is currently no consensus among ethicists about this thorny issue.


Some argue that the data subject owns the data. Others say that the data processor, who uses his or her resources to create and analyze a dataset, has ownership rights. Still others argue that in certain cases, such as medical research that benefits society, the public's need for medical treatment and breakthroughs means the data belongs to the public.

These different claims to ownership “add a lot of ethical greyness,” Robertson said, adding:

In the near future, South Africans will have considerable legal power regarding the protection of their data. The Protection of Personal Information Act (POPIA) aims to protect the right to privacy, while enabling the social and economic benefits that result from the free flow of information. POPIA stipulates conditions under which personal information must be processed lawfully, although there are exceptions.

These conditions include that personal information “must be collected for a specific, explicitly defined, and lawful purpose.” Further processing of personal information can only take place if it is in line with the purpose for which it was originally collected. Most sections of the Act have not yet commenced. The announcement of a commencement date is expected before the end of 2019, after which companies will have one year to comply.

Verine Etsebeth, a Senior Lecturer in the Wits School of Law who specializes in data protection and information security law, says the POPI Act is long overdue:

Digital disempowerment

Despite the excitement over technology’s potential to solve some of our most complex problems, many South Africans are still excluded from these advances. Only 40 percent of Africa’s population has access to the Internet compared to 61 percent for the rest of the world. In South Africa, Internet penetration currently sits at 56 percent. Armstrong said:

Provided by: Dr. Retha Langa, Wits University [Note: Materials may be edited for content and length.]


