Keeping young people safe in an age of exponential digital technology

Updated: Jul 25

The power we are starting to see from technological innovation has serious implications for the safety of our children and students. We'll look at a few examples of where things are heading below, but the challenges we face in supporting students and keeping them safe are already significant: online predators (warning: an unsettling read), sexting, bullying, porn, gossip; the list goes on.


The examples below will (hopefully) be thought-provoking and may serve as a starting point for discussion. It does not take much stretching of the imagination to see how these emerging digital technologies, or versions of them, might be misused.

Realistic fake human images are being created by something called a Generative Adversarial Network, which produces images so lifelike they can't be distinguished from photos of real people. The research behind it is a heavy read; here's an example:


"The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture."


Take a look at https://thispersondoesnotexist.com/ to see what the result of all that complicated language looks like. It's becoming harder to tell what's real and what's not.
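
For the technically curious, here is a rough sketch of the adversarial idea behind this kind of network, written in PyTorch. It works on a toy one-dimensional distribution rather than faces, and it is purely illustrative (it is not the StyleGAN system quoted above): one network, the generator, learns to produce fakes, while a second network, the discriminator, learns to catch them, and each improves by competing against the other.

    import torch
    import torch.nn as nn

    # Generator: turns random noise into a fake 'sample'.
    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    # Discriminator: guesses whether a sample is real (1) or fake (0).
    discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(2000):
        # 'Real' data: samples from a simple target distribution (mean 4, std 1.5).
        real = torch.randn(64, 1) * 1.5 + 4.0
        fake = generator(torch.randn(64, 8))

        # Train the discriminator to tell real from fake.
        d_opt.zero_grad()
        d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
        d_loss.backward()
        d_opt.step()

        # Train the generator to fool the discriminator.
        g_opt.zero_grad()
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_loss.backward()
        g_opt.step()

    # After training, generated samples should cluster near the target mean (~4).
    print(generator(torch.randn(1000, 8)).mean().item())

Scale this same tug-of-war up to millions of photographs and far larger networks, and you get faces like the ones on the site above.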


This technology is starting to extend to videos in which one person's face is superimposed onto another person's body. These are called 'deepfakes', and the technology is advancing so rapidly that it may soon be impossible to distinguish real videos from fake ones. Have a look at this Jennifer Lawrence / Steve Buscemi deepfake, and then think of the implications for pornographic videos and bullying among students.

We're not only getting expert at generating digital images; we're also getting extremely good at identifying them. It's another complicated read, but briefly, technology called a neural network looks at the information contained in images and identifies patterns. After some initial training, a neural network can teach itself and learn more about images and objects the more it 'sees', i.e. the more data it has to work with. With billions of images now online in the public domain to learn from, these networks have become very good at identifying images, including human faces.
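
To make that a little more concrete, here is a minimal sketch of image recognition in practice, using PyTorch's torchvision library and a small pre-trained network (ResNet-18, trained on the ImageNet collection of roughly 1.2 million labelled photos). The file name 'photo.jpg' is just a placeholder, and real services use far larger models trained on far more data, but the principle is the same: the network turns a photo into probabilities over the objects it has learned to recognise.

    import torch
    from PIL import Image
    from torchvision.models import resnet18, ResNet18_Weights

    # A small network already trained on ~1.2 million labelled photos.
    weights = ResNet18_Weights.DEFAULT
    model = resnet18(weights=weights).eval()
    preprocess = weights.transforms()  # resize, crop and normalise as the model expects

    # 'photo.jpg' is a placeholder for any image file on disk.
    image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # add a batch dimension

    with torch.no_grad():
        probabilities = model(image).softmax(dim=1)[0]

    # Print the label the network is most confident about, with its confidence.
    best = probabilities.argmax().item()
    print(weights.meta["categories"][best], f"({probabilities[best].item():.1%})")

The same basic approach, trained on labelled faces instead of everyday objects, sits behind the facial recognition systems discussed later in this post.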

Data privacy: get used to hearing this term, because it's going to become even more important as more and more of our lives are lived and stored online. Data is the new global currency, and companies are making billions by storing and mining massive amounts of it. Data is already being misused to influence our elections, and Facebook has allowed malicious third-party apps to gather and share personal data, operating with little transparency or oversight of how the data was (and still is) being stored and used.


If our data is being treated carelessly by large companies on the one hand, it's being actively hacked on the other. As more and more government and private services move online, the risk of hackers accessing that data for identity theft and other nefarious purposes increases. Estonia and South Korea have both faced challenges with their online ID systems: Estonia's was found to have serious vulnerabilities, and South Korea has already had the personal details and ID card numbers of 80% of its population stolen. In Singapore, the same thing happened to a major healthcare provider that serves millions of people.


So can we just refuse to participate? Maybe, maybe not. In this example, a man was fined when he protested against being filmed by facial recognition cameras during a trial in the UK. The police claimed the fine was for swearing, not for refusing to be filmed, but people do appear to be getting stopped regardless.

Data combined with powerful algorithms is also being used to control, hide or expose people. In Saudi Arabia, data is used to prevent women from travelling and to enforce traditional guardianship laws, and in China a person can receive an alert when they are in close proximity to someone in debt. In the interests of keeping people safe, it's essential that people understand how data and digital technology can be manipulated and misused, so they can recognise when they may be exposed and take action to protect themselves.

A possible solution may be found here: reducing harm on social media and online through a duty of care. Under a duty of care, any developer of a digital product must take reasonable care to avoid harming the people or things affected by it. A duty of care can describe the types of behaviour or effects of technology to avoid, and it can be backed by law. Duties of care work well in workplaces and public spaces, so why not online as well?


In this article's example, there is a series of 'key harms' to be avoided, including:


1. Harmful threats to people or things, such as pain, injury or damage.

2. Harms to national security such as violent extremism.

3. Emotional harm, for example encouraging others to commit suicide.

4. Harm to young people, such as exposure to bullying and grooming.

5. Harm to justice and democracy, such as undermining the integrity of the criminal trial process.


Anyone who is exposed to harm through a digital product would be able to sue the developer for failing to meet its duty of care, but only if the product was shown to fail at a systemic level. This benefits the market as a whole, because the company, rather than the people it harms, bears the cost of its actions.

The challenges around the misuse of data and AI are growing, and the risks are huge, not only for young people but for schools as well. Think about the apps schools use for digital portfolios: who owns the data that's stored, and how is it used? An app developer's goals for its product and data don't always align with those of schools and families, and that's a problem. How can we keep our young people safe in an era of Generative Adversarial Networks and deepfakes? What actions can individuals take to protect their data, and how can we hold companies accountable for how their products are used?


Thank you for joining us this week. We welcome any and all feedback and comments, and please contact me with any questions or suggestions you have.


Sean

