Teaching Kids About Deepfake Technologies

Deepfakes can deceive children, even leading them to believe falsehoods as extreme as a flat Earth. They can also cause unintended psychological harm to children.
How Should a Child Choose What to Believe?
Many parents and educators are concerned about the dangers of Deepfake videos. How do we educate children about Deepfakes? Given children’s proclivity for social media use, could they know more about manipulated content than we do? This article provides insights into Deepfakes that can help you engage children in meaningful conversations while also protecting their online safety.
What Exactly Are Deepfake Videos?
Deepfakes are videos that have been engineered using Artificial Intelligence and machine learning techniques that superimpose existing images and video onto other pieces of content. Experts predict that technological advances will enable the creation of more convincing and longer pieces of video footage in the future. Deepfake manipulators could, for example, create “anti-footage” depicting the inverse of what actually occurred, such as the moon landing or the outcome of a war.
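To make “superimposition” concrete, here is a minimal, deliberately crude sketch in Python with OpenCV: it detects a face in one photo and pastes a different image over that region. Real Deepfakes are produced with generative neural networks (autoencoders and GANs) that blend expression, skin tone, and lighting, not simple pasting, and the file names below are hypothetical placeholders.

```python
# Crude illustration of "superimposition": paste one image over a
# detected face region in another photo. Real Deepfakes use generative
# neural networks, not pasting; this only shows the basic idea of
# replacing one face region with other content.
import cv2

# Hypothetical file names, used for illustration only.
base = cv2.imread("original_photo.jpg")
overlay = cv2.imread("replacement_face.jpg")

# OpenCV ships a pretrained Haar-cascade face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
gray = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Resize the replacement image to the detected region and paste it in.
    base[y:y + h, x:x + w] = cv2.resize(overlay, (w, h))

cv2.imwrite("superimposed.jpg", base)
```

The hard pasting here produces obvious seams; the leap that makes Deepfakes dangerous is that learned models erase those seams, which is why the telltale signs discussed later in this article are often subtle.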
Deepfake videos first appeared on the internet in 2017, and by the start of 2019 there were over 7,900 Deepfake videos available. Nine months later, the figure had nearly doubled to 14,678.
At the moment, the term “Deepfake” is something of a catch-all, encompassing both deceptive content and Hollywood’s benign use of artificially created content. On occasion, the term is also used incorrectly to describe edited video content in which Artificial Intelligence and machine learning tools were not used.
What Makes Deepfakes a Threat?
Deepfake technologies have the potential to distort reality. When used to bully, defame, or victimize children (or adults), they can harm a child’s mental health and well-being.
Who Makes Deepfakes?
While the internet occasionally produces amusing and benign Deepfakes, the answer is largely “people with malicious intent.” Children’s use of social media sites, where they post photos of themselves, may make them more vulnerable to bad actors interested in developing Deepfakes. Every day, roughly 300 million photos are uploaded to Facebook, and about 46,740 posts are shared on Instagram every minute. The images posted on these sites can serve as a content library for those looking to create phony video content. Once on the internet, the content can be used for almost any purpose, good or bad.
What Should We Teach Our Children About Deepfakes?
When deciding whether to share photos and videos of themselves on social media sites, young people should exercise caution. The larger the repository of photos and videos containing a person’s image, the easier it is for a bad actor to extract material and create harmful content. Furthermore, young people should keep track of where they post personal information. Well-known websites usually have privacy policies in place, but smaller social media platforms may have questionable policies or none at all.
How Can Children Spot Deepfakes?
To date, many Deepfake videos contain telltale signs of manipulation (a toy automated check for one of these signs follows the list). For example:
- The audio and video speeds may not be perfectly aligned.
- The shadowing may appear “off.”
- Videos may be pixelated.
- The ideas expressed may contradict what is known about a specific person or topic.
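For the pixelation sign in particular, the idea can be made concrete with a small heuristic: score each frame’s sharpness using the variance of the Laplacian, a standard blur measure, and flag unusually soft frames. This is only a minimal sketch under strong assumptions; the threshold is arbitrary, the file name is a placeholder, and real Deepfake detectors rely on trained models rather than a single sharpness score.

```python
# Toy heuristic for one telltale sign: pixelated or overly smooth frames.
# Scores each frame's sharpness with the variance of the Laplacian;
# unusually low values can hint at blurry or blended regions.
import cv2

BLUR_THRESHOLD = 100.0  # assumed cutoff; tune for your footage


def flag_soft_frames(video_path: str) -> list[int]:
    """Return indices of frames whose sharpness falls below the threshold."""
    capture = cv2.VideoCapture(video_path)
    flagged, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        if sharpness < BLUR_THRESHOLD:
            flagged.append(index)
        index += 1
    capture.release()
    return flagged


print(flag_soft_frames("suspect_clip.mp4"))  # hypothetical file name
```

A low score does not prove manipulation; heavily compressed or out-of-focus video scores low too, which is exactly why the discussion questions below matter more than any single technical test.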
These questions can be included in discussions to address media literacy and help kids think deeply about Deepfake content:
- Why would someone want to make “fake” videos?
- Why would someone believe that “fake” or manipulated content could help them achieve their goals?
- Why is it sometimes difficult to distinguish between true and false information?
- Is there a distinction between a Deepfake video of a politician and identity theft?
- How are Deepfake videos able to look so real?
The specific questions used in any discussion, as well as the nature of the discussion, should, of course, vary depending on the ages and interests of the children with whom you’re speaking.
Conclusion
Individuals are not solely responsible for detecting Deepfakes, nor should they be. Deepfake detection technologies exist, and some laws governing Deepfakes are on the books, but both are still maturing. As Deepfake technologies advance, both will need continuous improvement.
Informing children about the risks of always trusting their eyes when it comes to online content can help bridge the gap between laws, technological advancements, and private-enterprise policies. Media literacy is essential in the digital age, especially as online learning expands and children come to rely on internet-retrieved information more than millennials or Generation Zers ever did.