Strategies to help students determine whether a video has been altered include analyzing what motivates people to create fakes in the first place.
Last year, a viral video featured the opening credits of Full House with all of the sitcom actors’ faces replaced with actor Nick Offerman’s face. The video, which uses deep fake technology to create these face swaps, is funny, at least in part, because we are all in on the joke. As viewers, we know that Offerman didn’t play all of the roles on this popular TV show from more than three decades ago.
But what happens when we’re not all in on the joke? How do we know when videos that look plausible have been altered to deceive the viewer? More importantly, how do we teach today’s learners to separate fact from fiction in a world where seeing is no longer believing?
The term deep fake refers to a video that has been edited using software to replace the person in the original video with someone else in a way that makes the video look authentic. Initially, these videos primarily targeted celebrities. But as the technology has grown more advanced, deep fakes have become a focus of U.S. intelligence efforts to curb misinformation and disinformation. Even former President Obama loaned his likeness to support efforts to inform Americans of the growing threat.
Currently, the software and skill set required to create truly convincing deep fakes remain limited to a relatively small group of people. But as with all information, social media has sped up its democratization. Indeed, TikTok and Snapchat, two of the most popular social media platforms among young people, recently integrated deep fake technology that allows users to swap faces within each app, making it easy to create their own deep fakes. Additionally, YouTube has surpassed all other social media in popularity, making the medium of video a large and ubiquitous part of our information diets.
According to KQED’s Above The Noise, there remain some digital “tells” that offer clues to a video’s authenticity.
It’s important to remember that these clues alone do not mean that a video has been altered. What’s more, as deep fake technology becomes more advanced, and access to those advances becomes more readily available, savvy digital detectives will have to look for other clues to determine whether a video is credible.
Whether working with a group of fifth graders or a room full of their teachers and librarians, we often nudge learners to ask themselves whether a video passes the “WHOA!” test. It’s simple: if a post, article, or video makes you say “WHOA!” because it’s upsetting, outrageous, or too good to be true, that feeling indicates a need to investigate further.
Extreme emotional reactions to the news and other information should be a signal to RESIST the urge to share—although our instinct is often to do the exact opposite. When we let emotion take the wheel, we make poor choices about passing on information.
Videos are especially good at triggering emotional responses, in part because we often view video as irrefutable evidence of truth. Video also has elements that aren’t present in text or still images: movement, vocal inflection, and music all evoke meaning, emotion, and bias in our brains. Even those who lean toward healthy skepticism when consuming information online can be tempted to let down their guard when viewing a video. We have been trained to believe that seeing is believing.
Deep fake technology, coupled with a new information landscape in which trained and citizen journalists alike compete to be first to produce the next viral account, has made the use of emotional triggers to gain clicks commonplace. These realities should prompt changes to media literacy instruction, including support for students learning to recognize the emotional triggers that cause us to click.
One effective strategy for helping learners recognize credibility issues is to have them consider the motives of a video’s creators. Here are a couple of profiles of potential suspects for a virtual “lineup” that can help fledgling digital detectives connect a video’s clues with a possible motivation.
Suspect 1: the troll
Suspect 2: the click-chaser
Jennifer LaGarde (librarygirl.net, @jenniferlagarde) and Darren Hudgins (about.me/darren_hudgins) are co-authors of Fact VS Fiction: Teaching Critical Thinking In the Age of Fake News (ISTE 2018; #FactVSFiction). LaGarde's passions include leveraging technology to help students develop authentic reading lives and meeting the needs of students living in poverty. Hudgins is the CEO of Think | Do | Thrive. He works with educators, school leaders, districts, and school organizations to build experiences promoting thought, play, and innovation.