We learned long ago that technology is a two-way street. History has shown that the greatest technological innovation of each era always has a dark side. Game-changing inventions such as the automobile, the personal computer, the smartphone and the internet proved this to be true.
Today, our networked culture faces a serious new challenge—fake news or disinformation. Yes, the era of Walter Cronkite is definitely over. Today, you cannot believe what you see or read.
We are now living in the “Wild West” of disinformation, where anyone can say anything, no matter how outrageous or untrue. The saddest part is too many people believe this stuff. That was proven at the U.S. Capitol on Jan. 6, 2021.
When Tim Berners-Lee invented the World Wide Web in 1989, his invention was heralded as an opportunity for anyone on earth to have a global library at their fingertips. It did that, but—as with most technology—it had another side. In addition to changing our culture, the World Wide Web became a vast wasteland of ads and a conduit to disinformation representing the worst of human nature.
Original or Fake?
Many think it is too late for the internet. The cat is out of the bag, so to speak, and we’ll never go back to a more united, cohesive society. However, some scientists are now trying to fix what the internet wrought. It won’t be easy now that anyone can acquire cheap tools to build deepfake videos and photos.
One international project trying to fix the fakery problem—called “Dissimilar”—is at the Open University of Catalonia (UOC) in Barcelona, Spain. It also includes academics from the Warsaw University of Technology (Poland) and Okayama University (Japan). The researchers are working on ways to differentiate original and fake multimedia content by combining techniques from digital forensics analysis, watermarking and artificial intelligence.
“The project has two objectives: first, to provide content creators with tools to watermark their creations, thus making any modification easily detectable; and second, to offer social media users tools based on latest-generation signal processing and machine-learning methods to detect fake digital content,” said David Megías, a lead researcher.
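The first objective Megías describes—watermarking creations so that any modification is easily detectable—can be illustrated in highly simplified form. Real watermarking embeds the mark invisibly inside the media itself, which is far more involved; this hypothetical sketch shows only the underlying detection principle, using a detached authentication tag computed with a creator-held key (the key and function names here are illustrative assumptions, not the project’s actual method):

```python
import hashlib
import hmac

# Assumed secret key held only by the content creator.
SECRET_KEY = b"creator-signing-key"

def sign_content(content: bytes) -> str:
    """Produce an authentication tag for the original content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def is_unmodified(content: bytes, tag: str) -> bool:
    """Check whether content still matches its original tag.

    Any change to the content, however small, changes the
    computed tag, so tampering is immediately detectable.
    """
    return hmac.compare_digest(sign_content(content), tag)

original = b"original video frame data"
tag = sign_content(original)

print(is_unmodified(original, tag))                # True
print(is_unmodified(b"tampered frame data", tag))  # False
```

The limitation, of course, is that this scheme only proves content matches what a key-holder signed—it says nothing about whether the signed content was truthful in the first place, a point the article returns to below.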
It’s a great irony that the same artificial intelligence that brought deepfakes to the world is now being used in an attempt to eradicate them.
We know that the rise of social media goes hand-in-hand with the increase in disinformation. Artificial intelligence made it easier to tamper with videos and photos, turning them into fakes so good that most people cannot tell them from the real thing.
When video and still images are combined and superimposed using artificial intelligence with advanced voice technology, the montages can look and sound like the genuine person or thing. Using this technology, known public figures can be made to say outrageous things.
People of different cultures perceive information in unique ways. These perceptions are being gauged by the researchers in a range of places and cultural contexts to incorporate individual idiosyncrasies when designing the solutions.
“This is important because, for example, each country has governments and/or public authorities with greater or lesser degrees of credibility,” said Andrea Rosales, a UOC researcher. “This has an impact on how news is followed and support for fake news. If I don’t believe in the word of the authorities, why should I pay any attention to the news coming from these sources?
“This could be seen during the Covid-19 crisis: in countries in which there was less trust in the public authorities, there was less respect for suggestions and rules on the handling of the pandemic and vaccination,” Rosales added.
One problem is that people can watermark and protect disinformation just as easily as truthful information. It is easy to imagine these disinformation-prevention tools being used by fraudulent players to lend an air of authenticity to fake news.
Sadly, we live in a society where huge numbers of people have coalesced into “disinformation cults” where facts don’t matter. They choose to block out information they don’t believe. This is a result of the internet creating niches of personal interest. What was originally conceived as a widening of available accurate information has instead produced distortions of the facts.
Cheap artificial intelligence tools give huge power to those who would abuse them. This abuse now happens routinely, and it will be hard to stop. Prevention tools like those being developed at UOC can certainly help legitimate content providers, but they won’t stop disinformation if the masses believe it.
Dan Gillmor, who teaches at the Arizona State University Walter Cronkite School of Journalism and Mass Communication, advises readers to “create an internal speedbump.”

“Say to yourself… ‘Just wait a minute’ before believing anything on the internet,” said Gillmor. “Skepticism, especially with highly sensational titles, is key.” He advises people to corroborate stories before accepting them as fact.
Developing these critical evaluation skills, however, will require good media education, which is rare in today’s schools. Only when internet users are taught about the realities of disinformation, and how to detect it, can we begin to tackle the problem.
“Information is only as reliable as the people who are receiving it,” said Julia Koller, a learning developer. “If readers do not change or improve their ability to seek out and identify reliable information sources, the information environment will not improve.”
Frank Beacham is an independent writer based in New York.