This post was first published here: https://blog.knowbe4.com/ai-generated-disinformation
A researcher was alerted to a fake website containing fabricated quotes attributed to him. The age of generative artificial intelligence (AI) toying with our public personas has truly arrived. As cybersecurity professionals, we must ask: what are the implications of fake news produced at scale and with convincing quality for individuals and organizations?
“How much of our public image can we really control,” asks the online platform Futurism, remarking, “The unholy union of SEO spam and AI-generated muck is here.” The website in question shows many red flags that give away its AI-generated origin: generic text, no references to sources, and AI-generated images. More worryingly, the article also contains fabricated quotes that read as somewhat believable and, most concerning of all, are attributed to real people.
What makes this article interesting is that the researcher himself found the quote somewhat believable, even though it is not quite what he would have said. Prof. Binns of Oxford University expects that this AI-driven loss of control over our public personas is only just getting started; our public image, he suggests, is no longer something we will be able to control.
Given the recent advances in generative AI, that seems highly likely. Organizations must step up to the challenge, and the first step should be sensitizing their workforce to the dangers of fake news and generated text. While we have been fighting fake news and have developed techniques such as lateral reading, we must now add the competence to spot AI-generated text to our online literacy curricula.
Part of raising staff awareness of AI-generated text must also be learning about its red flags, e.g., text that is inconsistent with its stated brief, voiceless, predictable, and somewhat directionless and detached. The competence to spot AI-generated disinformation is urgently needed, because automated detection mechanisms for generated text are increasingly unreliable.
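To make that unreliability concrete, here is a minimal Python sketch of what a naive, heuristic first pass over a text might look like. The function name, thresholds, and patterns are illustrative assumptions of ours, and well-written generated text will pass both checks, which is exactly the point: such heuristics are a triage aid, not a detector.

```python
import re

def crude_red_flag_scan(text: str) -> list[str]:
    """Naive first-pass triage for two stylistic red flags of
    AI-generated text. Illustrative only; polished generated
    text routinely passes checks like these."""
    flags = []
    words = re.findall(r"[a-z']+", text.lower())
    if len(words) > 200:
        # Low lexical variety can hint at flat, "voiceless" prose.
        type_token_ratio = len(set(words)) / len(words)
        if type_token_ratio < 0.35:
            flags.append("low lexical variety")
    # A long article citing no sources at all deserves a second look.
    if not re.search(r"https?://|according to|\[\d+\]", text, re.IGNORECASE):
        flags.append("no references to sources")
    return flags

# Hypothetical article text loaded from anywhere.
sample = "The company announced a breakthrough. The breakthrough is major."
print(crude_red_flag_scan(sample))
```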
This matters for security awareness training because the internet is no longer a reliable source for verifying that an entity is real. It has become incredibly easy to create fake corporations, with fake news and fake personnel attached to them, and such organizations might appear as legitimate buyers in phishing emails. Staff will need to verify the authenticity of organizations by means other than searching the internet for believable references.
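For illustration, one signal that does not depend on a web search is the age of a counterparty's domain registration; a freshly registered domain behind a seemingly established buyer is a classic warning sign. The sketch below assumes the third-party python-whois package and a hypothetical domain name; treat it as one weak signal among several verification steps, not as proof of legitimacy.

```python
from datetime import datetime

import whois  # third-party "python-whois" package (assumption)

def domain_age_days(domain: str) -> int | None:
    """Return the age of a domain registration in days, or None
    if WHOIS data is unavailable. One weak signal among many."""
    record = whois.whois(domain)
    created = record.creation_date
    # Some registries return a list of dates; take the earliest.
    if isinstance(created, list):
        created = min(created)
    if created is None:
        return None
    # Drop timezone info, if any, to allow naive subtraction.
    if created.tzinfo is not None:
        created = created.replace(tzinfo=None)
    return (datetime.now() - created).days

# Hypothetical counterparty domain from a "legitimate buyer" email.
age = domain_age_days("example-buyer.com")
if age is not None and age < 180:
    print(f"Warning: domain registered only {age} days ago.")
```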
Beyond that, organizations and individuals must be concerned with protecting their online personas to maintain a good reputation. Even without generative AI, fake news has been used successfully to undermine trust in institutions, democratic systems, and organizations. Cybercriminals employ disinformation attacks to cause havoc: organizations have lost bidding wars, stock prices have been manipulated, and strategic decision-making has been led astray by disinformation campaigns.
Brand and reputational damage, loss of customer trust, and even immediate financial losses are all possible. The stakes are so high that governments have released resolutions on the interference of misinformation with democratic processes. For that reason, many experts consider a changed trust landscape the biggest short-term threat of generative AI. Organizations must start figuring out their AI landscape of opportunities and challenges today.
The European Union Agency for Cybersecurity (ENISA) includes misinformation and disinformation in its annual cybersecurity threat report because such campaigns are often precursors to other attacks such as phishing, social engineering, or malware infections.
Researchers argue it is crucial to understand the role of misinformation in risk management. They place misinformation at the center of a process in which deceptive information exploits psychological vulnerabilities, builds on biases, and compromises logical reasoning, leading to cognitive discrepancies, much like current social engineering threats.
Both social engineering and misinformation seek to exploit human features. Countermeasures such as security awareness training build on a shared foundation: individuals are made aware of their emotional triggers and cognitive biases, both of which make people susceptible to social engineering attacks and increase the likelihood of someone believing fake news. Appropriate training should focus on building triggers for logical reasoning as a competence to detect and contain social engineering as well as misinformation campaigns.
At an organizational level, we must also be prepared for disinformation attacks on steroids, generated by AI. To develop resilience against these kinds of attacks, organizations must work across departments and functions. Disinformation risks to the organization must be identified and assessed so that they can be monitored on social media and other channels. Executives must act to fortify the brand against disinformation, e.g., by keeping an open channel of communication with customers.
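Monitoring can start small. As a purely hypothetical sketch, the snippet below flags posts that pair the brand with disinformation themes identified in the risk assessment; the brand name, themes, and posts are made up, and a real deployment would sit on top of platform APIs and a proper escalation workflow.

```python
# Hypothetical watchlist derived from the disinformation risk assessment.
BRAND = "acme corp"
RISK_THEMES = ["recall", "bankruptcy", "data breach", "lawsuit"]

def flag_post(post: str) -> bool:
    """Flag a post that mentions the brand alongside a risk theme."""
    text = post.lower()
    return BRAND in text and any(theme in text for theme in RISK_THEMES)

# Stand-in for a feed of social media posts.
posts = [
    "Acme Corp announces quarterly results",
    "BREAKING: Acme Corp files for bankruptcy, sources say",
]
for post in posts:
    if flag_post(post):
        print("Escalate to comms team:", post)
```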
Today, your organization’s incident response and crisis management plan should also include an effective strategy for recovering from disinformation attacks.