TITLE: The AI Hit List: Six Menacing Threats You Need to Know
https://pctechmag.com/2023/12/the-ai-hit-list-six-menacing-threats-you-need-to-know/
EXCERPT: AI is an algorithmic construct built on the bones of human creative endeavors and data that is often flawed and biased. “As Kate Crawford, a professor at the University of Southern California and a Microsoft researcher, pointed out, AI is not truly artificial or intelligent. This poses risks that can have long-term consequences if users are unaware of them,” remarked Collard.
Here are six of the most concerning risks:
1. AI hallucinations: Earlier this year, a New York attorney used a conversational chatbot for legal research. The chatbot fabricated six precedents that ended up in his filing, falsely attributed to prominent legal databases.
This is a textbook example of an AI hallucination, where the output is fabricated or nonsensical. These incidents typically occur when a prompt falls outside the model’s training data, so the model invents material or contradicts itself in order to produce a response.
2. Deepfakes: The implications of fake images extend to various areas. With the rise of fake identities, revenge porn, and fabricated employees, the range of potential misuse for AI-generated photographs is expanding.
One particular technology, the Generative Adversarial Network (GAN), is a type of deep neural network capable of producing new data and generating highly realistic images from random input: a generator network synthesizes candidate outputs while a discriminator network learns to tell them apart from real data, and the two are trained against each other. This technology opens up the realm of deepfakes, where sophisticated generative techniques manipulate facial features and can be applied to images, audio, and video (a minimal code sketch of the adversarial loop follows below). This form of digital puppetry carries significant consequences in political persuasion, misinformation, and polarization campaigns.
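To make the adversarial mechanism concrete, here is a minimal sketch of a GAN training loop. It assumes PyTorch, substitutes a toy two-dimensional dataset for images, and its layer sizes and the `real_batch` helper are illustrative stand-ins, not anything from a production deepfake system.

```python
# Minimal GAN sketch (PyTorch assumed): a generator maps random noise to
# fake samples; a discriminator learns to tell real from fake. Training
# them against each other is the mechanism behind deepfake imagery.
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 16, 2  # toy sizes; real image GANs are far larger

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, DATA_DIM),              # outputs a fake "sample"
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1),                     # logit: real vs. fake
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for real training data: points from a shifted Gaussian.
    return torch.randn(n, DATA_DIM) + 3.0

for step in range(2000):
    # Discriminator step: push real toward label 1, generated toward 0.
    real = real_batch()
    fake = generator(torch.randn(64, NOISE_DIM)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: fool the discriminator into predicting "real".
    fake = generator(torch.randn(64, NOISE_DIM))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The two-player setup is the essential design choice: the discriminator’s gradient signal is what teaches the generator to produce ever more realistic output.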
3. Automated and more effective attacks: This taps directly into the potential of the GANs mentioned above, as cybercriminals put deepfakes to work in more sophisticated attacks. They use them in impersonation attacks, where fake voice or even video versions of someone can be used to manipulate victims into paying or following other fraudulent instructions.
Cybercriminals also benefit from jailbroken generative AI models that help them automate or simplify their attack methods, such as automating the creation of phishing emails.
4. Media equation theory: This refers to the fact that human beings tend to attribute human characteristics to machines and develop feelings of empathy towards them. This tendency becomes even stronger when the interactions with machines seem intelligent.
Although this can positively impact user engagement and support in the service sector, it also carries a risk. People become more vulnerable to manipulation, persuasion, and social engineering because of this over-trust effect.
They tend to believe and follow machines more than they should. Research has shown that people are likely to alter their responses to queries to comply with suggestions made by robots.
5. The manipulation problem: AI, through the use of natural language processing, machine learning, and algorithmic analyses, can both respond to and simulate emotions.
By gathering information from various sources, agenda-driven AI chatbots for example can promptly react to sensory input in real time and utilise it to accomplish specific objectives, such as persuasion or manipulation. These capabilities create opportunities for the dissemination of predatory content, misinformation, disinformation, and scams.
6. Ethical issues: The presence of bias in the data and the current absence of regulations regarding AI development, data usage, and AI application all raise ethical concerns. Global efforts are underway to tackle the challenge of ethics in AI and reduce the risks of AI poisoning, which entails manipulating data to introduce vulnerabilities or biases.
TITLE: Facebook Is Being Overrun With Stolen, AI-Generated Images That People Think Are Real
https://www.404media.co/facebook-is-being-overrun-with-stolen-ai-generated-images-that-people-think-are-real/
EXCERPT: Of all of the awful things we’ve seen artificial intelligence used for, engagement baiting with stolen content on Facebook is relatively tame. And yet, I couldn’t help but feel a mix of wonder and dread while reporting this story. Every time I started researching the origins of a new stolen image, I would click through dozens and dozens of stolen versions of it.
There is no polite way to say this, but the comments sections of each of these images are filled with unaware people posting inane encouragement about artwork stolen by robots, a completely constructed reality where thousands of talented AI woodcarvers constantly turn pixels into fucked up German Shepherds for their likes and faves. I tried to determine if the commenters, too, were bots. The vast majority of them clearly are not. Most commenters I looked into are people who have been judiciously posting family photos, political arguments, and status updates on Facebook for decades.
Both [Brian Penny, a freelance ghostwriter], and [Hany Farid, a professor at the University of California, Berkeley], made the same observation I did. Farid said that, currently, “when I look at the harms being perpetrated from generative AI from nonconsensual imagery, child sexual abuse, fraud, and disinformation, this isn’t even on the bottom of the list. Somebody’s posting a photo, real people like it—it’s Facebook, who cares, right?”
But he added that this AI dreck isn’t good and that in the long term we could see some bad consequences grow out of the rapid spamming of AI-generated crap. Farid published a study on “nepotistic” AI, where generative AI tools are trained on other AI-generated outputs, creating highly-distorted images: “Once poisoned, the models struggle to fully heal even after retraining on only real images,” his study found. “The slightly less kind word is inbreeding, which we didn’t feel we could use in a scientific paper,” Farid said.
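One crude way to see why retraining on generated outputs degrades a model (a toy simplification for illustration, not Farid’s actual methodology) is to let a one-dimensional Gaussian stand in for a generative model and refit each “generation” only to samples drawn from the previous one:

```python
# Toy "nepotistic" training loop: each generation's model is fit to
# synthetic data sampled from the generation before it.
import random
import statistics

mean, std = 0.0, 1.0  # "generation 0": a model fit to real data

for generation in range(1, 201):
    # Draw synthetic "training data" from the current model...
    synthetic = [random.gauss(mean, std) for _ in range(100)]
    # ...then fit the next generation to that synthetic data alone.
    mean = statistics.fmean(synthetic)
    std = statistics.pstdev(synthetic)
    if generation % 25 == 0:
        print(f"gen {generation:3d}: mean={mean:+.3f}, std={std:.3f}")

# On most runs std decays well below 1.0: each refit slightly
# underestimates the spread, and the errors compound across rounds,
# a crude analogue of models degrading on their own outputs.
```

The point of the toy is that the damage is cumulative, which matches the study’s finding that poisoned models struggle to fully heal even after retraining on real images.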
The other big problem is that, while more digitally literate people can tell that these are AI-generated images now, the technology is advancing so quickly that the typical artifacting and deformities seen in AI-generated images today could be gone tomorrow.
“The images [being generated] are more and more photorealistic,” Farid said. “So we really are entering this era where you can just type and get a hyper photorealistic image. And if it’s not true today, [it] will eventually be true that it will be devoid of obvious visual artifacts that the average person looking at would be able to discriminate.”
“There’s something to be said for the fact that our ability to discriminate reality from fiction is important for a functioning society and democracy,” he added. “If every time you see a photo, you think it’s real because it’s a photo, that has consequences beyond the silliness we’re seeing here.”
Penny said he thinks that studying these images might eventually give him the opposite problem: “20 years from now, I don’t know what it’s going to be like then, but I’m not going to believe a single thing anyone shows me on the internet ever again.”
TITLE: Artificial intelligence can find your location in photos, worrying privacy experts
https://www.npr.org/2023/12/19/1219984002/artificial-intelligence-can-find-your-location-in-photos-worrying-privacy-expert
EXCERPT: To test PIGEON's performance, I gave it five personal photos from a trip I took across America years ago, none of which have been published online. Some photos were snapped in cities, but a few were taken in places nowhere near roads or other easily recognizable landmarks.
That didn't seem to matter much.
It guessed a campsite in Yellowstone to within around 35 miles of the actual location. The program placed another photo, taken on a street in San Francisco, to within a few city blocks.
Not every photo was an easy match: The program mistakenly linked one photo taken on the front range of Wyoming to a spot along the front range of Colorado, more than a hundred miles away. And it guessed that a picture of the Snake River Canyon in Idaho was of the Kawarau Gorge in New Zealand (in fairness, the two landscapes look remarkably similar).
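For reference, figures like “within around 35 miles” come from computing the great-circle distance between the guessed and true coordinates. Below is a standard haversine implementation; the sample coordinates are made up for illustration and are not the story’s actual test photos.

```python
# Great-circle distance between two (lat, lon) points, in miles,
# using the haversine formula.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Distance along the Earth's surface between two coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# e.g. a point near Yellowstone vs. a guess about half a degree away
print(haversine_miles(44.6, -110.5, 44.1, -110.9))  # ~40 miles
```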
The ACLU's Jay Stanley thinks that, despite these stumbles, the program clearly shows the potential power of AI.
"The fact that this was done as a student project makes you wonder what could be done, by, for example, Google," he says.
In fact, Google already has a feature known as "location estimation," which uses AI to guess a photo's location. Currently, it uses only a catalog of roughly a million landmarks, rather than the 220 billion Street View images that Google has collected. The company told NPR that users can disable the feature.
Stanley worries that companies might soon use AI to track where you've traveled, or that governments might check your photos to see if you've visited a country on a watchlist. Stalking and abuse are also obvious threats, he says. In the past, Stanley says, people have been able to remove GPS location tagging from photos they post online. That may not work anymore.
The Stanford graduate students are well aware of the risks. They've held back from making their full model publicly available, precisely because of these concerns, they say.
But Stanley thinks use of AI for geolocation will become even more powerful going forward. He doubts there's much to be done — except to be aware of what's in the background of the photos you post online.

What an interesting problem that I had never considered!
**Farid published a study on “nepotistic” AI, where generative AI tools are trained on other AI-generated outputs, creating highly-distorted images: “Once poisoned, the models struggle to fully heal even after retraining on only real images,” his study found. “The slightly less kind word is inbreeding..."**
Thanks for the great and useful synopsis of all three topics.