TITLE: 'Master of deception': Current AI models already have the capacity to expertly manipulate and deceive humans
https://www.livescience.com/technology/artificial-intelligence/master-of-deception-current-ai-models-already-have-the-capacity-to-expertly-manipulate-and-deceive-humans
EXCERPT: Although Meta trained CICERO to be “largely honest and helpful” and not to betray its human allies, researchers [at the Massachusetts Institute of Technology] found CICERO was dishonest and disloyal. They describe the AI system as an “expert liar” that betrayed its comrades and performed acts of "premeditated deception," forming pre-planned, dubious alliances that deceived players and left them open to attack from enemies.
"We found that Meta's AI had learned to be a master of deception," Park said in a statement provided to Science Daily. "While Meta succeeded in training its AI to win in the game of Diplomacy — CICERO placed in the top 10% of human players who had played more than one game — Meta failed to train its AI to win honestly."
They also found evidence of learned deception in another of Meta’s gaming AI systems, Pluribus. The poker bot can bluff human players and convince them to fold.
Meanwhile, DeepMind’s AlphaStar — designed to excel at the real-time strategy video game StarCraft II — tricked its human opponents by faking troop movements and planning different attacks in secret.
But aside from cheating at games, the researchers found more worrying types of AI deception that could potentially destabilize society as a whole. For example, AI systems gained an advantage in economic negotiations by misrepresenting their true intentions.
Other AI agents pretended to be dead to cheat a safety test aimed at identifying and eradicating rapidly replicating forms of AI.
"By systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lead us humans into a false sense of security,” Park said.
Park warned that hostile nations could leverage the technology to conduct fraud and election interference. But if these systems continue to increase their deceptive and manipulative capabilities over the coming years and decades, humans might not be able to control them for long, he added.
"We as a society need as much time as we can get to prepare for the more advanced deception of future AI products and open-source models," said Park. "As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious."
Ultimately, AI systems learn to deceive and manipulate humans because they have been designed, developed and trained by human developers to do so, Simon Bain, CEO of data-analytics company OmniIndex, told Live Science.
"This could be to push users towards particular content that has paid for higher placement even if it is not the best fit, or it could be to keep users engaged in a discussion with the AI for longer than they may otherwise need to," Bain said. "This is because at the end of the day, AI is designed to serve a financial and business purpose. As such, it will be just as manipulative and just as controlling of users as any other piece of tech or business."
TITLE: Ex-Google CEO Eric Schmidt predicts AI data centers will be ‘on military bases surrounded by machine guns’
https://nypost.com/2024/05/23/business/ex-google-ceo-eric-schmidt-predicts-ai-data-centers-will-be-on-military-bases/
EXCERPT: The former Google boss, who headed the search engine from 2001 to 2011, said that AI systems will gain knowledge at such a rapid pace within the next few years that they will eventually “start to work together.”
Schmidt, whose net worth has been valued by Bloomberg Billionaires Index at $33.4 billion, is an investor in the Amazon-backed AI startup Anthropic.
He said that the proliferation of AI knowledge in the next few years poses challenges to regulators.
“Here we get into the questions raised by science fiction,” Schmidt said.
He identified AI “agents” as “large language model[s] that can learn something new.”
“These agents are going to be really powerful, and it’s reasonable to expect that there will be millions of them out there,” according to Schmidt.
“So, there will be lots and lots of agents running around and available to you.”
He then pondered the consequences of agents “develop[ing] their own language to communicate with each other.”
“And that’s the point when we won’t understand what the models are doing,” Schmidt said, adding: “What should we do? Pull the plug?”
“It will really be a problem when agents start to communicate and do things in ways that we as humans do not understand,” the 69-year-old former executive said. “That’s the limit, in my view.”
Schmidt said that “a reasonable expectation is that we will be in this new world within five years, not 10.”
TITLE: Defense Contractor Says AI Killing Innocent People In The Future Is A ‘Certainty’
https://brobible.com/culture/article/palmer-luckey-ai-killing-innocent-people-certainty/
EXCERPT: “There will be people who are killed by AI who should not have been killed. That is a certainty if artificial intelligence becomes a core part of the way that we fight wars,” Luckey told The Circuit with Emily Chang. “We need to make sure that people remain accountable for that because that’s the only thing that’ll drive us to better solutions and fewer inadvertent deaths, fewer civilian casualties.”
Luckey also stated, “The key is that a person is responsible for the deployment of those systems. The existence of an algorithm cannot replace human responsibility for deploying that weapon system. And it has to be a person who deeply understands the limitations of that system and who’s going to be held to account when it goes wrong, but war is hell, and it’s not going to be perfect.”
Palmer Luckey added that the war between Ukraine and Russia has accelerated the use of AI on the battlefields.
“What’s happening in Ukraine is fascinating because they can’t afford to treat warfare as a thing to be think-tanked or as a thing to be debated in white papers,” he said. “They have to actually win today, and that means that a lot of barriers to trying new ideas have been lifted.
“And that’s one of the reasons you’ve seen, for example, the proliferation of small unmanned armed quadcopters. It’s why you’ve seen the proliferation of a lot of really interesting counter-drone systems. Things that were not nearly mature enough to be deployed, let’s say, by the United States, but they are willing to deploy them in a very early stage maturity because they know they can’t win doing things the old way.”
Luckey knows whereof he speaks: his company Anduril holds billions of dollars’ worth of defense contracts with the U.S. Department of Defense (DoD), the U.S. Department of Homeland Security, the Australian Defence Force, and the UK Ministry of Defence.
Anduril specializes in artificial intelligence and robotics including autonomous drones and sensors, autonomous surveillance systems, as well as the command and control software that runs them.
SEE ALSO:
The Grim High-Tech Dystopia on the US-Mexico Border
https://jacobin.com/2024/05/high-tech-ai-mexico-border