THE SET-UP: Here’s a magic number: 350 billion.
That’s the amount of money President Trump claims the United States has given to Ukraine. He says it every single time the topic comes up. In fact, he said it today during a speech at the Justice Department. One problem, though, is that it’s not true. And, per the Special Inspector General for Operation Atlantic Resolve (the office charged with promoting whole-of-government oversight of the U.S. Ukraine response), it’s not even close:
Congress appropriated $174.2 billion through the five Ukraine supplemental appropriation acts enacted FY 2022 through FY 2024, of which $163.6 billion was allocated for OAR and the Ukraine response, and $10.6 billion was allocated for other, primarily humanitarian, purposes. Additional funds of $18 billion were allocated from annual agency appropriations and $1.1 billion was allocated from other supplemental appropriation acts.
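For scale, a back-of-the-envelope sum of the Inspector General’s own figures (treating the three allocations as distinct pots, which the report’s wording implies): $174.2 billion from the supplemental acts + $18 billion from annual agency appropriations + $1.1 billion from other supplementals ≈ $193.3 billion, barely more than half of the number Trump keeps repeating.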
So, why does Trump continue to bark out an easily provable lie? And yes, it is a lie. The Special Inspector General is obviously not hiding the data from the public, let alone from the President of the United States. He knows the truth. But he also knows he doesn’t have to tell the truth … not about Ukraine, not about anything. That was made perfectly clear when he doubled down on pet-eating Haitians in the last election. It was an easily provable lie … just like $350 billion. The veracity of his claim was irrelevant, though. What mattered was each individual’s reaction to his claim (lie) and what that reaction said about them. You see, every lie he tells is an opportunity to join a group and participate in a collective identity. It’s a religious impulse … an act of faith that functions like a baptism. If you take the leap and accept his “truth,” you are instantly reborn into a movement. All you have to do is believe you’ll be embraced by millions of people. Suddenly, you are not alone … and you belong … and you are special.
In an increasingly cacophonous world of tech-driven alienation … a world where we are all managing our two-dimensional identities on heavily manipulated platforms designed to feed our desires and exploit our tendencies … think of the comfort his lies offer to people sucked into binge-watching a micro-processed reality on smaller and smaller screens. Maybe the key is to never look up so we’ll never see how truly alone or unremarkable we are. - jp
TITLE: Have humans passed peak brain power?
https://www.ft.com/content/a8016c64-63b7-458b-a371-e0e1c54a13fc
EXCERPTS: What is intelligence? This may sound like a straightforward question with a straightforward answer — the Oxford English Dictionary defines it as “a capacity to understand” — but that definition itself raises an increasingly relevant question in the modern world. What happens if the extent to which we can practically apply that capacity is diminishing? Evidence is mounting that something exactly like this has been happening to the human intellect over the past decade or so.
Nobody would argue that the fundamental biology of the human brain has changed in that far-too-short time span. However, across a range of tests, the average person’s ability to reason and solve novel problems appears to have peaked in the early 2010s and has been declining ever since.
When the latest round of analysis from PISA, the OECD’s international benchmarking test of 15-year-olds’ performance in reading, mathematics and science, was released, the focus understandably fell on the role of the Covid pandemic in disrupting education. But this masked a longer-term and broader deterioration.
Longer-term in the sense that scores for all three subjects tended to peak around 2012. In many cases, they fell further between 2012 and 2018 than they did during the pandemic-affected years. And broader in that this decline in measures of reasoning and problem-solving is not confined to teenagers. Adults show a similar pattern, with declines visible across all age groups in last year’s update of the OECD’s flagship assessment of trends in adult skills.
Given its importance, there has been remarkably little consistent long-running research on human attention or mental capacity. But there is a rare exception: every year since the 1980s, the Monitoring the Future study has been asking 18-year-olds whether they have difficulty thinking, concentrating or learning new things. The share of final year high school students who report difficulties was stable throughout the 1990s and 2000s, but began a rapid upward climb in the mid-2010s.
This inflection point is noteworthy not only because it mirrors the timing of the declines on tests of intelligence and reasoning, but because it coincides with another, broader development: our changing relationship with information, which is now constantly available online.
Part of what we’re looking at here is likely to be a result of the ongoing transition away from text and towards visual media — the shift towards a “post-literate” society that spends its time obsessively on screens.
The decline of reading is certainly real — in 2022 the share of Americans who reported reading a book in the past year fell below half.
Particularly striking, however, is that we see this alongside decreasing performance in the application of numeracy and other forms of problem-solving in most countries.
In one particularly eye-opening statistic, the share of adults who are unable to “use mathematical reasoning when reviewing and evaluating the validity of statements” has climbed to 25 per cent on average in high-income countries, and 35 per cent in the US.
So we appear to be looking less at the decline of reading per se, and more at a broader erosion in human capacity for mental focus and application.
Most discussion about the societal impacts of digital media focuses on the rise of smartphones and social media. But the change in human capacity for focused thought coincides with something more fundamental: a shift in our relationship with information.
We have moved from finite web pages to infinite, constantly refreshed feeds and a constant barrage of notifications. We no longer spend as much time actively browsing the web and interacting with people we know but instead are presented with a torrent of content. This represents a move from self-directed behaviour to passive consumption and constant context-switching.
Research finds that active, intentional use of digital technologies is often benign or even beneficial, whereas the behaviours that have taken off in recent years have been shown to affect everything from our ability to process verbal information to attention, working memory and self-regulation.
EXCERPTS: Financial Times’s chief data reporter John Burn-Murdoch … dropped a lengthy thread on X expounding on some of the key points in his article.
Most discussion about the societal impacts of digital media focuses on the rise of smartphones and social media, but I think that’s simultaneously an incomplete explanation, and one that lumps together benign/positive use of digital technologies with the more problematic.
I would point to something more fundamental: a change in the relationship between our brains and information.
The way we used smartphones and social media in the early 2010s was different to today. Usage was largely active, self-directed. You were still engaging your brain.
But since then we’ve had:
• The transition from the social graph (seeing a selection of content from people you know and actively engage with) to algorithms (an infinite torrent of the most engaging content in the world, with much less active participation)
• The shift from articles (longer material that requires the reader to synthesise, make inferences and reflect) to short self-contained posts (everything is pre-packaged in a few sentences, no critical thought required)
• An explosion in the volume and frequency of notifications, each one at risk of pulling you away from what you were previously doing (or taking up some headspace even if you ignore it)
Burn-Murdoch ends by noting that the data is not all doom and gloom and argues that both brains and society at large can easily be retrained to focus on complex data.
TITLE: Science shows AI is probably making you dumber. Luckily, there’s a fix
https://www.fastcompany.com/91290531/science-shows-ai-is-probably-making-you-dumber-luckily-theres-a-fix
EXCERPTS: A new scientific study warns that using artificial intelligence can erode our capacity for critical thinking. The research, carried out by a team of Microsoft and Carnegie Mellon University scientists, found that depending on AI tools without questioning the validity of their output reduces the cognitive effort applied to the work. In other words: AI can make us dumber if we use it wrong.
“AI can synthesize ideas, enhance reasoning, and encourage critical engagement, pushing us to see beyond the obvious and challenge our assumptions,” Lev Tankelevitch, a senior researcher at Microsoft Research and coauthor of the study, tells me in an email interview.
But to reap those benefits, Tankelevitch says users need to treat AI as a thought partner, not just a tool for finding information faster. Much of this comes down to designing a user experience that encourages critical thinking rather than passive reliance. By making AI’s reasoning processes more transparent and prompting users to verify and refine AI-generated content, a well-designed AI interface can act as a thought partner rather than a substitute for human judgment.
The research—which surveyed 319 professionals—found that high confidence in AI tools often reduces the cognitive effort people apply to their work. “Higher confidence in AI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking,” the study states. This over-reliance stems from a mental model that assumes AI is competent in simple tasks. As one participant admitted in the study, “it’s a simple task and I knew ChatGPT could do it without difficulty, so I just never thought about it.” Critical thinking didn’t feel relevant because, well, who cares.
This mindset has major implications for the future of work. Tankelevitch tells me that AI is shifting knowledge workers from “task execution” to “task stewardship.” Instead of manually performing tasks, professionals now oversee AI-generated content, making decisions about its accuracy and integration. “They must actively oversee, guide, and refine AI-generated work rather than simply accepting the first output,” Tankelevitch says.
The study highlights that when knowledge workers actively evaluate AI-generated outputs rather than passively accepting them, they can improve their decision-making processes. “Research also shows that experts who effectively apply their knowledge when working with AI see a boost in output,” Tankelevitch points out. “AI works best when it complements human expertise—driving better decisions and stronger outcomes.”
The study found that many knowledge workers struggle to critically engage with AI-generated outputs because they lack the necessary domain knowledge to assess their accuracy. “Even if users recognize that AI might be wrong, they don’t always have the expertise to correct it,” Tankelevitch explains. This problem is particularly acute in technical fields where AI-generated code, data analysis, or financial reports require deep subject matter knowledge to verify.
Confidence in AI can lead to a problem called cognitive offloading. This phenomenon isn’t new. Humans have long outsourced mental tasks to tools, from calculators to GPS devices. Cognitive offloading is not inherently negative. When done correctly, it allows users to focus on higher-order thinking rather than mundane, repetitive tasks, Tankelevitch points out.
But the very nature of generative AI—which produces complex text, code, and analysis—brings a new level of potential mistakes and problems. Many people might blindly accept AI outputs without questioning them (and quite often these outputs are bad or just plain wrong). This is especially the case when people feel the task is not important. “Our study suggests that when people view a task as low-stakes, they may not review outputs as critically,” Tankelevitch points out.
The study raises crucial questions about the long-term impact of AI on human cognition. If knowledge workers become passive consumers of AI-generated content, their critical thinking skills could atrophy. However, if AI is designed and used as an interactive, thought-provoking tool, it could enhance human intelligence rather than degrade it.
TITLE: Research warns cellphones are diminishing Gen Z’s creativity & critical thinking
https://www.abcactionnews.com/news/anchors-report/research-warns-cellphones-are-diminishing-gen-zs-creativity-critical-thinking
EXCERPTS: According to a study by the University of Cambridge, smartphones are destroying the creativity of Gen Zers.
The devices are leaving them with shattered attention spans, unable to think critically or to filter out noise from meaningful information.
“They're just connecting with people who think the same way that they do. And if that person doesn't think the way they do, they might just simply unfriend them,” said Katie Trowbridge, a veteran educator and founder of Curiosity 2 Create.
Trowbridge’s nonprofit Curiosity 2 Create helps teachers engage students more. She believes teens are deeply affected by what they see on their cellphones and have lost the ability to focus or think critically.
“They are being surrounded by one way of thinking. So, they're not getting multiple perspectives. They're just seeing what the algorithms are giving them,” said Trowbridge.
Trowbridge said teens often don’t question what they see online; they tend to accept everything as fact.
“It really is educating them on how to think in creative and critical ways. How to be those deep thinkers. How do you look at a source and say, 'Maybe I need to see if that's really true?'” explained Trowbridge.
She also wants to encourage parents to ask more thought-provoking questions and to lead by example.
“What was your favorite part of your day today? Why was that the favorite part of your day? What frustrated you at school today? What was the most challenging thing you learned?” said Trowbridge.
She continued, “So I think modeling it, showing our students, showing our children, how do you question information? Why do we think that this is really true? And if we don't think it's true, how could we figure out if it's true? So that they're learning from us.”
According to research in Jonathan Haidt’s book The Anxious Generation, social media is rotting the brains of Gen Zers. Haidt said their brains are rewiring to “passively absorb content” rather than actively engage in “critical thought.”
“We're not connecting. We're not learning. We're not seeing different perspectives. We're not having this empathy. We're anxious. We're depressed. We're not getting out and talking to each other,” said Trowbridge.


