The U.S. government is reportedly advancing a far-reaching contract clause that could require technology companies to prioritize government directives over their existing safety and data protection standards. At the center is a standoff between the AI company Anthropic and the Pentagon: Anthropic insists on clear boundaries, including a ban on autonomous weapons systems that operate without human oversight and a prohibition on using its technology for mass surveillance. While Anthropic defends these principles, a competing company has shown willingness to comply with such government demands. Critics warn of an increasingly close entanglement between the state and tech corporations, as well as potential infringements on civil liberties.
The growing use of artificial intelligence in military contexts is intensifying these concerns. Reports of AI deployment in military operations come with risks of malfunctions and so-called “hallucinations,” such as errors in target selection that could have deadly consequences for civilians. Experts warn of a potential loss of control over autonomous systems and are calling for regulatory safeguards. In parallel, geopolitical tensions and possible disruptions to global energy and supply chains could have economic repercussions, not least for the heavily AI-dependent technology sector.
Beyond the military sphere, the technology is already having tangible economic and social effects. A growing number of jobs are being eliminated or left unfilled, particularly in software development and other traditional white-collar roles. AI is also making it increasingly difficult to distinguish authentic from manipulated content online. Misinformation and disinformation are spreading more widely and could influence political processes in particular. Experts expect these developments to intensify and urge closer scrutiny of sources along with stronger regulation.