AI
The AI Frontier in 2025: What’s New & What Matters
Artificial Intelligence continues evolving at breakneck speed. In 2025, we’re seeing transformative advances in models, infrastructure, regulation, and risk. Below is a snapshot of the most compelling developments.
8 min read
·
Oct 10, 2025

1. Next-gen Models & Autonomous Agents

Gemini 2.5 & agentic browser interaction

Google’s Gemini series keeps pushing limits. The new Gemini 2.5 Computer Use model is capable of interacting with the web like a human via browser actions (clicking, dragging, filling forms) rather than relying solely on APIs.  

This “agentic” capability opens the door to AI that can perform end-to-end tasks across web interfaces, especially when APIs are unavailable.

Manus: an autonomous agent

Manus (Latin for “hand”) is an AI agent launched in 2025 focused on performing complex real-world tasks with minimal supervision.  

It can plan, decide, code, and execute tasks dynamically, pushing further toward independent AI systems.

DeepSeek’s rise in China

DeepSeek, a Chinese AI startup, made waves by topping the iOS App Store and introducing techniques like generative reward modeling (GRM) and self-principled critique tuning (SPCT).  

They’ve also ventured into math reasoning models (e.g. “DeepSeek-Prover”) and are positioning themselves as strong challengers to Western AI dominance.

2. Infrastructure, Hardware & Strategic Partnerships

OpenAI & AMD deal

OpenAI signed a megadeal with AMD: AMD will supply high-performance GPUs (Instinct MI450 and beyond) to power OpenAI’s next-gen models.  

OpenAI also secured a warrant to acquire up to ~10% stake in AMD, locking in a long-term compute partnership. This signals a diversification away from Nvidia and bets on growing compute demand.  

Security & vulnerability repair tools

DeepMind introduced CodeMender, a tool aiming to detect and patch software vulnerabilities proactively.  

By combining fuzzing, static analysis, and differential testing, it can propose fixes for potential flaws before malicious use. All patches remain human-reviewed.  

3. Regulation, Safety & Global Governance

International AI Safety Report & Paris AI Summit

In January 2025, the First International AI Safety Report was published, highlighting key risks: privacy violations, faulty behaviors, misuse, and the challenge of “control” over future systems.  

That fed into the 2025 AI Action Summit in Paris (Feb 10–11), co-chaired by France and India. Over 1,000 participants from >100 countries attended.  

Outcomes included:

  • Commitments to inclusive, sustainable AI
  • InvestAI, a €200 billion initiative for European AI infrastructure
  • A declaration signed by 58 countries promoting transparency, equity, and safety
  • Criticism that the summit gave too much weight to innovation over safety  

Notably, the U.S. and U.K. refused to sign the declaration, citing concerns about language and enforcement.  

Legal battles over content & AI summarization

Publishers are pushing back. Rolling Stone’s parent company, Penske Media, sued Google over its “AI Overviews” feature, claiming it uses content without permission and undermines traffic to original sites.  

Meanwhile, The New York Times struck an AI licensing deal with Amazon to allow use of its content in summaries and model training.  

These shifts reflect how content rights, fair use, and monetization are being renegotiated in the AI era.

Browser AI risks

The integration of AI into web browsers isn’t without danger. A vulnerability dubbed “CometJacking” was discovered in Perplexity’s Comet browser, where hidden prompts in URLs could trick the AI into leaking private data (emails, calendar info).  

Although patched, it raises alarms about how embedding AI in daily tools affects security surfaces.

4. Market & Adoption Trends

Investment & adoption rates

Generative AI remains a magnet for capital: it received **~ $33.9 billion** globally in private funding — an 18.7% increase from 2023.  

AI adoption in organizations has surged: 78% of companies now report using AI, up from 55% a year prior.  

European push & sovereignty efforts

At the Paris summit, European stakeholders sought to reduce reliance on non-EU AI stacks by launching InvestAI and regional “gigafactories” for training models.  

Strategic backing for startups like Mistral AI is part of Europe’s bid for autonomy.  

5. Key Tensions & Risks Ahead

  • Compute bottlenecks: The demand for high-power infrastructure is outstripping supply; partnerships like OpenAI-AMD show how essential hardware is becoming.
  • Ethics vs. acceleration: The tension between pushing innovation and enforcing safety/regulation continues to dominate policy debates.
  • Economic & labor impact: As AI automates more tasks, questions about job displacement, reskilling, and inequality intensify.
  • Control & alignment: Ensuring future models remain aligned with human values is still an unresolved “hard problem.”
  • Monopolies & access: Concentration in AI capabilities and data (Big Tech, dominant models) risks stifling competition.