The Brooklyn-based startup founded by ex-Google DeepMind researchers seeks $2.5 billion in fresh capital as it builds freely available AI systems to rival Chinese offerings like DeepSeek.
The three rivals have begun sharing information to detect adversarial distillation attempts that let Chinese firms create cheaper imitations of their frontier AI models.
The deal gives Anthropic access to roughly 3.5 gigawatts of capacity starting in 2027 to train and serve its frontier Claude models as customer demand drives run-rate revenue past $30 billion.
Rocket 1.0 combines research, competitive intelligence, and product building in one workflow to help teams figure out what to build next instead of just how to code it.
The news network has a clear timeline for agent-to-agent ad trading and wants to get ahead by creating the infrastructure in-house while the technology is still new.
The AI company behind Claude snaps up a small team of computational biology experts to deepen its push into healthcare and speed up drug discovery efforts.
The company secures capital from tech giants and global investors while expanding its credit facility, pushing ahead with compute buildout and a unified AI superapp as ChatGPT revenue hits new highs.
The French AI company takes on its first major debt financing to build out sovereign compute capacity in Europe and reduce reliance on foreign cloud providers.
The standalone tool, powered by Anthropic's Claude, turns natural language requests into personalized algorithms on the open AT Protocol, with conference attendees getting first access while the company eyes broader ecosystem growth.
Volunteer editors voted to prohibit large language models from creating or rewriting encyclopedia entries because they often violate core policies on verifiability and sourcing, while still allowing limited use for translations and basic copy edits.
The new family includes Voxtral Mini Transcribe V2 for batch jobs and the open-weights Voxtral Realtime for live use, delivering strong accuracy across 13 languages with speaker diarization and sub-200ms latency options.
Users can now bring memories, preferences and past conversations from other AI tools into Gemini via a quick prompt copy or ZIP file upload, helping the model pick up right where they left off without starting from scratch.