🤖 AI & LLMs Anthropic
Claude 4 Opus ships with 200K context and computer use 2.0
Anthropic releases Claude 4 Opus with native computer use, 200K token window, and significantly improved code generation benchmarks.
The context size alone unlocks whole-codebase reasoning: it changes what you can hand off to AI agents.
🤖 AI & LLMs Hugging Face
Llama 4 Scout outperforms GPT-4o on multi-step reasoning
Meta's compact Llama 4 variant hits SOTA on reasoning tasks using only 17B active parameters via MoE with 16 experts.
An open-source model at this quality means local inference without API costs: a real alternative for production.
🤖 AI & LLMs The Gradient
Mixture-of-Experts is now the default LLM architecture
Analysis of the top 20 new models released in Q1 2025 shows 17 use MoE. Dense transformers are becoming legacy.
Understanding MoE routing is now table stakes for anyone deploying or fine-tuning foundation models.
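The core of MoE routing mentioned above can be sketched in a few lines: a gate scores every expert for a token, the top-k experts are selected, and their outputs are later mixed by softmax weights over just those k. This is a minimal illustrative sketch in plain Python; the function name, shapes, and random data are assumptions, not any specific model's implementation.

```python
import math
import random

def top_k_route(token, gate_weights, k=2):
    """Pick the top-k experts for one token (hypothetical minimal MoE router).

    token: list of floats (hidden vector), gate_weights: one row per expert.
    Returns (expert indices, gate scores that sum to 1 over those experts).
    """
    # One gate logit per expert: dot product of the token with that expert's row.
    logits = [sum(t * w for t, w in zip(token, row)) for row in gate_weights]
    # Keep only the k experts with the highest logits.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Softmax over the selected experts only, so their mixing weights sum to 1.
    m = max(logits[i] for i in top)
    exps = [math.exp(logits[i] - m) for i in top]
    total = sum(exps)
    return top, [e / total for e in exps]

# Toy setup echoing the article's numbers: 16 experts, 2 active per token.
random.seed(0)
token = [random.gauss(0, 1) for _ in range(8)]
gate_weights = [[random.gauss(0, 1) for _ in range(8)] for _ in range(16)]
experts, gates = top_k_route(token, gate_weights, k=2)
```

Because only the selected experts run, a model like the 16-expert Llama 4 Scout activates a small fraction of its total parameters per token, which is how MoE keeps inference cheap relative to a dense model of the same total size.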