====== Today in AI: May 02, 2026 · 4 min read ======

**Anthropic is quietly outpacing OpenAI on growth, and the market is noticing.** [[https://thecreatorsai.com/p/anthropic-eyes-900b-big-tech-bets|Anthropic's revenue expansion between late 2025 and mid-2026 significantly outpaced OpenAI's]], according to recent reporting. The divergence reflects different market positioning: Anthropic is betting hard on enterprise relationships and long-context reasoning, while OpenAI chases consumer dominance. The gap matters less for who wins and more for what it signals about AI adoption patterns in the wild.

🚀 **Frontier agents now see the world, literally.** [[https://www.rohan-paul.com/p/frontier-ai-can-now-autonomously|OpenClaw, a new agentic framework, pairs with World2Agent sensor systems to let AI agents respond to real-world environmental signals in real time.]] No more text-only interactions: agents can now receive streaming sensor data and execute decisions on physical systems. This bridges simulation and reality in ways that make autonomous workflows actually useful beyond chatbots.

🛠️ **DeepSeek-V4-Pro is swallowing long contexts whole.** DeepSeek-V4-Pro ships with hybrid attention and KV cache optimizations tuned for extended context windows, addressing the classic tradeoff between reasoning depth and context length. The [[muon_optimizer|Muon Optimizer]] keeps training stable even when attention is routed through complex mechanisms. Builders who care about cost-per-token on long documents should pay attention here.

📊 **Image-to-3D is finally leaving the research lab.** [[https://www.latent.space/p/ainews-not-much-happened-today|The needle-in-a-haystack benchmark keeps getting longer]], and image-to-3D models can now turn 2D images into production-ready 3D assets with PBR textures. Gaming and e-commerce teams are shipping this in pipelines instead of hiring modelers. The bottleneck isn't the model anymore; it's integrating it into your asset workflow.
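Why KV cache optimizations matter for long contexts: the cache grows linearly with sequence length, and at 100k-plus tokens it can dwarf the model's activations. Here is a minimal back-of-the-envelope sketch; the model dimensions are hypothetical placeholders, not DeepSeek-V4-Pro's actual configuration.

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Memory needed to cache attention keys and values for one sequence.
    The leading factor of 2 covers the separate K and V tensors;
    bytes_per_elem=2 assumes fp16/bf16 storage."""
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * seq_len

# Hypothetical dimensions for illustration only (60 layers, 8 grouped
# KV heads of width 128, fp16), caching a single 128k-token sequence:
total = kv_cache_bytes(layers=60, kv_heads=8, head_dim=128, seq_len=128_000)
print(f"{total / 2**30:.1f} GiB")  # → 29.3 GiB
```

Under these assumptions a single long sequence eats roughly 29 GiB of accelerator memory before any batching, which is why techniques that shrink the cache (fewer KV heads, quantized or compressed caches, hybrid attention that skips full attention in some layers) translate directly into cost-per-token.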
🤖 **Open-source agents are getting infrastructure.** [[playwright|Playwright]] and [[https://developer.nvidia.com/blog/nvidia-cloud-functions-serverless-gpu-computing/|NVIDIA Cloud Functions]] are becoming the plumbing for autonomous systems. Playwright abstracts browser control; NVCF abstracts GPU access. The meta-pattern: every tool that used to require custom infrastructure is becoming a commodity service. Agents eat these APIs for breakfast.

🎯 **The "vegan model" movement is real, and growing.** [[https://simonwillison.net/2026/Apr/28/talkie/#atom-blogmarks|Models trained exclusively on licensed or out-of-copyright data]] are multiplying. Not because they're better (they're not), but because teams are finally asking legal questions. Expect this category to matter more once the copyright dust settles.

Still no GPT-5.5 in the wild, just rumors. Gemini 3.5 remains silent.

That's the brief. Full pages linked above. See you tomorrow.