Big steps in AI safety
Earlier this month, Anthropic endorsed California's SB 53, a bill that forces big AI companies to publish safety reports and explain how they prevent catastrophic risks. Companies have to report safety incidents within 15 days and face real penalties if they don't follow through.
We've seen this play out before in the policies created to protect users from data breaches and cyberattacks (see GDPR). Because of those rules, companies now invest far more in breach detection. My view is that the AI safety problem is just as big (if not bigger), which will lead to a flood of new startups in the space, an increase in reported safety issues, and even more regulation.
Source: Anthropic
MIT found the jobs AI can't steal
Planet Money released an excellent podcast that asks which jobs are safe from AI. It covers MIT's "EPOCH" research, which evaluates tasks across all occupations to better understand AI's effects on the job market. The researchers call out categories that are uniquely human and where machines are limited: Empathy, Presence, Opinion, Creativity, and Hope.
Emergency management directors, therapists, and film directors scored high because their work requires judgment calls in messy situations. It turns out that between 2016 and 2024, jobs actually became more human-intensive, not less.
Source: Planet Money, MIT Sloan
Wizard of Oz gets a glow-up
Google and Warner Bros. used AI to upgrade the 1939 film The Wizard of Oz for the massive Sphere screen in Vegas. They enhanced grainy footage to 16K resolution and generated new character performances based on the director's original intent. The AI learned the film's visual style so well it could create new scenes that felt authentic to 1939.
Really impressive work, though it raises big questions about authenticity when AI generates new performances. Personally, I'd love to see it paired with a live performance of Pink Floyd's Dark Side of the Moon (for the curious: Dark Side of the Rainbow) 🌈 🌒
Source: Google
Thanks for reading!