. Top News.
Smart home tech has always had a timing problem. You say "turn off the lights" and there's this awkward half-second where you wonder if it heard you. Then it does. Then you feel slightly stupid for doubting it.
Google just patched that gap. Gemini for Home got a backend overhaul with faster processing for controls, alarms, and timers, building on earlier fixes that already shaved up to 1.5 seconds off response times. They also tightened age-gating and content controls.
This is the kind of compounding work that makes a product go from "kind of useful" to "I'd actually miss this." Smart home AI lives and dies on reliability. Google knows that and they're grinding on it.
Text is forgiving. You can re-read it, skim it, or ignore it. But voice puts AI in the room with you, and that changes everything about how much you trust it.
OpenAI just made a significant push here with three new tools in the Realtime API: GPT-Realtime-2 brings GPT-5-level reasoning to live conversations; GPT-Realtime-Translate handles real-time translation across 70+ input languages and 13 output languages; and GPT-Realtime-Whisper does live speech-to-text mid-interaction.
Pricing is metered: per minute for Translate and Whisper, per token for GPT-Realtime-2. Guardrails against spam and fraud are built in.
This opens up customer support that doesn't feel like a call center, language learning tools that work in real time, and media applications that can caption and translate simultaneously. The infrastructure is now there. The interesting part is what developers build on top of it.
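For a rough sense of what building on this looks like: OpenAI's Realtime API is driven by JSON events sent over a persistent connection, with a `session.update` event configuring the model and its behavior. The sketch below just constructs that event; the model name `gpt-realtime-translate` mirrors the announcement and is an assumption, not a verified identifier.

```python
import json

def build_session_update(model: str, instructions: str) -> dict:
    """Construct a session.update event for a Realtime API session.

    A minimal sketch: real sessions are configured by sending an event
    like this over the API's WebSocket connection. The model name is
    an assumption taken from the announcement, not a confirmed ID.
    """
    return {
        "type": "session.update",
        "session": {
            "model": model,  # assumed name, per the announcement
            "instructions": instructions,
            "modalities": ["audio", "text"],
        },
    }

event = build_session_update(
    model="gpt-realtime-translate",
    instructions="Translate incoming speech from Spanish to English.",
)
print(json.dumps(event, indent=2))
```

In practice a live captioning or support app would send this once at connection time, then stream audio in and handle transcript and audio events coming back.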
Mira Murati left OpenAI last year. Now we're seeing what she's been building.
Thinking Machines Lab just introduced "interaction models," full-duplex AI that listens and responds at the same time, the way an actual conversation works. Their first model, TML-Interaction-Small, hits 0.40-second response times. That's roughly human speed.
Most AI voice interfaces are still turn-based. You speak, it processes, it replies. That pause is small but noticeable, and it's enough to make the whole thing feel like a demo rather than a conversation.
If Thinking Machines cracks this at scale, it shifts the category. Not AI that mimics conversation. AI that actually holds one.
. Together with Opal Security.
Your AI agent just accessed prod, but no one approved it. It didn't log in or submit a request. It chained a few API calls, touched a system with standing access, and moved on.
Non-human identities now outnumber humans in most stacks. They're overprovisioned at creation, rarely reviewed, and have no lifecycle management.
The gap between what your access controls assume and what's actually happening is widening.
. Signals.
Tools
Links
. Poll.
What is driving the biggest AI ROI in enterprises right now?
. Market.
Funding
Ciridae raised $20M (Seed) to bring AI transformation to real-economy businesses.
Dishio raised $2.5M (Seed) to help restaurants turn guest data into repeat revenue.
. Prompt of the Day.
Executive Thinking Partner (Weekly)
When to use this?
When a week has passed, a lot has happened, and you want someone to think with you.
You are my executive thinking partner.
I’ll paste a raw dump of what happened this week — meetings, decisions, issues, signals, and noise.
Your job is to do the thinking for me and return:
What actually mattered this week (ignore the rest)
What patterns or themes are emerging
What I should lean into next week
What I should consciously stop paying attention to
One uncomfortable truth I might be avoiding
Keep it clear, direct, and practical.
No motivational fluff.
Weekly dump: [paste notes, bullets, thoughts, Slack snippets, meeting points]
P.S. Get more such prompts in the Prompting Playbook (free for you)
Stay curious, {{first_name | leaders}}
P.S. If you missed yesterday’s issue, you can find it here.
