Apple buys Israeli startup Q.ai
Posted by ishener 3 hours ago
Comments
Comment by tchalla 1 hour ago
Twice, well done!
Comment by cyrusradfar 25 minutes ago
You mean something that improves the detection and transcription of voices when the person doesn't realize the mic is on, like when the phone is in their pocket?
Comment by Sir_Twist 1 hour ago
This is an interesting acquisition given their rumored Echo Show / Nest Hub competitor (1). Maybe this is part of their (albeit flawed and delayed) attempt to revitalize the Siri brand under the Apple Intelligence marketing umbrella. When you have to say the exact right words to Siri, or else she adds “Meeting at 10” as an all-day calendar event, people get frustrated, and for non-technical users the illusion of a “digital assistant” is lost (a toy sketch of that brittleness follows the link below). If that matches Apple’s understanding of how customers perceive Siri, then maybe their thinking is that giving Siri more non-verbal, personable capability could be a differentiating factor in the smart hub market, along with the LLM rebuild. I could also see this tying into some sort of strategy for the Vision Pro.
Now, whether this hypothetical differentiating factor is worth $2 billion, I’m not so sure, but I guess time will tell.
https://www.macrumors.com/2025/11/05/apple-smart-home-hub-20...
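For what it’s worth, the “Meeting at 10” failure mode is what you get from rigid, rule-based utterance parsing: miss the expected pattern and the code silently degrades to a default. Here is a contrived Python sketch of that cliff-edge behavior; it has nothing to do with Siri’s actual implementation, and every name in it is made up for illustration:

    # Contrived rule-based event parser: miss the exact pattern and it
    # silently degrades to an all-day event. Not Siri's real logic.
    import re

    def parse_event(utterance: str) -> dict:
        m = re.fullmatch(r"(?i)(.+) at (\d{1,2})(?::(\d{2}))?\s*(am|pm)?",
                         utterance.strip())
        if not m:
            return {"title": utterance, "all_day": True}  # the failure mode
        title, hour, minute, ampm = m.groups()
        h = int(hour) % 12 + (12 if ampm and ampm.lower() == "pm" else 0)
        return {"title": title.strip(), "all_day": False,
                "start": f"{h:02d}:{minute or '00'}"}

    print(parse_event("Meeting at 10"))                  # parses fine
    print(parse_event("Set up the 10 o'clock meeting"))  # all-day fallback

An LLM-backed parser trades that cliff edge for softer failure modes, which is presumably part of what the rebuild is chasing.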
Comment by clueless 1 hour ago
Yep, looks like that is it. Recent patent from one of the founders: https://scholar.google.com/citations?view_op=view_citation&h...
Comment by mikestorrent 1 hour ago
Pardon the AI crap, but:
> ...in most people, when they "talk to themselves" in their mind (inner speech or internal monologue), there is typically subtle, miniature activation of the voice-related muscles — especially in the larynx (vocal cords/folds), tongue, lips, and sometimes jaw or chin area. These movements are usually extremely small — often called subvocal or sub-articulatory activity — and almost nobody can feel or see them without sensitive equipment. They do not produce any audible sound (no air is pushed through to vibrate the vocal folds enough for sound). Key evidence comes from decades of research using electromyography (EMG), which records tiny electrical signals from muscles: EMG studies consistently show increased activity in laryngeal (voice box) muscles, tongue, and lip/chin areas during inner speech, silent reading, mental arithmetic, thinking in words, or other verbal thinking tasks
So, how long until my AirPods can read my mind?
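If the EMG picture above is right, the first stage of such a pipeline is ordinary signal processing. Here is a minimal Python sketch of subvocal-activity detection from a surface-EMG stream; every number and name in it (the sample rate, the 20–450 Hz band, the threshold, the function names) is a hypothetical stand-in, not anything known about Q.ai’s approach:

    # Toy sketch of subvocal-activity detection from a surface-EMG stream.
    # Every number here (sample rate, band, threshold) is an illustrative
    # guess, not anything known about Q.ai's actual models.
    import numpy as np
    from scipy.signal import butter, filtfilt

    FS = 1000                # hypothetical EMG sample rate, Hz
    LOW, HIGH = 20.0, 450.0  # typical surface-EMG frequency band

    def bandpass(x: np.ndarray) -> np.ndarray:
        """Keep the muscle-activity band; drop motion drift and mains hum."""
        b, a = butter(4, [LOW / (FS / 2), HIGH / (FS / 2)], btype="band")
        return filtfilt(b, a, x)

    def rms_envelope(x: np.ndarray, win: int = 100) -> np.ndarray:
        """Smoothed RMS amplitude: a crude 'muscle effort' signal."""
        return np.sqrt(np.convolve(x ** 2, np.ones(win) / win, mode="same"))

    def looks_subvocal(raw: np.ndarray, ratio: float = 3.0) -> bool:
        """Flag a window whose envelope rises well above its resting median."""
        env = rms_envelope(bandpass(raw))
        return bool(env.max() > ratio * np.median(env))

Detecting that the muscles moved is the easy half; decoding which words they were silently forming is a full machine-learning problem built on top of features like these.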
Comment by tobmlt 1 hour ago
I wish the iPhone had word prediction and autocorrect from the previous century.
Comment by thewebguyd 47 minutes ago
Crazy that we had pretty much perfected the tech of typing out text on a smartphone and then threw it all away by moving to all-screen devices. A virtual keyboard with no tactile feel will never compare until screens can recreate the tactile bumps of a physical keyboard.
Comment by tiffanyh 1 hour ago
> enable devices to interpret whispered speech and enhance audio in noisy environments.
I personally see a lot of people using Siri on speakerphone in public places, and given the background noise I’m amazed that Siri can capture even half of what’s said.
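On the “enhance audio in noisy environments” point, the decades-old baseline is spectral subtraction: estimate the noise spectrum from a speech-free stretch and subtract it from every frame. A minimal Python sketch follows; the parameters are illustrative, and whatever Q.ai actually ships is presumably a learned model rather than this:

    # Toy spectral subtraction, the classical baseline for speech
    # enhancement. Assumes the first `noise_frames` STFT frames are
    # speech-free so they can serve as the noise estimate.
    import numpy as np
    from scipy.signal import stft, istft

    def denoise(audio: np.ndarray, fs: int, noise_frames: int = 20,
                floor: float = 0.05) -> np.ndarray:
        _, _, spec = stft(audio, fs=fs, nperseg=512)
        mag, phase = np.abs(spec), np.angle(spec)
        # Average the leading frames to estimate the noise per frequency bin.
        noise = mag[:, :noise_frames].mean(axis=1, keepdims=True)
        # Subtract it, clamping to a floor to limit "musical noise" artifacts.
        clean = np.maximum(mag - noise, floor * noise)
        _, out = istft(clean * np.exp(1j * phase), fs=fs, nperseg=512)
        return out

That Siri works at all on a speakerphone in a crowd suggests something considerably better than this baseline is already in the chain.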