Google's Gemini for Home AI hallucinates family members, misidentifies dogs as deer, and fabricates events in camera summaries
Google's Gemini for Home AI, which powers camera summaries and alerts on Nest cameras, routinely hallucinates events that never happened. Documented cases include fabricating social interactions ('Jenni and R were seen interacting with trick-or-treaters') when the person was not home, identifying a shotgun as a 'garden tool,' consistently misidentifying dogs as cats or deer, and reporting 'fake people' on live feeds when nobody was present. The AI refuses to identify weapons even when deliberately shown one, and keeps misidentifying pets despite repeated user corrections.
This matters because security cameras exist for one reason: to accurately tell you what is happening at your home when you are not there. When the AI fabricates events, homeowners cannot trust any alert. A false 'intruder detected' at 2 AM causes genuine panic: an elevated heart rate, a call to the police, or a rush home. When it happens repeatedly, users start ignoring alerts entirely, which is even more dangerous. The boy-who-cried-wolf effect means that when a real break-in happens, the homeowner dismisses the notification as another Gemini hallucination. Google charges for these AI features through Google Home Premium (which increased from $8 to $10/month in August 2025), meaning users pay a subscription fee for a system that actively makes them less safe.
This problem persists because Google shipped Gemini for Home as a marketing-driven feature rather than a safety-critical system. The AI model was not trained or validated to the standard required for security applications. There is no accuracy SLA, no false-positive rate published, and no liability when a hallucinated 'all clear' summary fails to mention an actual intruder. Google's incentive is to ship AI features fast to justify subscription revenue, not to achieve the 99.9%+ accuracy that a security system demands. Users have no way to revert to the previous, simpler motion-detection system that was less 'intelligent' but more reliable.
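The base-rate arithmetic behind that accuracy demand can be sketched with Bayes' theorem. The numbers below are illustrative assumptions (Google publishes no false-positive rate): even a seemingly accurate detector produces mostly false alarms when real intrusions are rare among camera events.

```python
# Hypothetical illustration of alert fatigue: why a security alert system
# needs a very low false-positive rate. All rates below are assumptions,
# not figures published by Google.

def p_real_given_alert(p_intrusion, sensitivity, false_positive_rate):
    """Bayes' theorem: probability that a fired alert reflects a real event."""
    p_alert = (sensitivity * p_intrusion
               + false_positive_rate * (1 - p_intrusion))
    return sensitivity * p_intrusion / p_alert

# Assume a real intrusion on 1 in 10,000 camera events, 99% sensitivity,
# and a 1% false-positive rate: ~99% of alerts are false alarms.
print(round(p_real_given_alert(1e-4, 0.99, 0.01), 4))   # 0.0098

# Cutting false positives tenfold (to 0.1%) still leaves ~91% false alarms.
print(round(p_real_given_alert(1e-4, 0.99, 0.001), 4))  # 0.0901
```

This is why headline accuracy alone is misleading for a security product: because true events are rare, the false-positive rate dominates whether users can trust an alert at all.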
Evidence
Digital Trends on Gemini misidentifications: https://www.digitaltrends.com/home/googles-new-home-ai-keeps-seeing-things-literally/
Tom's Guide on hallucinated identities: https://www.tomsguide.com/audio/smart-speakers/google-home-acting-pretty-creepy-as-users-report-fake-identities-and-hallucinated-chores
TechBuzz on dogs misidentified as cats: https://www.techbuzz.ai/articles/google-s-gemini-ai-keeps-mistaking-dogs-for-cats-in-smart-homes
Android Headlines on deer misidentification: https://www.androidheadlines.com/2025/11/google-gemini-for-home-ai-misidentification-deer-accuracy-issues.html
TechBuzz on creepy family watching: https://www.techbuzz.ai/articles/google-s-gemini-home-ai-gets-creepy-watching-families