
SimpliSafe camera's AI capabilities raise privacy issues

Oct 30, 2024

There’s a guy on your front porch. He’s hovering near the door like he’s thinking about breaking in. But he hasn’t made a move yet.

What will your home security system do about it?

Boston-based SimpliSafe says its newest security camera will use artificial intelligence to try to figure out whether a visitor is a threat.

“Instead of waiting for your door to be trespassed and then we call the police, we want to detect the danger before it happens,” said SimpliSafe chief executive Christian Cerda.

SimpliSafe pioneered inexpensive security systems that don’t need costly wiring and can be installed by homeowners. Now the company hopes to build a system smart enough to spot potential housebreakers. For now, the system still relies on human assistance to decide whether someone is friend or foe. Agents are cautioned against using unfair criteria like race when making such decisions. But Cerda hopes one day to create a version that could make such decisions on its own. And that possibility is making privacy advocates a little nervous.

“I don’t want a robot deciding that one person’s behavior is dangerous and another person’s is not,” said Adam Schwartz, privacy litigation director of the Electronic Frontier Foundation, a digital civil liberties organization.


“Dumb” security cameras like Amazon’s popular Ring doorbell cameras have already taken flak for posing a threat to privacy. Under pressure from civil liberties watchdog groups, Amazon said in January that it would no longer allow police agencies to request video footage from Ring camera owners for use in criminal investigations.

The SimpliSafe system raises different questions: whether it's permissible to run facial recognition scans without the subject's permission, and whether an AI is smart enough to correctly identify possible criminal activity.

The new SimpliSafe system captures video of people approaching the house, and alerts a human agent at the company’s alarm monitoring center. The camera’s AI system can be taught to recognize the faces of family members and friends. If the visitor has a familiar face, the SimpliSafe agent is told to stand down.

But for anybody else, the AI generates a text description of the person. For instance, it might report, “an unfamiliar person wearing a green shirt and blue pants was detected by the front door camera at 10:15 pm.”


The agent can watch live video from the camera and talk to the visitor through the camera’s built-in speaker and microphone. The agent would say something like, “this is SimpliSafe. Your visit is being recorded on video. Can I help you?” If the visitor is delivering a package, no problem. If he’s a criminal, the agent can call the police.
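SimpliSafe hasn't published how this pipeline is built, but the flow the company describes — a face match that lets agents stand down, and an AI-written description escalated to a human for anyone unfamiliar — can be sketched roughly as follows. Everything here (the names, the enrolled-face list, the return values) is a hypothetical illustration, not SimpliSafe's code.

```python
# Purely illustrative sketch of a "familiar face, otherwise escalate to a human" flow.
# None of these names or data structures come from SimpliSafe; they are assumptions
# used only to make the article's description of the pipeline concrete.
from dataclasses import dataclass
from typing import Optional

# Hypothetical set of face profiles the homeowner has enrolled (family, friends).
ENROLLED_FACES = {"resident_1", "resident_2", "frequent_guest"}


@dataclass
class Detection:
    face_id: Optional[str]   # matched profile if the face is recognized, else None
    description: str         # AI-generated text summary, e.g. clothing and location
    camera: str
    timestamp: str


def handle_detection(event: Detection) -> str:
    """Decide what the monitoring pipeline does with a new camera detection."""
    # Familiar face: the agent is told to stand down; nothing is escalated.
    if event.face_id in ENROLLED_FACES:
        return "stand_down"
    # Unfamiliar visitor: forward the generated description to a human agent,
    # who can watch live video, speak through the camera, and call police if needed.
    alert = (f"Unfamiliar person detected by {event.camera} at {event.timestamp}: "
             f"{event.description}")
    return f"escalate_to_agent: {alert}"


if __name__ == "__main__":
    print(handle_detection(Detection(
        face_id=None,
        description="person in a green shirt and blue pants near the front door",
        camera="front door camera",
        timestamp="10:15 pm",
    )))
```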

The feature is being built into the company’s new outdoor security camera, priced at $200. Users will also pay $80 a month for humans to monitor the camera around the clock, or $50 for nighttime-only monitoring.

“I would say what they’re doing is quite advanced,” said Elizabeth Parks, president of market research firm Parks Associates. She said it’s one of many efforts by home security companies to embed AI technology like facial recognition into their products. “The major players and newcomers are all looking at what are the benefits of this machine learning,” she said.

SimpliSafe already makes an indoor camera that allows agents to warn off thieves after they’ve detected a break-in at the property. The new outdoor camera is supposed to deter them before they smash a window or kick in a door.

According to Hooman Shahidi, SimpliSafe’s chief product officer, many thieves aren’t afraid of burglar alarms. “When people break into homes they hear the siren, and they’re like, yeah, of course there’s a siren or whatever,” said Shahidi. “I’ve got 20 minutes before somebody shows up.”

Indeed, in some US cities it can take an hour or more for police to respond to an emergency call. Shahidi said that’s partly because around 90 percent of burglar alarm calls are false alarms, so police often deprioritize them. This might discourage people from buying an alarm system at all — why bother if help will arrive too late? So SimpliSafe hopes to win over more customers with a system that tries to scare off thieves before calling for backup.


By combining AI with a human security agent, the SimpliSafe system minimizes the risk that a computer might call the police in response to an innocent visit. But the system still raises some civil liberties concerns.

For instance, the automatic face matching feature won’t be available to SimpliSafe customers in Illinois, Texas and Portland, Ore. These jurisdictions have enacted laws that forbid the collection of facial recognition data without the person’s permission. Mapping the faces of anybody who comes along could run afoul of these laws.

There’s also the worry that SimpliSafe’s new system is just the beginning. How long before it uses AI instead of humans to decide who’s a threat? In an email, Cerda said the system doesn’t do this now, but he made it clear SimpliSafe is heading in that direction.

“After our launch period and as we gain scale, we will continue to refine this technology, further train and validate our models, and narrow on specific cases where we are confident we are not introducing bias or other errors,” Cerda wrote. “At this point we will use the tool to filter out or even take action on an event; this is critical in our roadmap to delivering this service at scale and cost.”


Cerda declined to provide further details about what kind of action the AI might take. But it’s possible that a future SimpliSafe system could respond to an unidentified visitor using AI alone. The computer might decide to issue a verbal warning, or even to call the police, without a human as backup.

Already companies use AI analysis of facial expressions to gauge people’s reactions to TV commercials or political campaign ads. And developers of self-driving cars are working on AI systems that would assess the body language of nearby pedestrians, to predict whether they’ll step into the street.

That raises the concern in some quarters that a company such as SimpliSafe could eventually use these methods to automatically assess criminal intent. And that’s what Schwartz is worried about.

“All of the bias and irrationality in human life is fed into the AI and replicated by the AI,” said Schwartz. He worries that racial and class prejudices embedded in the software could cause harm to innocent people.

Cerda admits this is a legitimate concern, at least until SimpliSafe builds better AI models. That’s why the system will keep humans in control — for now.

Hiawatha Bray can be reached at [email protected]. Follow him @GlobeTechLab.