
The Dark Side of Conversational AI: Google Duplex and Its Potential Risks

In 2018, Google introduced Duplex, artificial intelligence (AI) software that, working through Google Assistant, can speak on a user's behalf with an AI-generated but human-sounding voice. The voice assistant was built to place calls and handle a handful of tasks, such as making dinner reservations.


A recent Clutch survey found that 81% of respondents believe AI-powered voice assistants should declare that they are robots before proceeding with a call, a sentiment likely driven by how convincingly human Duplex sounds.

This article focuses on the security risks that technology like Google Duplex poses precisely because it sounds so convincingly human. These risks include:

  1. Automating more lifelike robocalls
  2. Calls being recorded by Google for quality assurance, raising privacy concerns

Simply put, Google Duplex’s lifelike AI raises ethical concerns, particularly if the technology is misused.

AI Can Create Superpowered Robocalls

On the whole, people hate robocalls. As the number of robocalls has increased, so has people’s anger toward these often malicious calls. One saving grace for consumers is the ability to tell, most of the time at least, that a call is automated.

Google Duplex has the power to change that. The voice assistant software’s AI is so advanced that people may be unable to tell the machine from a human. This could lead to more robocalls successfully scamming people.

Experts have identified some possible negative scenarios, including using AI to book every available table at a popular restaurant and resell the reservations, or to steal people’s personal information by tricking them into believing the robocaller is someone they know.

Aside from the potential financial losses, lifelike AI callers can have an even deeper impact on society. According to the Clutch survey, 61% of respondents said they would feel uncomfortable if they believed they were speaking to a human and later learned they had been speaking to an AI.

These feelings relate to the idea of the “uncanny valley,” humans’ instinctive distrust of things that are almost, but not quite, human. People are biologically attuned to pick up on nearly imperceptible cues that mark a robot, or a robotic voice, as not fully human. When those cues become even harder to detect, people may begin to distrust AI, and phone communication overall, even more.

Conversational AI May Lead to Privacy Risks

To keep improving the technology, AI voice assistants like Duplex, Siri, and Alexa must record people’s voices. While most users assume the companies keep recordings only long enough to analyze and understand what was said, the data may be stored for longer than most people would prefer.

An automated system recording a user’s voice without consent is both a privacy concern and a potentially illegal act. Federal law permits recording telephone calls and in-person conversations only with the consent of at least one party; 38 states and the District of Columbia have similar laws, while 11 states require the consent of all parties on a call.

But how does that impact AI calls?

Although Google updated Duplex to warn call recipients that it is recording the call, legal experts will likely continue to debate whether that warning resolves the consent issue, especially if the recipient doesn’t know how the recording will be used or how long it will be retained.

It appears the only way to decline being recorded is to hang up, which may not always be practical.

Evolving Technology, Expanding Misuse

AI voice assistants like Google Duplex can provide useful services to people and businesses by making phone interactions more efficient and effortless. However, the appeal of the technology may be overshadowed by the potential for misuse.

Google and developers of AI software similar to Duplex should consider the ethics of these technologies and understand how increasingly lifelike AI will impact society.

History has shown, though, that people often abandon their initial fears once they grow accustomed to a new technology. Time will tell whether people become comfortable with Duplex’s lifelike voice and whether its potential adverse consequences can be controlled.

Riley Panko is a Senior Content Developer & Marketer at Clutch, a B2B platform for ratings and reviews, and also writes for The Manifest, a B2B news and how-to site.