Signal’s president warns AI agents are an existential threat to secure messaging apps

As the tech landscape increasingly embraces AI features, Meredith Whittaker, president of Signal, stands in stark opposition. In a recent conversation with Fortune, she articulated her concerns regarding the proliferation of AI agents, labeling them as an “existential threat” not only to secure messaging platforms like Signal but also to any application developers operating within the mobile and desktop environments.

AI agents, designed to perform tasks on behalf of users, necessitate access to vast amounts of sensitive information, including bank details and passwords. This requirement introduces a significant new “attack surface,” which can be exploited by cybercriminals and espionage entities aiming to siphon off confidential personal or corporate data.

Whittaker highlighted the particular vulnerabilities of AI agents to prompt injection attacks. These attacks occur when malicious websites embed deceptive instructions that manipulate the AI into executing harmful actions. Given that AI-powered web browsers can interpret and act upon web content, the potential for data breaches escalates dramatically. Attackers could, for instance, steal emails, access accounts, exfiltrate data, overwrite clipboards, or redirect users to phishing sites.

“The way an agent works is that it completes complex tasks on your behalf, and it does that by accessing many sources of data,” Whittaker explained during the Slush technology conference in Helsinki last week. “It would need access to your Signal contacts and your Signal messages…that access is an attack vector and that really nullifies our reason for being.”

Signal has garnered a reputation as a preferred choice for journalists and politicians, primarily due to its unwavering commitment to privacy and security. The platform champions end-to-end encryption by default and minimizes data collection to safeguard user communications. However, should AI agents gain unrestricted access to these communications through the operating system, the risk of exploitation becomes alarmingly high.

“The integration of agents at the operating system level is being done in ways that are very reckless and insensitive to cybersecurity and privacy basics,” Whittaker asserted. “It is a very, very dangerous architectural decision that threatens not only Signal, but the ability to develop safely at the application layer and the ability to have safe infrastructure that operates with integrity.”

AI agents risk undermining the internet’s security foundations

In contrast, competing messaging applications like Meta’s WhatsApp and Facebook Messenger are actively incorporating AI features—an approach Whittaker deems unnecessary and unwelcome by users. “No one wants AI in their messaging app. It’s really annoying,” she remarked. “If we look at what they’re useful for at a consumer level, it’s really not clear to me that that trade-off is worth it…What are we actually optimizing for with these yawn-inducing conveniences?”

Consumer interest in AI within messaging applications appears to be mixed, with some curiosity surrounding features such as translation and summarization. Companies have made strides to alleviate certain security concerns associated with these features, aiming to reassure users about their privacy. Meta, for instance, has positioned some of its new AI tools as enhancements to safety rather than compromises to privacy, citing features like scam detection and automated assistance. The company emphasizes that its AI capabilities are only activated upon user engagement and that the assistant cannot access messages unless explicitly permitted.

Whittaker cautions that the rush by Big Tech to implement AI—particularly agentic AI—poses escalating security risks across the internet, overshadowing its potential benefits. “Part of what we’re seeing is that there is a bit of nervousness around the amount of [capital expenditure] that has been expended to support this scale at all costs…the infrastructure spend is eye-watering,” she noted. “There’s a need to continually float these valuations in this market to investors and shareholders quarterly, leading to what I’m seeing as a lot of reckless deployments that bypass security teams…That is very dangerous.”

AppWizard
Signal’s president warns AI agents are an existential threat to secure messaging apps