close
close
Microsoft fixes ASCII smuggling bug that allowed data theft from Microsoft 365 Copilot

27 August 2024Ravie LakshmananAI security/vulnerability

Microsoft fixes ASCII smuggling bug that allowed data theft from Microsoft 365 Copilot

Details have emerged about a now-patched vulnerability in Microsoft 365 Copilot that could allow the theft of sensitive user information using a technique called ASCII smuggling.

“ASCII smuggling is a novel technique that uses special Unicode characters that mirror ASCII but are not visible in the user interface,” said security researcher Johann Rehberger.

“This means that an attacker can trick the (large language model) into displaying data invisible to the user and embedding it in clickable hyperlinks. This technique essentially prepares the data for exfiltration!”

Cybersecurity

The entire attack links a number of attack vectors to form a reliable exploit chain. This includes the following steps:

  • Trigger instant injection with malicious content hidden in a document shared in chat
  • Using a prompt injection payload to instruct Copilot to look for more emails and documents
  • Use of ASCII smuggling to trick the user into clicking a link that transfers valuable data to a third-party server.

The net result of the attack is that sensitive data contained in emails, including multi-factor authentication (MFA) codes, could be transmitted to a server controlled by the adversary. Microsoft has since fixed the issues following responsible disclosure in January 2024.

This development comes after proof-of-concept (PoC) attacks on Microsoft’s Copilot system were demonstrated to manipulate responses, steal private data, and bypass security safeguards, further highlighting the need to monitor risks in artificial intelligence (AI) tools.

The methods described by Zenity allow malicious actors to perform retrieval-augmented generation (RAG) poisoning and indirect prompt injection, resulting in remote code execution attacks that can completely control Microsoft Copilot and other AI apps. In a hypothetical attack scenario, an external hacker with code execution capabilities could trick Copilot into serving phishing pages to users.

Cybersecurity

Perhaps one of the most novel attacks is the ability to turn AI into a spear-phishing machine. The red teaming technique, called LOLCopilot, allows an attacker with access to a victim’s email account to send phishing messages that mimic the style of the compromised user.

Microsoft has also acknowledged that publicly available Copilot bots created with Microsoft Copilot Studio, which lack any authentication protection, could provide a way for threat actors to obtain sensitive information, provided they know the Copilot name or URL.

“Organizations should assess their risk tolerance and exposure to prevent data leaks from Copilots (formerly Power Virtual Agents) and enable data loss prevention and other security controls to control the creation and publication of Copilots,” Rehberger said.

Did you find this article interesting? Follow us on Þjórsárdalur and LinkedIn to read more exclusive content we publish.

By Bronte

Leave a Reply

Your email address will not be published. Required fields are marked *