Upon visiting its official website, Xanthorox boldly declares itself "the killer of WormGPT and EvilGPT," a proclamation set against a dramatic black backdrop in striking red lettering. This new artificial intelligence (AI) tool, akin to ChatGPT, emerged in 2025, developed by an anonymous creator who promotes it through public channels including GitHub, Telegram, and Discord.
In stark contrast to most commercial AI systems that impose limitations to mitigate potential misuse, Xanthorox positions itself as a facilitator for illicit online activities. Its homepage boasts capabilities such as generating “the first ransomware AI-generated that bypasses all antiviruses,” equipped with “powerful and intense encryption, optimized for fast and deep penetration into the system.”
Xanthorox’s features extend beyond mere ransomware; it can create fake videos and audio clips (deepfakes), generate phishing emails designed to steal data with a single click, and produce malware. Additionally, it offers functionalities such as image analysis, reasoning, voice and audio chats, file analysis, camera usage, and web searches, all while ensuring user privacy through its own servers. The pricing structure is straightforward: 0 per month for limited features, 0 for full chat access, or a negotiable amount based on individual user needs, which must be discussed directly with the owner, known as Gary Senderson, via chat.
A Repeating Pattern
Xanthorox operates on open-source AI models that lack the security measures typically found in commercial systems like ChatGPT. This absence of filters enables the generation of unregulated content, including instructions for illegal activities such as programming computer viruses. The emergence of such technology is not new; in 2023, platforms like WormGPT and FraudGPT surfaced, offering similar functionalities that Xanthorox claims to have “killed.”
Daniel Kelley, a security researcher at SlashNext, noted earlier this year in Scientific American that Xanthorox “is more effective than WormGPT and FraudGPT” due to its sophisticated design. Casey Ellis, founder of the cybersecurity platform Bugcrowd, elaborated that while specific details remain scarce, Xanthorox appears to utilize advanced systems allowing various AI models to review and validate each other’s outputs, a hallmark of high-level architectures.
“It’s Just After the Money”
Jordi Serra, a cybersecurity expert, shared insights with ARA, explaining that tools like Xanthorox "make any potential attack much more general." These platforms let users generate and test variants of malicious code in attempts to evade antivirus programs, which typically look for the specific behaviors of known viruses. However, Xanthorox's actual capacity to facilitate large-scale cybercrime remains uncertain: on monitored cybercrime forums its impact so far appears limited, and its outputs seem to be produced without extensive specialized training.
Serra also commented on the difficulty of impersonating individuals with AI-generated content: "For an attack of this type to impersonate a person when the AI does not have a known voice is very difficult unless there are minutes of recordings of that person behind it." He emphasized that the data needed to deceive individuals in this way is usually lacking, noting that many attempts simply involve messaging random phone numbers while claiming to be a relative. The overarching motivation, he asserts, is purely financial.
Why is Xanthorox Legal?
Kishon explained that while AI itself is not inherently harmful, it significantly eases the work of cybercriminals. Open-source AI systems can be used freely as long as they do not directly contravene existing laws, which makes building a system like Xanthorox legal. Using the tool for criminal activity is illegal, but responsibility currently rests with the user, not the developer. Serra concluded that artificial intelligence "perfects deception," raising important questions about the ethical implications of such technologies.