The launch of ChatGPT just over a year ago ushered in a golden age of AI, bringing advanced computing concepts to a massive audience almost overnight. While much of the public marveled at large language models' ability to generate impressive works, bad actors and criminals were learning new ways to augment and scale their fraud operations. The result was a twelvefold increase in phishing emails, as LLMs enabled increasingly sophisticated spear-phishing, generating credible emails, text messages, and other convincing attacks at incredible speed.
While legitimate LLMs are equipped with safeguards meant to prevent unethical or illegal misuse, those with ill intent quickly found ways around these protections, using cloned malicious LLMs and "jailbroken" prompts to assist phishers in their work. With these new methods, a single attacker can run thousands of scams in any language, in any part of the world, fully automated and with minimal effort. Thanks to advanced LLMs, the customizability and quality of text-based phishing has greatly improved since the early days of barely legible messages from fictional princes asking for loans. Because fraudsters can now produce and disseminate high-quality phishing messages at scale, they are increasingly successful at finding victims and circumventing the security measures established by companies and institutions.
Over the last year, we have witnessed a 1,265% increase in malicious phishing emails, with 68% of all phishing emails classified as text-based business email compromise (BEC) attacks. In these attacks, malicious actors gain access to company email credentials or impersonate a trusted user to steal company data or initiate financial transactions. Phishing efforts to obtain company credentials span mobile, email, social, and collaboration platforms. In a similar vein, smishing (SMS-based phishing) has also risen in recent years, with many of these attacks occurring on employee-owned devices, outside the purview of companies.
The root of these attacks remains the same: the ease with which fraudsters can generate believable phishing emails, text messages, phone calls, and voice clips, and the speed at which automated AI models can deliver these messages to vulnerable individuals and company infrastructures. The same models can alter malware code on the fly, creating instant variations for phishing attacks that further complicate efforts to combat cybercrime. This spike in phishing is no accident: it tracks directly with the rise in AI usage. According to the FBI Internet Crime Report, BEC accounted for $2.7 billion in losses for affected companies in 2022 alone.
Protecting Against Advanced Attacks
While LLM usage spiked overnight, Reality Defender was already working to help companies prevent such losses and address the significant rise in weaponized AI-generated text. Today, our cutting-edge suite of all-inclusive detection tools aids anti-fraud and security/IT departments with real-time AI-generated text detection, capable of spotting a phishing attack with as few as 200 characters. Our platform-agnostic text detection system sits atop existing cybersecurity systems to identify AI-generated text used for phishing and fraud.
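To illustrate what "sitting atop existing systems" might look like in practice, here is a minimal, entirely hypothetical sketch of an email-pipeline hook that screens message bodies through an AI-text scorer before normal mail-flow processing. The `detect_ai_text` function, the routing logic, and the scoring threshold are illustrative assumptions for this sketch, not Reality Defender's actual API; only the 200-character minimum comes from the description above.

```python
# Hypothetical sketch: screening inbound email bodies with an
# AI-generated-text detector layered over an existing mail pipeline.
# All names and logic here are illustrative assumptions.

MIN_CHARS = 200  # minimum text length for a reliable score (per the post)


def detect_ai_text(text: str) -> float:
    """Placeholder scorer returning a probability in [0, 1] that
    `text` is AI-generated. A real deployment would call a detection
    service here; this stub keys on naive phrases for demo purposes."""
    suspicious = ("urgent wire transfer", "verify your credentials")
    return 0.9 if any(s in text.lower() for s in suspicious) else 0.1


def screen_message(body: str, threshold: float = 0.8) -> str:
    """Route a message based on the detector's score: messages too
    short to score reliably fall through to existing filters."""
    if len(body) < MIN_CHARS:
        return "pass"
    score = detect_ai_text(body)
    return "quarantine" if score >= threshold else "pass"
```

The point of the sketch is the integration shape: the detector is a thin scoring layer that existing quarantine or alerting systems consume, rather than a replacement for them.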
The breakthrough capabilities of generative AI will continue to equip fraudsters with unreasonably effective tools of deception. Reality Defender empowers enterprises, governments, and institutions with state-of-the-art tools to meet these attempts head-on, ensuring those on the first line of defense can operate safely in this new technological landscape and protecting their systems, reputations, user data, and financial stability.