Artificial Intelligence Elevates Social Engineering Tricks to Alarming Levels
Enhancing cybersecurity awareness and using advanced technical tools to resist these attacks.
Social engineering represents one of the most potent and enduring forms of cybercrime.
There is a reason social engineering has proliferated among internet criminals: deceiving people is far easier than hacking into computer systems. Social engineering is the set of tricks and techniques used to deceive people into giving up access or information.
Personal Electronic Deception
Breaching software networks requires an understanding of the targeted environment, the ability to discover vulnerabilities, and the means to monitor for loopholes, a process that demands significant technical skill and resources. Breaching human defenses, on the other hand, requires only a basic knowledge of human nature, specifically our susceptibility to greed, lust, curiosity, and impatience. Successfully compromising the right individual's account, namely anyone unaware of phishing bait and its warning signs, grants the attacker the keys to the kingdom while the illicit activity remains undetected.
Technology also plays a role, as our reliance on it grows with each advancement, and deceiving humans has become easier. Fraud was initially perpetrated via email (phishing), followed by text messages (smishing), then voice calls (vishing), social network compromises, and finally malicious QR codes (quishing). Social engineering has clearly evolved hand in hand with technology.
The Wave of Artificial Intelligence
A sudden wave of artificial intelligence technologies has propelled such attacks to new levels of complexity.
Let's examine five new developments in artificial intelligence and their potential implications for social engineering fraud:
Widescale Professional and Personal Phishing
Research by Google Cloud concluded that generative artificial intelligence is already being used in developing phishing attacks devoid of spelling and grammar errors, making them harder to detect and block.
Additionally, automation allows attackers to personalize phishing messages or modify them according to the target, making them appear more realistic and convincing.
Voice Cloning and Video Synthesis
Artificial intelligence technologies enable users to clone voices, superimpose faces onto video clips, and impersonate others. Notably, convincing attacks have occurred worldwide, with attackers cloning voices and creating virtual personas to defraud institutions and steal money through their employees.
Increased Attacks Using Large Language Models
Standard large language models process only text. Multimodal large language models, in contrast, can process and link additional forms of media, such as images, video clips, audio snippets, and sensor data.
This enables artificial intelligence tools to develop deeper contextual awareness, leading to smarter responses, improved reasoning, and more natural human-computer interaction. Attackers may soon use multimodal large language models to create highly contextual phishing messages, significantly enhancing the effectiveness of social engineering attacks.
Malicious Applications of Text-to-Video Technology
The conversion of text to video is an emerging artificial intelligence technology. It allows users to create high-quality visual content simply by providing textual inputs.
If such technology falls into the wrong hands, it could be exploited to fabricate false narratives (disinformation), generate deepfake imagery at scale, deceive individuals and institutions, and launch social engineering attacks.
Rise of Artificial Intelligence Technology as a Service
A report by Google Cloud predicts that artificial intelligence tools will soon be offered as a service, aiding other malicious actors in their nefarious campaigns. Tools related to artificial intelligence, such as "FraudGPT", have already appeared on the dark web, enabling cybercriminals to draft sophisticated phishing email messages.
As these artificial intelligence technologies mature and become more accessible, less skilled malicious actors will be able to deploy these tools, leading to an increase in artificial intelligence-driven social engineering attacks.
Combating Cyber Attacks
How can companies mitigate the risks of artificial intelligence-driven social engineering attacks? Social engineering attacks are not limited to large institutions, as estimates suggest a worker in a company with fewer than 100 employees is likely to face 350% more social engineering-related attacks than their counterparts in larger companies.
Moreover, as artificial intelligence technology spreads and companies operate more in the digital world than in the physical one, these attacks will become more common.
Here are the best practices that can help mitigate this threat:
Enhancing Awareness of Artificial Intelligence Risks
Through regular communications and reminders, employees should be educated about the emerging risks associated with artificial intelligence. These risks should also be documented in security policies so that workers know how to recognize and handle them, and whom to contact when a threat appears.
End-User Training
Regular (monthly) security awareness training is crucial. Organizations can provide personalized training, adjusting it to individual needs where necessary, and run phishing attack simulations to strengthen employees' security skills. Ultimately, the success or failure of a social engineering attack depends on employees' vigilance and level of knowledge.
Maximizing the Use of Tools and Technology
Although social engineering attacks are often hard to detect, institutions can implement controls that reduce the risk of identity theft and fraud. For instance, enabling phishing-resistant multi-factor authentication strengthens the verification process. Organizations can also consider artificial intelligence-supported cybersecurity tools that inspect the metadata of email messages for indicators of phishing attempts.
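To illustrate the kind of metadata inspection such tools perform, here is a minimal sketch in Python using only the standard library. It checks two classic header-level indicators: a Reply-To domain that differs from the From domain, and failed SPF/DKIM/DMARC results in the Authentication-Results header (RFC 8601). The indicator names and the sample message are illustrative assumptions; real products combine many more signals.

```python
# Minimal sketch: flag common phishing indicators in email metadata.
# The header names are standard (RFC 5322 / RFC 8601); the indicator
# list and sample message below are illustrative assumptions.
from email import message_from_string
from email.utils import parseaddr

def phishing_indicators(raw_message: str) -> list[str]:
    """Return a list of simple phishing indicators found in the headers."""
    msg = message_from_string(raw_message)
    indicators = []

    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower()

    # A Reply-To domain different from the From domain is a classic sign
    # that replies are being diverted to an attacker-controlled mailbox.
    if reply_addr and reply_domain != from_domain:
        indicators.append("reply-to-domain-mismatch")

    # Authentication-Results (RFC 8601) records SPF/DKIM/DMARC outcomes
    # added by the receiving mail server.
    auth = msg.get("Authentication-Results", "").lower()
    for check in ("spf", "dkim", "dmarc"):
        if f"{check}=fail" in auth:
            indicators.append(f"{check}-failed")

    return indicators


raw = (
    "From: CEO <ceo@example.com>\n"
    "Reply-To: urgent-payments@evil.example.net\n"
    "Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=example.com\n"
    "Subject: Wire transfer needed today\n"
    "\n"
    "Please process the attached invoice immediately.\n"
)
print(phishing_indicators(raw))  # ['reply-to-domain-mismatch', 'spf-failed']
```

Each indicator on its own proves nothing; scoring several of them together is what lets a filter quarantine a message before it ever reaches an employee.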
Social engineering typically forms the initial phase of a cyber attack. If institutions learn to harness the human intuition developed through repeated phishing simulations, they can detect attacks and stop them before they cause material damage. Beyond fostering the right intuition, it is equally important for employees to act responsibly in reporting suspicious items and incidents. To achieve this, institutions must strive to promote a healthy and supportive cybersecurity culture.
Translated by AI