
Summary
Gartner predicts AI agents will cut the time needed for account takeovers by 50% within two years, prompting a need for enhanced security measures. This rise in AI-driven attacks necessitates a shift toward passwordless authentication and improved employee training. Organizations must adopt proactive security strategies to combat the evolving threat landscape.
Main Story
The cybersecurity landscape is constantly evolving, with new threats emerging at an alarming rate. One such threat, identified by Gartner, is the rise of agentic AI in account takeovers (ATOs). Gartner predicts that within two years, AI agents will cut the time it takes threat actors to hijack exposed accounts by a staggering 50%. This prediction underscores the urgent need for organizations to bolster their security measures and adapt to this evolving threat landscape.
The Rise of Agentic AI
Agentic AI is considered the next major advancement in artificial intelligence, following the impact of generative AI in recent years. These autonomous agents can make decisions and adapt to changing environments without human intervention. While this capability holds immense potential for various applications, it also poses a significant risk in the hands of malicious actors.
In the context of ATOs, agentic AI can automate many steps necessary to compromise accounts, including deepfake-driven social engineering and credential compromise. Deepfakes, which are AI-generated synthetic media, can create highly realistic fake audio and video content, making it easier for attackers to deceive individuals and gain access to sensitive information.
The ATO Challenge
ATOs have become a major concern for organizations, surpassing even ransomware in some instances. Driven by a surge in malicious bot and infostealer activity, ATOs enable large-scale fraud and enterprise breaches. A report by Abnormal Security revealed that 83% of organizations experienced at least one ATO incident in the past year, highlighting the pervasiveness of this threat.
Combating the Threat
In the face of this escalating threat, Gartner recommends several key actions for security leaders:
- Passwordless Authentication: Expedite the move toward passwordless, phishing-resistant multi-factor authentication (MFA). Passwords are often the weakest link in security, and eliminating them altogether significantly reduces the attack surface.
- Passkeys: Encourage users to migrate from passwords to multi-device passkeys where applicable. Passkeys offer a more secure alternative to passwords, leveraging cryptographic techniques to protect user accounts.
- Employee Education: Educate employees about the evolving threat landscape and provide training specific to social engineering tactics involving deepfakes. Raising awareness about these threats can empower employees to identify and avoid potential attacks.
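The security advantage of passkeys over passwords comes down to asymmetric challenge-response: the server stores only a public key, so there is no shared secret for an attacker (or an AI agent armed with infostealer logs) to replay. The sketch below is not a full WebAuthn/FIDO2 implementation; it is a minimal illustration of the underlying idea, using Ed25519 signatures from Python's `cryptography` package.

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device generates a key pair. Only the public key is
# sent to the server, so a server breach leaks nothing an attacker can
# use to log in, and there is no password to phish.
device_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_key.public_key()

# Authentication: the server issues a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it with a private key that never leaves the device...
signature = device_key.sign(challenge)

# ...and the server verifies the signature against the stored public key.
try:
    server_stored_public_key.verify(signature, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```

Because each challenge is random and single-use, a captured signature cannot be replayed, which is what makes this style of authentication phishing-resistant in a way passwords and one-time codes are not.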
The Deepfake Dilemma
Gartner also predicts that by 2028, 40% of social engineering attacks will use deepfake audio and video to target both executives and the broader workforce. This highlights the increasing sophistication of social engineering attacks and the need for organizations to adapt their security procedures and workflows.
A Two-Sided Coin
While agentic AI poses a significant threat to cybersecurity, it can also be a valuable tool for security teams. AI-powered security solutions can process security alerts much faster than traditional methods, enabling quicker identification and response to threats. This duality of AI underscores the importance of responsible development and deployment of the technology.

As AI continues to advance, the battle between attackers and defenders will intensify, requiring ongoing innovation and adaptation on both sides. The rise of agentic AI in ATOs presents a formidable challenge, but by taking proactive steps and embracing advanced security measures, organizations can mitigate the risks and protect their valuable assets.
Given the prediction that 40% of social engineering attacks will leverage deepfakes by 2028, what strategies beyond employee education can organizations implement to verify the authenticity of communications, particularly those seemingly originating from internal leadership?
That’s a great point! Beyond employee education, exploring technical solutions like communication authentication protocols and AI-driven deepfake detection tools could be vital. Layered security, combining tech and updated verification workflows, will be crucial to protect against sophisticated attacks.
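One concrete form such a communication-authentication layer could take is cryptographic message signing: if sensitive internal requests (say, payment approvals "from leadership") must carry a valid authentication tag, a deepfaked voice or video call cannot produce one. The sketch below is a hypothetical, minimal illustration using HMAC from Python's standard library; the key name and message are invented for the example, and a real deployment would keep the key in a secrets manager, not in code.

```python
import hashlib
import hmac

# Hypothetical shared key, distributed out-of-band to the internal
# approvals system; illustrative only, never hard-code real keys.
SIGNING_KEY = b"example-internal-signing-key"

def sign_message(body: str) -> str:
    """Attach an HMAC tag so recipients can verify origin and integrity."""
    return hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()

def verify_message(body: str, tag: str) -> bool:
    """Constant-time check; a request lacking a valid tag fails,
    no matter how convincing the accompanying audio or video is."""
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

request = "CEO: wire $250,000 to vendor X"
tag = sign_message(request)
print(verify_message(request, tag))             # genuine request verifies
print(verify_message(request + " today", tag))  # any tampering fails
```

The workflow point is that verification shifts from "does this sound like the CEO?" to "does this request carry a tag only our systems can produce?", which deepfakes cannot defeat.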
Editor: StorageTech.News
50% faster account takeovers thanks to AI? So, if my calculations are right, does that mean I only have half the time to change my ridiculously weak password now? Asking for a friend, naturally.