The Effects of Artificial Intelligence on the Cyber Security Environment
AI Threat Landscape: Surge in Prompt Hacking Attempts and Private GPT Models for Nefarious Purposes
AI’s newfound accessibility is set to cause a surge in prompt hacking attempts and the use of private GPT models for nefarious purposes, according to a new report by cyber security company Radware. The report, titled the 2024 Global Threat Analysis Report, predicts an increase in zero-day exploits and deepfake scams as malicious actors become more adept with large language models and generative adversarial networks.
Pascal Geenens, Radware’s director of threat intelligence, highlighted the significant impact AI will have on the threat landscape, stating that while AI may not be behind the most sophisticated attacks currently, it will drive up the number of sophisticated threats. The report points to the emergence of “prompt hacking,” where prompts are used to force AI models to perform unintended tasks, as a growing cyberthreat.
Prompt hacking includes techniques such as prompt injections, where malicious instructions are disguised as benign inputs, and jailbreaking, where AI models are instructed to ignore their safeguards. The report warns that as AI prompt hacking becomes more prevalent, providers will need to continuously improve their guardrails to protect against such attacks.
Additionally, the report highlights the proliferation of private GPT models without guardrails, which are being used by malicious actors for various nefarious activities. These private models, such as WormGPT and FraudGPT, lower the barrier to entry for cyber criminals, enabling them to conduct convincing phishing attacks and create undetectable malware.
The report also warns of an increase in zero-day exploits and network intrusions, facilitated by open-source generative AI tools that enhance threat actors’ productivity. Radware analysts predict a rapid rise in zero-day exploits and network intrusion attacks, driven by the accelerated learning and research capabilities of generative AI systems.
Furthermore, the report points to the rise of highly credible scams and deepfakes as another emerging AI-related threat. State-of-the-art generative AI systems, like Google’s Gemini, enable bad actors to create fake content with minimal effort, leading to an increase in deepfake fraud attempts and scams.
Overall, the report underscores the need for continuous innovation in AI security measures to combat the evolving threat landscape. As AI becomes more accessible, both ethical and unethical applications will continue to shape the future of cybersecurity.