AI and Cloud-Based Attacks Are Rising: Why Security Must Evolve

The widespread use of artificial intelligence (AI) and the cloud creates new challenges for business leaders and security architects. Increased cloud adoption expands the attack surface, while AI introduces fresh attack vectors, particularly in software supply chains, data governance, and access control. The CrowdStrike 2024 Global Threat Report found a 110 percent year-over-year rise in cloud-conscious attacks, with 84 percent involving financially motivated cybercriminals. AI is driving a surge in opportunistic, lower-skill cyberattacks, while highly advanced threat groups are also expanding their focus on cloud-based targets. As attack sophistication grows across the spectrum, traditional security approaches are proving insufficient, forcing a shift in enterprise defense strategies.

Cloud Vulnerabilities Expose Organizations

Cloud adoption expands enterprise infrastructure, but public and shared environments introduce critical vulnerabilities. Misconfiguration remains a top risk, as seen in the 2019 Capital One breach, where a misconfigured web application firewall exposed the records of roughly 106 million customers. Such errors continue to plague organizations across industries: according to MIT Sloan's director of cybersecurity, 80 percent of data breaches now occur in cloud environments, and the National Security Agency (NSA) identifies misconfiguration as a primary attack vector. Security teams struggle to manage complex, rapidly changing cloud settings, leaving persistent security gaps.
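Policy-as-code checks can catch this class of error before deployment. Below is a minimal sketch that flags risky settings in a simplified storage-bucket configuration; the field names ("public_read", "encryption", "logging") are illustrative assumptions, not any provider's actual API.

```python
# Minimal sketch: flag common risky settings in a simplified cloud-storage
# config export. Field names are illustrative, not a real provider schema.

def find_misconfigurations(buckets):
    """Return (bucket_name, issue) pairs for common risky settings."""
    issues = []
    for b in buckets:
        if b.get("public_read", False):
            issues.append((b["name"], "publicly readable"))
        if not b.get("encryption", False):
            issues.append((b["name"], "encryption at rest disabled"))
        if not b.get("logging", False):
            issues.append((b["name"], "access logging disabled"))
    return issues

sample = [
    {"name": "customer-data", "public_read": True, "encryption": True, "logging": False},
    {"name": "app-logs", "public_read": False, "encryption": True, "logging": True},
]
print(find_misconfigurations(sample))
```

Running such checks continuously, rather than at deployment only, matters because cloud settings drift after launch.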

Another key risk lies in cloud-hosted software libraries. Modern applications rely heavily on shared repositories, but attackers can insert malicious code into widely used cloud-based dependencies, compromising multiple organizations simultaneously. AI exacerbates this problem by automating vulnerability discovery, allowing cybercriminals to scan cloud-based libraries for weaknesses in minutes rather than months. Some attacks specifically target open-source packages, using poisoned updates to create backdoors or insert data exfiltration tools.
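One practical defense against poisoned updates is to pin every dependency to an exact version and a content hash, so a compromised upstream release cannot silently replace a vetted one. A minimal sketch, assuming a simplified single-line pip-style requirements format (real pip lockfiles spread hashes across continuation lines):

```python
import re

# Matches an exact version pin such as "requests==2.31.0".
PIN_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._\-]*==\S+")

def unpinned_requirements(text):
    """Return requirement lines lacking an exact pin or a sha256 hash."""
    bad = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if not PIN_RE.match(line) or "--hash=sha256:" not in line:
            bad.append(line)
    return bad

reqs = """
# vetted dependencies
requests==2.31.0 --hash=sha256:0123abcd
flask>=2.0
"""
print(unpinned_requirements(reqs))
```

A check like this belongs in CI, failing the build whenever an unpinned or unhashed dependency appears.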

Public AI models present further concerns, as vendors race to experiment and capture first-mover advantages. ChatGPT, the first widely adopted public AI chatbot, grew to millions of users shortly after its release in late 2022. By March 2023, portions of users' chat histories had been exposed because of a vulnerability in the open-source Redis client library the service used. Public AI models also train on vast datasets drawn from many sources, leaving them open to data leaks and adversarial attacks. Organizations sometimes unknowingly expose proprietary or sensitive data to publicly available training sets, and cybercriminals can query compromised models to extract that data. Attackers can also manipulate models to produce misleading outputs or inject bias, undermining their security and reliability. Once an AI model is compromised, any system relying on it inherits its vulnerabilities.

AI-Powered Threats Escalate Cybersecurity Risks

AI is transforming cybersecurity, but while it enhances defenses, it also provides powerful tools for attackers. One of the most immediate concerns is automated hacking, where AI-driven bots rapidly scan for vulnerabilities, identifying weak points in cloud systems faster than human hackers ever could. AI-assisted attacks can adapt in real time, making traditional security responses slower and less effective.

Deepfake fraud is a growing risk, where AI-generated audio and video convincingly impersonate executives or employees to authorize fraudulent transactions. In a 2024 case, cybercriminals used a deepfake video call to impersonate a finance executive at a UK-based engineering firm, tricking an employee into transferring £20 million to fraudulent accounts. As deepfake technology improves, businesses face a rising threat of AI-enabled social engineering attacks.

As AI reshapes software development, it brings significant security risks. AI-powered coding assistants frequently generate insecure code, embedding hard-coded credentials, weak encryption, or exploitable vulnerabilities. A 2023 Snyk report on AI code security found that 96 percent of development teams use AI coding tools, yet over half reported that these tools commonly suggest insecure code. Compounding the issue, fewer than 10 percent of respondents automate security scanning, allowing vulnerabilities to slip through undetected. Meanwhile, automated dependency resolution increases the risk of software supply chain poisoning, where compromised third-party libraries embed security flaws into enterprise software. Without stronger oversight, AI-driven development is widening security gaps faster than it closes them.
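A lightweight guardrail is to scan generated code for obvious hard-coded secrets before it is merged. Dedicated scanners go much further (entropy analysis, provider-specific key formats), but the basic idea can be sketched in a few lines; the patterns below are illustrative:

```python
import re

# Illustrative patterns only; production secret scanners use many more.
SECRET_PATTERNS = [
    re.compile(r"(?i)(password|passwd|secret|api_key|token)\s*=\s*[\"'][^\"']+[\"']"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def scan_for_secrets(source):
    """Return (line_number, line) pairs that look like hard-coded secrets."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

snippet = 'host = "db.internal"\npassword = "hunter2"\n'
print(scan_for_secrets(snippet))
```

Wiring a check like this into pre-commit hooks closes exactly the gap the Snyk figures describe: scanning that happens automatically rather than when someone remembers.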

Cloud and AI Integration Challenges Legacy Security Models

Blending on-premises and public cloud environments complicates security, often weakening existing protections. The cloud’s shared responsibility model introduces gaps where misconfiguration, excessive access privileges, and third-party dependencies create new vulnerabilities. Zero-trust architecture (ZTA), initially designed for static enterprise networks, struggles in a cloud-based world where workloads shift dynamically and access requests come from distributed locations. AI models further complicate access control by processing sensitive data outside traditional security perimeters, making it difficult to enforce strict authentication and monitoring policies.

Meanwhile, traditional threat detection tools are losing ground as attackers use AI to automate, disguise, and speed up breaches, making reactive defenses ineffective. Poor control over AI training data also poses risks—cybercriminals can subtly manipulate inputs to mislead AI models, extract proprietary information, or inject vulnerabilities. Organizations remain exposed to faster, more sophisticated cyber threats without adapting their security models to account for AI and cloud dynamics.

Mitigating AI and Cloud Cybersecurity Risks

As AI and cloud adoption accelerate, security strategies and tools must evolve just as quickly. Securing AI and cloud environments requires redefining ZTA with dynamic access control, where permissions adjust in real time based on user behavior and risk analysis. Traditional security models assume static perimeters, but AI-driven access monitoring can adapt to fast-changing cloud workloads and dynamic threats.
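The shape of such risk-adaptive access control can be sketched simply: score each request from behavioral signals, then grant less as risk grows. The signals and thresholds below are illustrative assumptions, not a standard.

```python
# Minimal sketch of risk-adaptive access control. Signals and thresholds
# are illustrative assumptions for demonstration only.

def risk_score(request):
    """Score a request dict from simple signals; higher means riskier."""
    score = 0
    if request.get("new_device"):
        score += 40
    if request.get("geo_anomaly"):       # login far from usual locations
        score += 30
    if request.get("off_hours"):
        score += 15
    if request.get("failed_logins", 0) > 3:
        score += 25
    return score

def access_decision(request):
    """Map risk to an action: allow, require step-up auth, or deny."""
    score = risk_score(request)
    if score >= 70:
        return "deny"
    if score >= 30:
        return "step_up_auth"  # e.g., require MFA before granting access
    return "allow"

print(access_decision({"new_device": True, "geo_anomaly": True}))
```

In production, the hand-written scoring function is typically replaced by a model trained on historical access patterns, which is where the "AI-driven access monitoring" above comes in.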

It’s also essential for organizations to balance public cloud scalability with tighter control. Bare-metal cloud offerings provide dedicated infrastructure, eliminating the risks of shared tenancy. Privacy-preserving computing approaches reduce data exposure by processing information locally rather than sending it to public AI models. Two methods are worth considering: federated AI trains models across multiple devices or locations without transferring raw data, while edge AI processes data directly on local hardware, minimizing the risk of data leaks.
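The core of federated training is that sites share only model weights, never raw data, and a coordinator merges them. A minimal sketch of the merging step (federated averaging), using plain lists of floats to stay dependency-free:

```python
# Minimal sketch of federated averaging: each site trains locally and
# shares only its model weights; raw data never leaves the site.

def federated_average(site_weights, site_sizes):
    """Average per-site model weights, weighted by each site's dataset size."""
    total = sum(site_sizes)
    merged = [0.0] * len(site_weights[0])
    for weights, size in zip(site_weights, site_sizes):
        for i, w in enumerate(weights):
            merged[i] += w * (size / total)
    return merged

# Two sites: the second holds three times as much data, so it counts 3x.
site_weights = [[1.0, 2.0], [3.0, 4.0]]
site_sizes = [100, 300]
print(federated_average(site_weights, site_sizes))
```

The security benefit is structural: an attacker who compromises the coordinator sees only aggregated weights, not any site's underlying records.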

To secure third-party software dependencies, enterprises should vet AI-generated code, scan open-source libraries for vulnerabilities, and restrict unverified dependencies to prevent software supply chain attacks. Quantum computing is also on the horizon and threatens to render today’s public-key encryption obsolete; astute organizations are testing quantum-resistant cryptographic methods now to prepare for that shift.
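Restricting unverified dependencies can be enforced mechanically with an allowlist check in the build pipeline. A minimal sketch, with a hypothetical approved set:

```python
# Minimal sketch: reject any declared dependency not on a vetted allowlist.
# The approved set is a hypothetical example, not a recommendation.

APPROVED = {"requests", "cryptography", "numpy"}

def unapproved_dependencies(declared):
    """Return the set of declared dependencies absent from the allowlist."""
    return {d.lower() for d in declared} - APPROVED

print(unapproved_dependencies(["requests", "leftpad9000"]))
```

Failing the build on a non-empty result forces every new dependency through a review step before it can reach production.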

Forward-Looking Security Frameworks

It’s time for companies to re-evaluate their cybersecurity as the use of AI and cloud-based systems grows. Key options include moving AI computation closer to home to keep sensitive data out of the public cloud, and strengthening security monitoring with real-time visibility into the environment so threats are identified as they occur. Preparing for future risks, whether from quantum computing or new attack methods, will determine which enterprises remain secure and which do not.


By Nuruddin Sheikh

Nuruddin Sheikh is a software performance architect with over 20 years of experience leading cloud and big data transformations in ML-driven content recommendation, virtual collaboration, software-defined infrastructure, and enterprise security. He has spearheaded strategic initiatives at Fortune 500 companies, driving innovations in search, e-commerce, digital conferencing, cybersecurity, and fintech. His expertise spans performance engineering, cloud computing, machine learning, and security, with a focus on designing large-scale, low-latency, and secure enterprise architectures. Nuruddin holds a master’s degree in software systems. Connect with him on LinkedIn.
