
“Well, We Had a Good Run”: Thinking About Cyber Threats in the Age of AI

Mike Kosak | Published April 14, 2026

A lot of ink has been spilled on AI threats over the last few years, and with good reason: AI is undoubtedly changing the cyber threat landscape in ways both positive and negative. Anthropic’s announcement of Project Glasswing this week is a great example of a positive way (for now) AI is helping keep organizations secure, as major companies like Google, Apple, Anthropic, and NVIDIA band together to use Claude Mythos2 to responsibly scan critical software for vulnerabilities. Are there concerns that this capability can and will be replicated by less responsible organizations and malicious threat actors to identify and exploit vulnerabilities before they are patched? Absolutely, but we’ll get to that another day. For now, let’s focus on AI threats and how to frame them in a manageable way, particularly for small and midsized businesses that may be overwhelmed by the sheer volume of threat reporting out there. This post is the first in a series on how to think about AI threats, one intended to help organizations parse out what’s noise, what needs action, and what can be done about it.

For this first piece, let’s break “AI threats” into two subdivisions… threats TO AI, which we will address in future posts, and threats FROM AI, which is where the majority of current cybersecurity media reporting is focused. These threats generally fall into a few major categories; examples include:

  • Phishing: “AI-enabled phishing” served as the bow wave of AI threats. It started largely as phishing campaigns using AI to draft higher-quality emails, devoid of the typical spelling and grammar mistakes that served as the hallmark of spam for decades, before evolving to include AI-driven reconnaissance and spearphishing that enabled more targeted messages and campaigns at larger scale. This has now progressed to whole phishing kits built with AI, including code for interstitial (referral) pages, measures to detect and evade security researchers, and custom phishing pages. In fact, the LastPass Threat Intelligence team assesses we have seen this last phase of activity in phishing campaigns targeting our own customers.
  • Malware Development: The use of AI to improve and even develop new and custom malware has been a long-standing concern, and one that has borne out over the last few years. Threat actors are either developing their own malicious large language models (LLMs) to help with this or finding ways to jailbreak legitimate models to assist in creating and/or improving malware. Reports from major AI companies such as Anthropic confirm these efforts, and again, our own analysis has uncovered indicators of AI involvement in the creation of custom phishing kits, which, while not malware per se, demonstrates the use of AI to create code with malicious intent.
  • Deep Fakes and Disinformation: This category refers to the use of AI to generate deliberately misleading written, audio, or video material. Deep fakes have long been a concern but are becoming more of a reality in cyber attacks targeting organizations, most notably in recent campaigns by North Korea in which nation-state cyber threat actors leverage deep fakes of whole teams to arrange virtual meetings with individuals, generally with the goal of compromising their machine and/or network to enable further attacks targeting credentials, tokens, and/or cryptocurrency. The recent supply chain attack targeting the Axios software on npm is a great example of this.
  • Increased Speed and Scale of Operations: Like many of us, cyber threat actors are using AI (particularly agentic AI) to automate their operations, allowing them to plan and execute attacks at unprecedented speed and scale. A separate Anthropic report from last November detailed an extensive cyberespionage campaign, linked to a Chinese nation-state threat actor, that was orchestrated by AI, and since then there are increasing indications these approaches are being leveraged by a broader range of threat actors.
  • Lowered Technological Barrier of Entry: “Vibe coding” isn’t just for people looking to create their killer app… cyber threat actors are doing it, too. The derisive terms “script kiddies” or “skiddos” have been around in cybersecurity forever, referring to individuals using simple techniques or scripts to attempt cyber attacks… for a long time, this was largely a nuisance issue, particularly for well-defended organizations. Now, however, AI is allowing “n00bs” to pose legitimate threats, as they are able to use AI tools to create and launch the kinds of attacks discussed above. When these “‘bad vibe’ coding” attacks are combined with the larger “evil-as-a-service” cybercrime ecosystem (ransomware-as-a-service, phishing-as-a-service, etc.), these relatively unsophisticated threat actors become serious concerns that can punch well above their weight.

This list is by no means exhaustive, but it does demonstrate the threats organizations are facing FROM AI… in other words, threats that are enabled by AI. The good news is that while these threats generate a lot of reporting, existing controls, when well-implemented, can mitigate much of the risk. Remember, the TYPES of attacks described above aren’t fundamentally different from what existed previously; it is HOW threat actors create and conduct them that is largely changing. I would argue the largest shift right now is the last bullet point: the threat environment is growing because just about anyone with a computer and a bad attitude can now jump into the cybercrime ecosystem, which means there are MORE attacks to worry about. But again, strong passwords, MFA, passkeys, patching, and other basic cyber hygiene measures can address most of these concerns.
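To make the “basic cyber hygiene” point concrete, the MFA codes mentioned above are not exotic technology: most authenticator apps implement time-based one-time passwords (TOTP, RFC 6238), which fit in a few lines of standard-library Python. The sketch below is purely illustrative, not production code (a real deployment should use a vetted library and constant-time comparison), and the secret shown is the published RFC test value, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    if timestamp is None:
        timestamp = int(time.time())
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of 30-second steps since the Unix epoch.
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: base32 of the ASCII secret "12345678901234567890".
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, timestamp=59, digits=8))  # → 94287082
```

Because the server and the authenticator derive the same short-lived code from a shared secret, a phished static password alone is no longer enough; this is precisely why MFA blunts much of the AI-scaled phishing described above (though phishing-resistant factors like passkeys go further still).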

Stay tuned as we move on in this series… next up, we will start to dig into threats TO AI, starting with a look at why the use of AI by organizations creates a unique threat model and how to frame those threats in a way that helps you address them in an organized and methodical manner.
