LastPass Labs is the content hub for the Threat Intelligence, Mitigation and Escalations (TIME) team at LastPass. Our focus is in-depth analysis of the latest security developments, a keen eye toward forward-looking tech, and unique threat perspectives.
Riding the escalator in the Moscone Center at RSA Conference some years ago, a fellow conference attendee noticed my badge. “Threat intelligence?” he asked. “You know artificial intelligence is going to replace your job.” I laughed. “I just had to reboot my watch because it locked up, so I think I’m safe for now,” I countered.
Flash forward to 2023. With the advances made by ChatGPT and other Large Language Models (LLMs), I’m not so confident now. The new age of AI is upon us; it’s disruptive, and it’s moving extremely quickly into everyday life.
With its capacity for rapid processing and learning, AI has become crucial in various sectors, including cybersecurity. However, AI can also be a tool for malicious cyber activities, creating a complex and ever-evolving security landscape.
Potential Uses of AI in Cybersecurity Defense
There are three potential applications of AI and machine learning (ML) in cyber defense. The first is Anomaly Detection, where AI algorithms sift through huge amounts of data to flag unusual patterns that may indicate a cyber threat. This use of AI could help companies discover threats they would not otherwise have detected. In layman’s terms, this is, “Here is a big haystack, show me the needles.” This could be an extremely powerful capability.
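The needle-in-a-haystack idea can be shown in miniature. The sketch below is purely illustrative (the data and the `find_anomalies` helper are assumptions, not LastPass tooling): it flags any hour whose failed-login count sits far outside the normal range for the series.

```python
# Minimal anomaly-detection sketch: flag values that deviate sharply
# from the mean of the series. Real systems use far richer models;
# the data here is invented for illustration.
from statistics import mean, stdev

def find_anomalies(counts, threshold=3.0):
    """Return indices whose value lies more than `threshold`
    standard deviations from the mean of the series."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hypothetical hourly failed-login counts; hour 5 is the outlier.
failed_logins = [12, 9, 11, 10, 13, 250, 12, 8, 11, 10, 9, 12]
print(find_anomalies(failed_logins))  # → [5]
```

The same principle, scaled up to millions of events and many features at once, is what makes AI-driven anomaly detection attractive.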
The second use is Predictive Analytics. Here, machine learning algorithms analyze past data to predict future security breaches. This strategy could allow organizations to patch vulnerabilities before they become a problem.
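At its simplest, the predictive idea is: let history tell you where to spend patching effort first. The snippet below is a deliberately tiny stand-in (the incident data and component names are hypothetical), ranking components by how often they have been involved in past incidents.

```python
# Toy sketch of predictive prioritization: components with the most past
# incidents get patched first. Incident data here is invented for
# illustration; real predictive analytics uses trained ML models.
from collections import Counter

past_incidents = ["vpn", "mail", "vpn", "web", "vpn", "mail"]
priority = [component for component, _ in Counter(past_incidents).most_common()]
print(priority)  # → ['vpn', 'mail', 'web']
```

A production model would weigh many more signals (exploit availability, asset exposure, vulnerability severity), but the output is the same shape: a ranked list of where a breach is most likely next.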
With the recent boom in LLMs, the third, especially useful, application in Threat Intelligence is summarization. This could allow a cyber practitioner to quickly extract key details from an attack report, research paper, or news article, at scale, with nothing but a plain-language request tailored to the precise needs of the organization.
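In practice this summarization would be done by an LLM; as a self-contained stand-in, the sketch below shows the underlying idea with a classic extractive approach, scoring sentences by the frequency of the words they contain and keeping the top scorers. Everything here (the `summarize` helper, the sample report) is an illustrative assumption.

```python
# Tiny extractive summarizer: a stand-in for an LLM call, kept
# self-contained for illustration. Sentences built from frequent
# words are assumed to carry the report's key details.
import re
from collections import Counter

def summarize(text, max_sentences=1):
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    top = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r'[a-z]+', s.lower())),
        reverse=True,
    )[:max_sentences]
    # Emit chosen sentences in their original order.
    return ' '.join(s for s in sentences if s in top)

report = ("A phishing campaign targeted finance staff. "
          "The phishing emails spoofed an executive. "
          "Weather was mild that week.")
print(summarize(report))
```

An LLM replaces the frequency heuristic with genuine language understanding, which is what makes summarization at scale newly practical.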
AI Use by Attackers
AI can also be a potent tool in the hands of attackers. Historically, phishing has been relatively easy to spot based on language use, formatting, and link analysis. AI-powered phishing attacks, which leverage LLMs to create deceptive emails, are incredibly personalized and harder for the recipient to identify as malicious. Deepfake technology, another AI innovation, is used by cybercriminals to create highly realistic audio-visual impersonations of individuals. This technology can be used for various malicious purposes, such as spreading misinformation or committing fraud. Recently, the threat environment has seen AI-generated photos of celebrities that are almost indistinguishable from real ones (for example, the image of the Pope in a white jacket), deepfake audio used to impersonate executives and authorize money transfers, and AI used to create polymorphic malware that changes itself automatically to evade signature-based detection tools.
Novel AI Attack Scenarios
As AI technology continues to evolve, we can expect to see new types of attacks. For instance, adversarial attacks aim to deceive cybersecurity systems by subtly altering the input data, which can cause the system to make incorrect predictions. Additionally, as society becomes more reliant on AI and machine learning, our infrastructure (like autonomous vehicles or smart cities) could become a target for cyber attacks. Malicious actors could potentially manipulate traffic control algorithms or disrupt autonomous vehicle operations, leading to chaos and potential harm. Of particular concern is current LLMs’ ability to analyze and create code, with the potential for a jail-broken LLM (one that has had its safety guardrails bypassed) to create zero-day exploits (exploits for vulnerabilities that have no available patch and were never responsibly disclosed) automatically.
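The adversarial-attack idea can be made concrete with a toy model. The sketch below is an illustrative assumption (invented weights, invented sample, a hypothetical `evade` helper), not a real attack: against a linear malware-scoring detector, nudging each input feature a small amount against the sign of its weight is the cheapest way to drag a “malicious” score across the decision boundary, the same intuition behind gradient-based attacks like FGSM.

```python
# Toy adversarial-evasion sketch against a linear detector.
# All parameters and data are invented for illustration.
def score(x, w, b):
    """Linear decision score: positive means 'malicious'."""
    return sum(xi * wi for xi, wi in zip(x, w)) + b

def evade(x, w, epsilon):
    """Shift each feature epsilon against the sign of its weight,
    pushing the score toward 'benign' as fast as possible."""
    return [xi - epsilon * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

w, b = [0.8, -0.5, 0.3], -0.2        # hypothetical detector parameters
sample = [0.5, 0.2, 0.4]             # flagged as malicious: score > 0
adv = evade(sample, w, epsilon=0.3)  # small, targeted perturbation
print(score(sample, w, b) > 0, score(adv, w, b) > 0)  # → True False
```

Real classifiers are nonlinear, but the lesson carries over: small, carefully chosen input changes can flip a model’s prediction, which is exactly what makes adversarial attacks on AI-dependent infrastructure worrying.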
Future State of AI in Cybersecurity
Advancements in AI will continue to shape the landscape of cybersecurity. Looking to the future, AI may alter the cybersecurity “arms race,” where attackers and defenders seek to one-up each other with new and more complex technology. The rise of quantum computing cuts both ways: it threatens today’s public-key encryption while also driving the development of quantum-resistant schemes that are much harder to break. Furthermore, as our world becomes more interconnected, with physical and digital systems increasingly intertwined, AI could be a useful component in developing comprehensive and resilient security systems that can withstand complex threats.
Businesses and organizations must be cognizant of how AI is reshaping cybersecurity and stay ahead of potential threats by mitigating them before they can cause any real damage.
The LastPass Threat Intelligence, Mitigation, and Escalation (TIME) team is focused on protecting our community by monitoring for, analyzing, and mitigating threats targeting our customers, our company, and our industry. The team has nearly 50 combined years of intelligence and cyber experience and firmly believes in information sharing and relationship building as the key to a successful intelligence program.
Our goal within LastPass is to provide timely and actionable intelligence to stakeholders that allows our security teams to protect our customers, their data, and the company. In addition to conducting analysis and informing our security teams on developments in the larger cyber threat environment, we are also working to automate our intelligence inputs into our partners’ processes and minimize the timeframe from threat awareness to mitigation.