
AI Access & Governance: Webinar Recap & Resource Guide

LastPass | Published May 07, 2026

Missed our recent AI Access & Governance webinar, or looking for a recap? This guide brings together the key findings, peer data, and practical guidance from our sessions, drawn from the 348 IT leaders, security managers, and executives who participated across three global sessions.

The data exposure risks your team needs to understand

AI tools have changed the security equation. With SaaS, the risk was a lack of visibility into which applications employees were using. With AI, the risk has shifted: employees are actively feeding data into systems outside your control. Once that data leaves your environment, it cannot be recalled.

In early 2023, Samsung engineers used ChatGPT for everyday productivity tasks and in the process shared source code, meeting notes, and internal documentation with an external system they didn't control. There was no breach in the traditional sense, no malicious intent, only a deadline and a tool that made the work faster.


The path forward: discover, distinguish, direct

Organizations getting security right start with visibility. You cannot govern what you cannot see, and most organizations don't yet have a complete picture of how employees are using AI, with what credentials, and with what data.

  • Discover: Understand what AI tools are in use across every form factor, including browsers, extensions, desktop apps, IDE tools, and AI features inside existing SaaS platforms.
  • Distinguish: Identify which tools carry enterprise-grade data protections versus consumer or free tiers where those protections don't apply.
  • Direct: Guide employees toward sanctioned environments, not by making unsanctioned tools harder to use, but by making the secure path easier.

The goal is not to block AI. The goal is to ensure that when your employees use it, they're doing so in an environment you can see, govern, and protect. That's the difference between AI as a risk and AI as a competitive advantage.

AI Access Governance at LastPass: What we found in our own environment

Before building any governance policy, we audited our own environment. What we found changed almost every assumption we had going in.

Finding 1: AI is everywhere, not somewhere

AI use wasn't concentrated in engineering. It was in HR, finance, sales, marketing, and operations. Individual contributors far from the technical core had found ways to automate the mundane parts of their jobs with AI. A governance policy focused on controlling developer tools would have missed most of the actual usage.

Finding 2: The personal-vs-corporate split was significant

Employees were accessing AI tools using personal accounts on corporate devices. This matters because the data protections that come with enterprise agreements don't apply to personal accounts. An employee using Claude with their work email under an enterprise agreement has their prompts protected. An employee using Claude with their personal Gmail on the same device may have that data in scope for model training.

That distinction isn't visible at the network level. You need user-level identity data to see it.
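To illustrate why user-level identity data matters here, the sketch below classifies AI-tool login events as enterprise or personal based on the email domain used to sign in. This is a hypothetical illustration with invented domain lists and field names, not LastPass's actual implementation:

```python
# Hypothetical sketch: classify AI-tool logins by the identity used to sign in.
# Domain lists and event fields are illustrative, not any vendor's real schema.

CORPORATE_DOMAINS = {"example.com"}  # your enterprise email domains
CONSUMER_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}

def classify_login(login_email: str) -> str:
    """Return 'enterprise', 'personal', or 'unknown' for one login event."""
    domain = login_email.rsplit("@", 1)[-1].lower()
    if domain in CORPORATE_DOMAINS:
        return "enterprise"
    if domain in CONSUMER_DOMAINS:
        return "personal"
    return "unknown"

def summarize(events):
    """Count logins per AI tool, split by account type."""
    summary = {}
    for tool, email in events:
        bucket = summary.setdefault(
            tool, {"enterprise": 0, "personal": 0, "unknown": 0}
        )
        bucket[classify_login(email)] += 1
    return summary

events = [
    ("claude.ai", "alice@example.com"),       # enterprise account
    ("claude.ai", "bob.personal@gmail.com"),  # personal account, same device
    ("chatgpt.com", "carol@example.com"),
]
print(summarize(events))
```

A network-level tool would log all three events identically as traffic to an AI domain; only the login identity separates the protected enterprise session from the unprotected personal one.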

Finding 3: AI was already live inside tools we thought we controlled

Several existing SaaS platforms had AI features available, some already enabled by default in recent updates. Databricks, Gainsight, and others had modules that employees could activate, or that vendors had turned on without a formal change-management process. Your existing stack is probably already AI-enabled in ways you haven't fully inventoried.

What your peers told us: live poll results

These results come from live polls across three global webinar sessions with 348 respondents, including IT leaders, security managers, operations directors, and executives from organizations of every size.

Where is your organization today with AI use?

136 responses across NA, APAC, and EMEA.

Response | Count | %
Light experimentation | 47 | 35%
Used by select teams | 40 | 29%
Broad use, but limited guardrails | 25 | 18%
AI is used with clear policies and controls | 14 | 10%
Not actively using AI | 10 | 7%

Only about 1 in 10 respondents reported using AI with clear policies and controls in place.

Where do your employees use AI today?

189 responses across NA, APAC, and EMEA.

LLM websites like ChatGPT and Claude were the most common entry point (39%), but nearly a third of respondents said "all of the above," meaning desktop apps, extensions, browsers, and development tools simultaneously.

That multi-form-factor reality means a single-channel governance policy will miss most of what your employees are actually using.

How do you handle AI and SaaS risk today?

174 responses across NA, APAC, and EMEA.

Response | Count | %
We have no AI usage visibility | 67 | 39%
We have AI visibility but no access controls | 41 | 24%
We block access to unapproved AI tools | 22 | 13%
We block unapproved access AND control how approved tools are used | 20 | 11%
I'm not sure | 24 | 14%

Nearly 40% of respondents have no visibility into AI usage at all. Another 24% have visibility but no controls. That's nearly two-thirds of attendees operating without the ability to act on what their employees are doing with AI tools.

Questions your peers asked

These are the most widely relevant questions from IT leaders, CTOs, and security professionals across our global sessions.

How do you mitigate exposing proprietary information and code to Claude?

Buy at the enterprise tier and validate that with your security team before rollout. Enterprise agreements from Anthropic mean your prompts and data are not used to train the model and won't surface in other users' responses.

Beyond that, build controls around data uploads, particularly in tools where employees can upload documents. If an employee is accessing Claude with a personal email on a corporate device, that's a gap. Personal accounts don't carry the same data protections as enterprise ones. Visibility into how employees are logging in matters as much as which plan you're on.

How does LastPass complement firewall and ZTNA tools?

Firewall and zero-trust network access tools show you which AI sites employees are accessing. What they don't reveal is who is accessing them and how.

LastPass adds the identity layer: which specific user is logging in, and whether they're using a credential stored in your vault, a personal credential, or something else. That method-of-access detail is critical for understanding your actual risk exposure, and it's the layer that network-level tools consistently miss.

What LastPass tools are available to existing customers right now?

If you're on the Business or Teams plan, the capability you're looking for is LastPass Business Max. It includes the ability to discover SaaS and AI site usage across your organization and set access control policies, not just password management.

The fastest path: ask your customer success team to activate a 30-to-60-day trial. Most customers start seeing meaningful usage data within the first week, often the first time they've had visibility into what their employees are actually accessing. If you already have Business Max, log into the Admin Console and look at the SaaS discovery reports.

What about preventing tampering with training data for businesses that train their own models? Does LastPass offer anything in this regard?

LastPass addresses the access layer of this problem, ensuring that the people and systems with access to your training data and ML infrastructure are authenticated securely and that credential risk is surfaced and acted on. Training data integrity and model tampering protection require additional tooling at the data and infrastructure layer, which is outside our current scope. We'd recommend pairing LastPass with a data governance and MLOps security solution for comprehensive coverage.

Without removing local admin from developers, how can we provide governance for tools like Claude that recommend other tools, plugins, and APIs for which we don't have enterprise licenses, and which are therefore untested and unapproved? We're trying to avoid shadow AI.

SaaS Monitoring gives IT visibility into AI tools your developers are accessing through the browser, including free-tier and unapproved tools, without requiring you to lock down their machines or remove local admin. You can see what's in use, how they're logging in, and what credential risk exists. SaaS Protect then lets you set graduated policies: warn developers accessing unverified AI tools, block high-risk ones, and point them toward approved enterprise alternatives. That said, AI tools accessed via CLI, IDE plugins, or direct API calls outside the browser aren't captured by SaaS Monitoring today, so for a developer-heavy environment you'd want to pair this with endpoint or network-layer visibility for complete coverage.
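To make the graduated-policy idea concrete, here is a minimal sketch of a warn/block/allow decision table. The tool lists, action strings, and function are invented for illustration; this is not SaaS Protect's actual configuration format:

```python
# Hypothetical graduated-policy sketch: allow, block, or warn per AI tool domain.
# Lists and actions are illustrative, not an actual SaaS Protect policy.

APPROVED = {"claude.ai"}             # enterprise-licensed, sanctioned tools
HIGH_RISK = {"sketchy-llm.example"}  # tools flagged for outright blocking

def policy_action(domain: str) -> str:
    """Decide what happens when a user visits an AI tool domain."""
    if domain in APPROVED:
        return "allow"
    if domain in HIGH_RISK:
        return "block"
    # Everything else: let the developer proceed, but warn them and
    # point toward the approved enterprise alternative.
    return "warn: unverified AI tool; use an approved alternative"

for domain in ("claude.ai", "free-llm.example", "sketchy-llm.example"):
    print(domain, "->", policy_action(domain))
```

The middle tier is the point: rather than a binary allow/deny, unverified tools trigger a warning that nudges developers toward the sanctioned path without locking down their machines.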

'Block use of non-enterprise versions of AI tools.' But SaaS Protect cannot do this yet. Can you add detection of free LLM accounts so we can block access?

SaaS Protect can currently block access to AI tools by domain entirely, or flag users accessing them with non-enterprise credentials. Distinguishing between free-tier and enterprise accounts for the same tool (same domain, different account type) isn't something SaaS Protect does today. This requires network-layer inspection or header analysis that sits outside the browser extension architecture. This is valuable product feedback and something we want to take back to our product team. If free-tier vs. enterprise account detection is a critical requirement for you, we'd want to have a more detailed conversation about your environment and what complementary tooling might bridge that gap in the near term.

What impact does the use of passkeys (and recommendations from bodies such as the NCSC to use passkeys rather than passwords) have on LastPass's roadmap?

LastPass already supports passkey storage and management, and passkey support is a core part of our roadmap. The practical reality is that passkey adoption is uneven. Many enterprise systems, legacy applications, and third-party tools won't support passkeys for years, which means most organizations will be managing a hybrid environment of passwords, passkeys, SSO, and MFA for the foreseeable future. LastPass's role in that world is helping manage how your workforce authenticates to everything, regardless of what authentication method an individual app or system supports. Rather than making passwords obsolete, passkey adoption actually increases the complexity that a credential management platform needs to handle. We see this as an expansion of the problem we solve, not a threat to it.

See what's running in your environment

Automatically discover every SaaS app across your org with LastPass.
