Weak credentials recently gave researchers access to years’ worth of McDonald’s applicants’ data. While McDonald’s McHire AI chatbot lightened the administrative load for the multinational corporation’s HR department, it also gave hackers an opening to access job applicants’ information going back years. This incident demonstrates that the “intelligence” in AI is no guarantee of basic cybersecurity measures, like strong passwords. The breach raises questions about data privacy and transparency for job applicants, as well as the security of AI-enabled tools. We’ll take a closer look at how this simple brute-force attack happened and what you can do to avoid falling victim to a similar hack.
What happened?
Researchers discovered a vulnerability in McHire, McDonald’s chatbot job application platform built by AI software firm Paradox.ai. The AI chatbot screens applicants, collects their contact information and resumes, and handles the basic interactions an HR department typically would. The breach was enabled by a Paradox.ai administrative account protected by an appallingly weak username and password: “123456”. That string is one of the most common passwords in existence, making the account practically effortless for threat actors to crack. Once the researchers gained access to the admin account, they could view every application submitted to McDonald’s over the past few years. They were also able to complete and manipulate a test application on the backend and, by simply changing the applicant ID number on their own application, access other applicants’ chat logs and contact information. This was due to an insecure direct object reference (IDOR) on an internal API. Luckily, no third party other than the researchers accessed the account.
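To make the IDOR concrete, here is a minimal, hypothetical sketch (not Paradox.ai's actual code): the vulnerable lookup returns whatever record matches the client-supplied applicant ID, while the fixed version also checks that the record actually belongs to the requesting user.

```python
# Hypothetical in-memory "database" of applications for illustration.
APPLICATIONS = {
    101: {"owner": "alice", "chat_log": "Hi, I'd like to apply..."},
    102: {"owner": "bob", "chat_log": "What shifts are available?"},
}

def get_application_vulnerable(session_user, applicant_id):
    # VULNERABLE (IDOR): any authenticated user can fetch any record
    # just by guessing or incrementing the applicant ID.
    return APPLICATIONS.get(applicant_id)

def get_application_fixed(session_user, applicant_id):
    # FIXED: authorize the request by verifying record ownership.
    record = APPLICATIONS.get(applicant_id)
    if record is None or record["owner"] != session_user:
        return None
    return record
```

With the vulnerable lookup, a user logged in as `alice` can read `bob`'s record (ID 102); with the fixed lookup, the same request returns nothing.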
Had someone malicious exploited the data, years’ worth of McDonald’s applicants could have become targets of phishing attacks and scams. For instance, attackers could hypothetically have used the list of applicants to run payroll scams or leveraged applicants’ sensitive information in other attacks.
Root causes: How did it happen?
The root cause of this breach was poor identity and access management of a third-party vendor's administrator account. Enforcing strong password guidelines and MFA could have protected the admin account from the start. Good password hygiene is typically the first line of defense against attackers, and people who use weak or reused passwords face a heightened risk of data breaches, extortion, and more. In fact, over half of all reported data breaches resulted from weak passwords. Hackers can easily obtain exposed credentials from previous breaches and infostealer logs sold on the dark web. Using weak passwords makes you an easy target for hackers looking for a way into your accounts, whether to target you directly or to reach other sensitive systems you may have access to.
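To see why a password like “123456” offers essentially no protection, here is an illustrative sketch of a dictionary attack. Both the tiny wordlist and the `check_login` function are hypothetical stand-ins for the massive leaked-password lists and real login endpoints attackers actually use.

```python
# Illustrative only: a tiny stand-in for the huge leaked-password
# lists attackers feed into automated login attempts.
COMMON_PASSWORDS = ["123456", "password", "qwerty", "111111", "letmein"]

def dictionary_attack(check_login, username):
    """Try each common password; return the first one that works."""
    for candidate in COMMON_PASSWORDS:
        if check_login(username, candidate):
            return candidate
    return None

# Hypothetical login check for demonstration purposes.
def check_login(username, password):
    return username == "admin" and password == "123456"
```

Here `dictionary_attack(check_login, "admin")` succeeds on the very first guess, which is exactly why common passwords take attackers seconds, not years, to crack.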
Additionally, the breached admin account appeared to be inactive, with its last login reportedly in 2019. Inactive accounts are prime targets for cyberattacks because they can provide attackers with a foothold in the network to move laterally and access critical systems. Admin accounts are especially enticing for hackers since oftentimes they have elevated access permissions, allowing attackers to infiltrate deeper into systems with less effort.
A warning: Don’t be blinded by AI
As AI-powered bots are implemented to streamline hiring and other processes, this new technology also introduces vulnerabilities: in this case, web-based flaws and a simple administrator username and password (“123456”) that served as the keys to a data breach. While AI can help boost productivity and security defenses, unwavering excitement can simultaneously blind people to potential risks. This breach carries a warning: don’t let your AI enthusiasm distract you from basic cybersecurity best practices. Sometimes the promise of AI isn’t all it’s cracked up to be, especially when these tools open businesses up to new vulnerabilities.
As AI usage has picked up, there have been several reports of AI data leaks due to misconfigurations, training data exposure, model extraction, in-memory leaks, adversarial attacks, and more. That should be a reminder for companies to closely evaluate their tool suite: as with any new technology, security professionals should rigorously vet third-party AI apps and tools before adopting them.
What can you do?
- Use strong, unique passwords. Think of this as a single bolt lock to secure your door. See more password hygiene tips in our blog post “How to Elevate Your Password Hygiene: Tips and Tricks for Protecting Your Credentials.”
- Another simple but crucial step to securing your online accounts is implementing multi-factor authentication (MFA). This is like adding a padlock on top of your locked door (with strong passwords). There are many forms of MFA available, including SMS text, hardware tokens, push notifications, and biometrics. Two-factor authentication (2FA) is a common form of MFA that combines your password with either a generated code or biometric verification. Learn more about the pros and cons of common 2FA methods.
- Password managers can help protect accounts by generating strong, unique passwords for each account and storing them securely in an encrypted vault. This helps prevent the use of weak, easily guessed passwords.
- As far as third-party vendors are concerned, your security is only as strong as its weakest link. Businesses should evaluate their vendors' security standards, including their password hygiene.