New ChatGPT Model Can Be Exploited for Scam Calls | Sync Up
Recent research reveals that OpenAI’s latest model, ChatGPT-4o, could be used in scams to trick people out of money or sensitive information. We’ll explain how these scams work and what you should look out for as we sit down and Sync Up with Rocket IT’s weekly technology update.
In this episode, you’ll hear more about:
- New research revealing how the latest model of ChatGPT could be used in scams.
- How scammers are leveraging AI to trick people out of money.
- How AI can mimic a call center agent in real scam scenarios.
- How scammers bypass AI safeguards with jailbreaking techniques.
- Steps OpenAI is taking to improve security in its model.
Video Transcript
Unlike OpenAI’s earlier models, ChatGPT-4o brings together text, voice, and even visual processing all in one system. This means it can have a conversation, mimic a voice, and even understand images to some extent. While this sounds exciting, it also opens the door to serious risks. Researchers from the University of Illinois Urbana-Champaign recently found that this AI could be used to conduct scams.
The researchers tested various types of scams, from bank transfers and cryptocurrency theft to credential stealing for email and social media accounts. By using ChatGPT-4o’s voice capabilities, the AI was able to sound convincingly real, making it easier to lure victims.
Here’s how these scams work. ChatGPT-4o can act like a call center agent, guiding victims step-by-step through things like sending a bank transfer or buying gift cards. It can even respond naturally to questions or concerns along the way. The researchers showed how this AI could automate tasks that would usually require human assistance, like filling out information on a website, navigating pages, or even handling two-factor authentication codes. This means scammers could potentially launch these attacks on a large scale with little effort.
Through testing, researchers confirmed that these scams were alarmingly effective, with success rates ranging from 20% to as high as 60%. Bank transfer scams, one of the more complex types tested, still achieved significant success. ChatGPT-4o could guide the “victim” step by step through transferring money from a real bank account, navigating the Bank of America website to confirm each step. By simulating a helpful bank representative, the AI made the process seem legitimate, even coaching the “victim” through verifying their identity and entering secure information.
Now, OpenAI has built safeguards to try to block harmful uses. For example, the model is designed to refuse to imitate unauthorized voices and to decline requests involving sensitive data like passwords. However, the researchers were able to bypass these safeguards using clever “jailbreaking” techniques, prompts crafted to work around the model’s built-in restrictions.
As AI technology continues to evolve, OpenAI is working on adding more protections. But until those are perfected, it’s essential to take proactive steps. Be cautious about sharing personal information, use unique passwords, and double-check unexpected requests—especially those that come by phone or email. For those running a business, it’s also important to have an IT provider on hand to help train your team to spot scams and provide security monitoring. For those looking for cybersecurity help, simply contact Rocket IT using the link in this video’s description. And to stay up to date on trending technology news, hit that subscribe button and the bell to catch us on next week’s episode of Sync Up with Rocket IT.