- Posted by Action Catalyst
- On October 31, 2023
- 0 Comments
- AI, artificial intelligence, author, Business, ChatGPT, cybersecurity, fraud, scams
Peter Warmka, former CIA agent, professor at Webster University, and the founder of the Counterintelligence Institute, returns to the podcast with updates on protecting your business and professional networking tools from bad actors utilizing the latest in AI, how safeguards against fraud have potentially led to more fraud, and what upcoming events on the calendar could signal an uptick in scams.
Peter Warmka is the founder of the Counterintelligence Institute and author of the books “Confessions of a CIA Spy: The Art of Human Hacking” and “Why Are You Messing With Me?”
Peter Warmka is a retired senior intelligence officer with the U.S. Central Intelligence Agency (CIA), where he specialized in clandestine HUMINT (human intelligence) collection. Following his CIA career, Warmka has made it his personal mission to help U.S. Government departments, NGOs, major corporations, and academic institutions understand and effectively combat the ever-increasing threat of security breaches resulting from the artful manipulation of insiders by those seeking to steal proprietary information and personal data.
Warmka is a Certified Fraud Examiner (CFE) and Certified Protection Professional (CPP). He is a certified instructor with the CIA-U and an adjunct professor with Webster University where he lectures on social engineering, intelligence and counterintelligence in the Master’s in Cyber Security program.
Warmka has authored several articles for publications within the fields of security and fraud and presented on these topics at global conferences held by the Association of Certified Fraud Examiners (ACFE) and ASIS International.
Learn more at Counterintelligence-Institute.com.
The Action Catalyst is presented by the Southwestern Family of Companies. With each episode, the podcast features some of the nation’s top thought leaders and experts, sharing meaningful tips and advice. Learn more at TheActionCatalyst.com, subscribe below or wherever you listen to podcasts, and be sure to leave a rating and review!
SUBSCRIBE TO OUR RSS FEED: https://feeds.captivate.fm/the-action-catalyst/
SUBSCRIBE ELSEWHERE: https://the-action-catalyst.captivate.fm/listen
(Transcribed using A.I. / May include errors):
Host: We are joined once again today by Peter Warmka, a former CIA agent who specialized in what he calls human hacking and now uses that knowledge to help businesses assess their vulnerability to foreign intelligence groups, criminals, and other threats. Peter is a professor at Webster University, where he lectures on counterintelligence and cyber security, and is the founder of the Counterintelligence Institute.
Since you last joined the program back in February of last year, there’s been an artificial intelligence explosion with ChatGPT leading the charge. How have AI tools changed the sort of threats we should be looking out for?
Peter Warmka: AI goes back to the 1950s. The term was coined when the first computers came out, right? Artificial intelligence. So a lot of this was developing, but it really wasn’t out in public. The general public didn’t know very much about it until all of a sudden you had ChatGPT released back in November, and then it was instant everything, right?
And now you get to the point where people are looking at doom and gloom, that the machines are going to take over mankind. I don’t see that happening. Technology is neither good nor evil; it depends on the intent of the person, the human behind the technology, right? Well, we can already see it in an area that I focus on a lot: human hacking, social engineering.
First of all, we try to gather as much information as possible on a target, whether it’s an individual or a company. Some of these tools, such as ChatGPT, have certain guardrails to ensure they’re not used for malicious intent. They would not allow you to do extensive searches on ordinary individuals to collect on them. They might give you information about celebrities, high-profile individuals, but for the average Joe and Sally, they’re not pulling that information. But the technology to do it already exists, and that technology in the hands of someone else can easily pull up a lot more information on all of us that can be used for these types of operations.
We’ve seen for a long time now that one of the biggest vectors for spear phishing has been the use of social media, especially LinkedIn. There are currently around 930 million LinkedIn profiles, of which 5 to 7 percent are fake. A lot of those fake profiles are made by foreign intelligence services and criminal groups, and for many of them English is not their native language, so you might be able to find some that are not quite well worded, or they copied and pasted that text from somewhere else and it’s perfect.
But now with ChatGPT, they can come up with a great, well-worded profile. The pictures previously were taken from other people’s profiles, and by doing a reverse image search, you could find that picture under someone else’s name. But now computer-generated pictures of people are one of a kind, and they’re almost near perfect.
So these profiles are stronger and easier to make, and they’re going to be used more and more to reach out to people for social engineering attacks. Then there’s vishing. If people don’t know what vishing is, it’s about using the telephone to impersonate whoever the threat actor wants to impersonate, whether it’s your boss, a coworker, your client, or your financial advisor, to ask specific sensitive questions, trying to procure sensitive information, or to get an individual to take an action that could be very detrimental, such as sending a wire transfer.
The boss calls: we need to make this urgent wire transfer. There have been a lot of cases of that. But now with voice cloning, this is becoming much more dangerous. There have been cases already, like a bank manager in the UAE, where the threat actors spoofed the caller ID number, cloned the voice of the client, and requested a $35 million transfer.
The money was transferred out. People have heard of business email compromise, creating an email that looks like it’s coming from the boss. This is much more effective, because you’re getting somebody on the line who believes it’s you; they hear your voice. For example, if you ask me a question, or if I see that you’re kind of hesitating, I can talk you into it. With an email, it’s just one shot, and I don’t know if you’re going to do it or not. But with a voice, I can hear you, I can walk you through it, and almost with certainty I can get you to undertake the action that I want you to. So voice cloning is huge. About a year ago, it took maybe 50 or 60 seconds of sampling the voice to clone it.
Now it’s a matter of two or three seconds. And how can I get your voice? If you’re on a podcast, I can get your voice, right? But the average person is probably not out there very much on podcasts or on YouTube. So I just call your phone number, and when it goes to voicemail, a lot of people are using their own voice for the greeting. It’s not that hard to get samplings of people’s voices, and technology is making it easier and easier for threat actors to be effective.
Host: So is the best defense against AI just more AI?
Peter Warmka: Yeah, in the short term. I mean, long term, different types of AI detection could be developed, and are being developed.
Deepfake detection tools, if you want to call them that, are rolling out, but they’re always going to be behind; they’re never necessarily going to catch up to the other technology that’s coming out. And the other problem is the lack of regulation. The U.S. is so far behind in proper regulation, whether by the states or the federal government.
So right now, the most important thing is public awareness: increasing the public’s ability to recognize and question, is this real or is this not?
Host: So how can we tell if who or what we are dealing with is actually another person or AI?
Peter Warmka: There are very few tells anymore. I think the important thing is the following.
The threat is not when we reach out to somebody else. The threat is when we have something incoming, whether it’s an email, a text message, an approach via social media, or a phone call. Or even a face-to-face encounter. The potential threat is when someone reaches out to us whom we don’t know, or who may be impersonating somebody we think we know, right?
Like the voice. We have to decide when we need to verify, and when it really matters. If something is coming in and they’re asking questions about very sensitive information, or asking you to do something that, if fraudulent, could be very detrimental to you or the firm, such as a wire transfer or, for example, issuing a facility access pass to a very sensitive area of the company. If that’s fraudulent, you have a threat actor who all of a sudden is wandering around in your company or has been granted access to the IT networks. In those cases, you have to sit back and say, let me first verify before trusting. That’s what we have to do. But there is a segment of the population that I’m looking at that really is the most vulnerable and victimized in society.
And that’s our seniors. A lot of times they’re not quite up to date on this type of technology and the different things that can happen, so they’re more trusting. They’re also the most attractive targets, right? They have a lot of money, and they’re so accessible. What’s really growing are these grandparent scams.
You’ve maybe heard a few of these coming out, but they’re also using voice cloning. All of a sudden the grandparent will get a telephone call that sounds like it’s coming from their grandchild, saying, hey, I’ve got an emergency. I’ve been arrested, or I’ve been in a car crash.
This is what’s really growing considerably now: these voice cloning scams. They use the influence technique of fear. There’s a problem, whether it’s a family member who’s in trouble and needs your help, or you might be in trouble yourself; you hear these scams where the call claims to be from the Social Security Administration or from the IRS, saying you’re in trouble. It’s the fear factor that really gets their attention and gets them to do something quickly without really assessing it.
If they had time, they would think it through. But using fear, plus what they call a scarcity technique, that if you don’t resolve this issue right now it’s going to get worse, those scams seem to be working very, very well.
Host: Is there any seasonality to scams? Are people more likely to be targeted at particular times of the year?
Peter Warmka: One of the biggest factors is what’s going on in the world, right? Especially if there’s been a major event, like a natural disaster, an earthquake in some location, or fires, where there’s suddenly a need to collect money from people to help.
So all of a sudden you’ll get all kinds of approaches from people claiming to represent non-profit groups that are collecting this money. Especially when it comes to seniors, that’s very effective, right? Events are one thing; the time of year matters too, like elections.
When COVID started, people were hungry for information. Everybody was trying to find as much information as possible, and there was a huge bump in scams and hacking attempts because people were pursuing information from sources that were not trustworthy.
There are also specific times of the year, like the holidays, but that’s more when it comes to consumer shopping scams. If you go online looking for something, even on Google, it’s going to show you all the different companies so you can compare prices. Some might be almost identical across multiple outlets, but all of a sudden there’s one that’s like half the price.
You click on that link, and if you just go immediately to buy it, you’re giving up your credit card, your name, and your address, and that’s it. And if you do further research into the website, there’s probably not even a physical address, just a name, right? One of the problems, in my opinion, fortunately or unfortunately, is that people are so used to not having to really pay for fraud.
It’s the credit card companies or the insurance that pays the price. So I think if people were more accountable and felt the financial burden when it comes to this type of consumer fraud using a credit card, there might be a little more concern.
Host: You’ve just released a new book aimed at one of the most vulnerable parts of the population. What’s it called and where can people find it?
Peter Warmka: The book is called Why Are You Messing With Me? Senior Survival Guide on Fraud, Privacy, and Security. It starts with privacy, because the more that’s known about us, the more we can be approached by different fraudsters. Then it goes through security issues regarding physical security in your home and physical security on the street.
And it goes through all the different types of security, how you might be approached via telephone or online, and the different types of scams. Each of these areas is broken down into different chapters, and each starts with a real story of what happened to a victim. It then provides best practices, ones you might decide to incorporate into your life right now.
And it’s available on Amazon.
Host: Peter, thanks for joining us once again and stay safe out there.
Peter Warmka: Well, thank you very much. Take care.