Beware: Phishing Attacks Enter the Deepfake Era 

Bob’s boss was asking for something really strange. In all the years Bob had worked for Alice, she had never asked for a wire transfer of this magnitude. But there she was in the Zoom meeting, in the flesh (well, digital flesh anyway). How was Bob to know that wasn’t really Alice?

In the digital dimension, threats to our lives aren’t always the mortal kind. They also lurk behind screens, ready to exploit our human weaknesses. Those are the ones we too often overlook. While phishing attacks are nothing new, they have evolved. Welcome to the deepfake world. Oh, is that word new to you? Well, buckle up. You need to learn it… and fast.

A deepfake is a video or audio clip of you or someone you know, created by artificial intelligence (AI) out of parts and pieces of other audio or video. With deepfake voice and video capabilities, cybercriminals can now mimic your trusted contacts (like your spouse) and authority figures (like your boss) with alarming accuracy, aiming to deceive and manipulate you. If you use the internet for banking or email, you are a target. You need to understand the risks and take precautions to safeguard your online identity and personal information.

Deepfake technology uses AI to combine audio and video recordings, seamlessly grafting one person’s likeness onto another’s voice or image. This tool, once the stuff of Mission Impossible movies, is now real. And it has been weaponized by cybercriminals seeking to exploit your trust in familiar voices and faces.

Imagine receiving a phone call demanding that you confirm sensitive account information. The voice on the other end sounds EXACTLY like your boss, complete with the cadence and intonation you’ve come to recognize. Or perhaps you receive an email from your biggest client requesting an urgent wire transfer, accompanied by a convincing video message imploring immediate action. In both scenarios, the other person isn’t a person at all. It’s an AI impostor, leveraging deepfake technology to deceive and manipulate you.

The consequences of falling victim to a deepfake phishing attack can be dire: financial fraud, identity theft, reputational damage, and compromised personal data. The ramifications run deep. Being deceived by someone you trust, even a fake someone, creates a psychological fissure that erodes your confidence in digital communications and deepens feelings of vulnerability and distrust.

The threat posed by deepfake phishing attacks is unsettling. But there are proactive steps you can take to mitigate risks and bolster your defenses. 

Verify Identities: Before responding to any request for sensitive information or a financial transaction, independently verify the identity of the sender through an alternative channel. Contact your bank or employer directly by phone, using a number you already know to be legitimate, to confirm the request is genuine.

Exercise Caution: Treat unsolicited emails, phone calls, and messages with profound skepticism. This is especially true if they contain urgent or unusual requests. Scrutinize the content for inconsistencies or irregularities that may indicate a phishing attempt.

Stay Informed: Find someone you trust to keep you informed about emerging cybersecurity threats and trends, including advancements in deepfake technology. Educate yourself and your loved ones about the risks posed by phishing attacks.  

Use Multi-Factor Authentication: Implement multi-factor authentication wherever possible to add an extra layer of security to your online accounts. This additional step can help thwart unauthorized access, even if your credentials are compromised. 

Report Suspicious Activity: If you encounter a suspected deepfake phishing attempt, report it to the relevant authorities, such as your IT department, cybersecurity agency, or the Federal Trade Commission. 

The emergence of deepfake technology underscores the evolving nature of cyber threats and the importance of proactive cybersecurity measures. By remaining vigilant, verifying identities, and staying informed, you can safeguard yourself against the perils of deepfake phishing attacks. Together, we can navigate the digital landscape with resilience and confidence, thwarting cybercriminals at every turn. 

The original article was published in the Sierra Vista Herald and can be found here.

The Cyber Guys: Swatting customers, cyber hackers’ new extortion method

What you are about to read is fiction, but the scenario is feasible and, in a few months, may be likely.

Bob was sitting on the couch watching the Chiefs play the Bills. The Bills had just scored a touchdown, bringing the score to Bills 17, Chiefs 10. Suddenly the front door burst open and a heavily armed group stormed into his home. In moments Bob was face down on the floor, his arms zip-tied behind him. Bob was under arrest.

Bob wasn’t guilty of a crime. He was the victim of a cruel prank called “swatting.” Someone had accused Bob of posting extreme anti-government threats on social media. Bob’s social media account had been compromised, then filled with anti-government rants. It was enough evidence to justify the chaos you just witnessed.

Why was Bob targeted? Unfortunately, he was a client of a medical center that had recently fallen victim to a cyber-extortion group. Patient information (including Bob’s) was stolen, and the group promised that if the ransom wasn’t paid, it would make life hell for the patients.

Because Bob had the bad habit of reusing his passwords, it was trivial for the threat group to take over his social media account using his stolen credentials and make those false posts. Bob became the first of many to endure such humiliation.

The story is fictitious. But the threat is real. Swatting as a service is the latest tactic threat actors are using to coerce businesses into paying cyber ransoms. You are truly just a pawn. Because cyberattack reports are so common today, we’ve become overwhelmed and desensitized to the implications of the threat. But now the implications are physical: visits from actual police to your home. So far, these police visits have resulted in only momentary inconvenience for the victims and a waste of police resources. But it is conceivable this will escalate.

You are probably thinking, “There’s no way this could happen. Who would ever go to such an extent just to get money?”

The reason you think this is that you are not evil. But there are truly evil people who absolutely don’t care about the pain this causes innocent people. The effort required of a threat actor to conduct a campaign like the one described above is minimal, especially in the age of artificial intelligence.

An AI bot can easily craft the content for social media posts at scale. The human’s effort is then as little as copying and pasting that content into a compromised social media account.

But you can do something to make sure it isn’t you who suffers. First, if you don’t absolutely need social media, you can cancel your accounts. One principle of cybersecurity is “if you don’t need it, remove it.” If you do use your social media accounts, make sure you use a password manager like Bitwarden to create and securely store your passwords.

Lastly, you do have a right to ensure your data is secure. The tactic described above has been used against medical centers. Your protected health information is governed by the Health Insurance Portability and Accountability Act (HIPAA). You have the right to ensure your medical provider is protecting you. Ask your provider for evidence that it is doing more than the bare minimum. If it refuses to show you, then you may consider changing doctors.

I know this sounds extreme, but so is “swatting.”

The original article was featured in the Sierra Vista Herald and can be found here.