How Dark AI Can Jeopardize Your Business

December 18, 2023
meritsolutions

The AI systems you hear about on a daily basis aren’t inherently dangerous. ChatGPT, for example, is built by a team of developers who want the program to help people, not hurt them. As such, there are safeguards in place (and in development) to ensure there are limits to what the chatbot can do, and that the actions it takes always trend in a positive direction.

However, that isn’t the case for all of AI. Artificial intelligence, like all tech, is dependent on the intentions of its developers. And seeing as there are nefarious developers out there creating malicious programs each and every day, there are, unfortunately, malicious AI systems being developed as well. This is what’s known as “dark AI.”

What is dark AI?

Simply put, dark AI is the practice of creating AI systems to carry out malicious activities, or of exploiting existing AI systems to the same end. Perhaps developers want to build an AI system that can generate malware (and potentially execute it on their behalf). Or perhaps bad actors find a loophole in a platform like ChatGPT that enables it to act maliciously.

There are many functions dark AI can execute, whether it’s intentionally designed or not. Bad actors are already using existing AI platforms to generate misinformation via images and video: Deepfakes, for example, allow video creators to swap the faces of people in a video with whoever they want, so it can look like someone was doing something they really weren’t.

In another twist, AI programs can falsify the voices of real people: Combined with deepfakes, bad actors can make a video of someone, say, an important politician, giving a speech they never actually made. But because it looks like the politician is giving the speech, and it sounds like their voice, some (or many) people may be tricked by the false footage.

Dark AI can also be used to write the words in malicious messages. Phishing emails and texts have traditionally been pretty easy to spot, because bad actors would write them with poor spelling and grammar. But now, even if the bad actors don’t know the language their target speaks, AI can write something convincing for them. All they have to do is tell the AI program what they want it to say, and it’ll not only translate the ideas, but draft a compelling reason why you should click on the link in the email. (Do not click on the link in the email.)

How to protect your business from dark AI

Dark AI sounds intimidating, and the technology behind it (and AI in general) is impressive. However, dark AI’s tactics can be thwarted by good cybersecurity practices.

First, think critically about anything you see online. If you see an image or video, especially one that seems designed to elicit a strong emotion, investigate it closely. Are you sure it’s 100% real? Look for anything “wrong” with the image or video, such as poor quality, shaky elements, and incorrect lip movement. Check the source of the content, then check other sources to see if they agree the content is legitimate.

Second, always verify who is sending you messages. If you don’t know the sender, exercise caution. Don’t click on links in emails and texts from unfamiliar contacts, and never give away MFA codes or passwords. If in doubt, reach out to the sender directly through a channel you trust. Your boss is not emailing you about wiring money, and your bank wouldn’t send you an MFA code you didn’t prompt first.
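For teams that want to automate part of that sender check, here is a minimal sketch of the idea in Python. The function name and the matching rule are illustrative assumptions, not a complete defense: it only flags emails whose links point somewhere other than the sender’s claimed domain, which is one common phishing tell.

```python
import re
from urllib.parse import urlparse

def links_match_sender(sender_email: str, body: str) -> bool:
    """Rough first-pass check: return True only if every link in the
    message body points to the sender's claimed domain (or a subdomain).

    This is a hypothetical helper for illustration; real phishing
    defenses also need header, reputation, and content analysis.
    """
    sender_domain = sender_email.rsplit("@", 1)[-1].lower()
    for url in re.findall(r"https?://\S+", body):
        host = (urlparse(url).hostname or "").lower()
        # Accept the exact domain or any subdomain of it.
        if host != sender_domain and not host.endswith("." + sender_domain):
            return False
    return True
```

A mismatch (say, a message claiming to be from your bank whose link goes to an unrelated domain) is exactly the “if in doubt, reach out to the sender directly” case described above.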
