ChatGPT’s Latest Trend Could Put Your Cybersecurity at Risk

Think twice before sharing this photo online.
April 29, 2025 / meritsolutions

If you’ve spent any time online recently, you may have seen a trend sweeping your social media feeds. For a while, it seemed everyone was posting an image of themselves as an action figure in a box, complete with accessories. In fact, people are still generating and posting these images of themselves on the internet.

To generate the image, people are turning to ChatGPT: The photos are entirely AI-generated, based on information ChatGPT knows about you. The more it knows, the more realistic your action figure will be, both in its likeness to you and in the accuracy of its accessories.

The problem is that this trend isn’t harmless fun. By posting one of these images to your social media feeds, you might be inviting future cyberattacks.

Social engineering preys on personal details

When you post anything personal on a public forum, you’re taking a risk. That’s because bad actors can (and will) use your personal information against you to break into your accounts and compromise your networks.

One pressing concern here is social engineering: Bad actors learn things about you, then target you via email, text, or phone call with that information in mind. Maybe they learn you’re interested in travel and pose as a company offering discounts on airfare. Perhaps they learn you like animals and reach out with “opportunities” to support pets in need.

Whatever the angle, bad actors craft cold calls designed to pique your interest. Then, if they hook you, they convince you to share more information that could help them access your personal accounts, or simply trick you into sending money directly.

These AI-generated images are just one more way for them to learn about you: If you post your AI action figure image on a public LinkedIn profile, for example, anyone who sees it can note every accessory the AI chose for you, and with them, multiple interests of yours they could use to target you.

Sharing data with AI could put you at risk

The other side of the coin is that sharing too much data with AI could put your privacy and security at risk. Unless you explicitly tell the service not to (many, including ChatGPT, offer an opt-out in their data-control settings), your conversations with an AI bot may be used to train and improve its models.

That includes any data you share with the model, such as photos of yourself or personally identifiable information. So if you upload an image of yourself and tell the AI three or four of your hobbies, that information may become training data that shapes future versions of the model.

This isn’t exclusive to this particular trend, or to ChatGPT alone; it’s how many AI models are trained and improved. Still, it’s worth being cautious about the data we willingly give away to AI programs.
