Table of Contents
- From Milk Cartons to Machine Learning
- The Bored Panda Project: Giving 19 Missing People a New Digital “Today”
- AI and Missing-Person Searches: Beyond Beautiful Portraits
- The Hard Part: Ethics, Consent, and the Risk of Getting It Wrong
- How You (Yes, You) Can Help Missing-Person Efforts
- What I Learned from Using AI to Age 19 Missing People
When most people think of AI-generated images, they picture goofy filters, hyper-realistic cats in sunglasses, or fantasy portraits that look way cooler than any real-life ID photo. But in one powerful Bored Panda story, AI isn’t used for memes or makeovers; it’s used for something far more serious: imagining how missing people might look today in the hope that someone, somewhere, will recognize them.
The project, created for a campaign in partnership with a dairy company and a missing-persons organization, takes old photos of people who disappeared years ago and uses artificial intelligence to “age” their faces forward. It’s a modern twist on those milk-carton photos from the 1980s, only this time the carton has been swapped for social media and digital art platforms, and the artist’s brush is replaced by neural networks and careful human editing.
In this article, we’ll unpack what it really means to use AI this way: how it works, why these images matter, what we need to be careful about, and what I personally learned while working on age-progressed portraits of 19 missing people. Spoiler: it’s emotional, complicated, and surprisingly hopeful.
From Milk Cartons to Machine Learning
Long before AI, families and investigators relied on simple but powerful tools: photographs and public awareness. In the United States, the idea of putting photos of missing children on milk cartons took off in the 1980s. Breakfast tables became bulletin boards. Millions of people saw those faces every morning, and awareness became part of everyday life.
As technology evolved, so did the images. Instead of relying only on the last known photo, forensic artists started using age-progression techniques: manually redrawing faces to reflect how a child might look several years later. Organizations like the National Center for Missing & Exploited Children (NCMEC) helped pioneer these methods, using a mix of art, anatomy, and photography to create updated portraits that sometimes led directly to reunions.
Now, AI has entered the scene. Instead of slowly modifying faces by hand, algorithms can generate age-progressed versions in minutes. But as fast and impressive as this sounds, there’s an important truth to remember: these images are still educated guesses, not crystal balls. They’re tools to trigger recognition and memory, not definitive “before and after” photos.
Why These Images Matter So Much
Imagine trying to recognize a child who disappeared 20 years ago. They’re no longer the tiny face on the poster; they’re an adult with new features, different hair, and a changed expression. Without an updated image, your brain has to do all the work of “aging” them in your mind, and most of us aren’t that great at forensic imagination.
Age-progressed and AI-generated portraits bridge that gap. They:
- Help the public visualize what a missing person might look like today.
- Reignite attention for cold cases that risk fading from the news cycle.
- Give families a renewed sense of momentum, even when years have passed.
The goal is simple: get the right face in front of the right eyes at the right time. If even one person says, “Wait… I think I’ve seen them,” the effort is worth it.
The Bored Panda Project: Giving 19 Missing People a New Digital “Today”
In the Bored Panda feature “I Used AI To Predict How 19 Missing People Would Look Today, In Hopes Of Finding Them,” the artist takes real cases of people who disappeared years ago and uses AI to generate what they might look like now. The portraits were part of a broader campaign inspired by those milk-carton days, updated for the era of social media feeds and viral visuals.
The idea is both simple and profound:
- Start with original photos from the time the person went missing.
- Use AI tools to “age” their appearance based on realistic patterns of aging.
- Refine the AI output by hand so the images stay grounded in reality, not fantasy.
- Share the portraits widely to help spark recognition and leads.
Each image becomes a bridge between past and present: the same eyes, the same bone structure, but with lines, maturity, and subtle changes that match the passing years. It’s not just about making people look older; it’s about making them look plausibly like themselves.
How AI Age Progression Actually Works (Without the Sci-Fi Gloss)
Under the hood, AI age progression looks less like magic and more like pattern recognition on steroids. Modern systems are typically trained on huge datasets of faces at different ages. Over time, they learn how features tend to change as people grow older: how cheeks lose volume, how jawlines sharpen or soften, how wrinkles appear, and how hairlines and color may shift.
When you feed in an old photo, the algorithm doesn’t “know” the person, but it knows millions of examples of how similar faces age over time. It then applies those patterns to the original image, producing a future version that feels believable. In some projects, AI models are combined with manual editing in tools like Photoshop, letting artists nudge the result closer to the unique look of the person instead of a generic “aged face.”
The process usually includes:
- Preprocessing: Cleaning and aligning the original photo so the AI can read it clearly.
- Model inference: Running the image through an age-progression model or AI art generator.
- Post-processing: Human review and editing to correct distortions, remove artifacts, and preserve distinctive traits.
The result is a hybrid: part algorithm, part human judgment. That partnership is crucial, especially when you’re dealing with real people and real families, not fictional characters.
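To make those three steps a bit more concrete, here is a minimal sketch in Python. It assumes OpenCV for the basic image handling and face detection; the age-progression model itself is a deliberately hypothetical placeholder (`run_age_progression_model`), since the article doesn’t name a specific tool, and real post-processing is manual retouching rather than the simple blend shown here.

```python
# Sketch of the preprocessing -> inference -> post-processing pipeline described above.
# The model call is a placeholder; real projects plug in their own trained network or tool.

import cv2  # OpenCV for face detection and basic image handling
import numpy as np


def preprocess(path: str, size: int = 512) -> np.ndarray:
    """Load the photo, find the largest face, crop and resize it."""
    image = cv2.imread(path)
    if image is None:
        raise FileNotFoundError(f"Could not read image: {path}")

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("No face detected; manual cropping needed.")

    # Keep the largest detected face and add a small margin around it.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    margin = int(0.2 * max(w, h))
    crop = image[max(0, y - margin): y + h + margin,
                 max(0, x - margin): x + w + margin]
    return cv2.resize(crop, (size, size))


def run_age_progression_model(face: np.ndarray, years: int) -> np.ndarray:
    """Placeholder for the inference step (hypothetical).

    A real workflow would call a trained age-progression network or an
    external generation tool here; this stub simply returns the input.
    """
    return face


def postprocess(original: np.ndarray, aged: np.ndarray) -> np.ndarray:
    """Blend the aged output back toward the original to preserve identity.

    Real post-processing is manual retouching by an artist; this blend is
    only a stand-in to show where human correction happens in the pipeline.
    """
    return cv2.addWeighted(aged, 0.7, original, 0.3, 0)


if __name__ == "__main__":
    face = preprocess("missing_person_1998.jpg")    # hypothetical filename
    aged = run_age_progression_model(face, years=25)
    final = postprocess(face, aged)
    cv2.imwrite("age_progressed_draft.png", final)  # reviewed by a human before sharing
```

The point of the sketch is the shape of the pipeline, not the model: the automated steps only prepare a draft, and a person always reviews the result before it goes anywhere near the public.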
AI and Missing-Person Searches: Beyond Beautiful Portraits
The Bored Panda project focuses on visual storytelling and awareness, but it’s part of a much larger wave of AI tools supporting missing-person cases. Around the world, law enforcement agencies, nonprofits, and researchers are experimenting with AI to handle tasks that are too slow or complex for humans alone.
Some of the ways AI is being used in this space include:
- Facial recognition: Scanning security camera footage, social media images, and public records for possible matches to missing persons.
- Big data and predictive analytics: Analyzing patterns in movement, activity, or reported sightings to narrow down search areas.
- Face prediction systems: Using deep learning to suggest likely appearances at different ages, similar to what age-progression artists do manually but at scale.
None of these tools replace investigators or community involvement. Instead, they act like high-powered assistants: handling the repetitive, data-heavy tasks so humans can focus on judgment, empathy, and decision-making.
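As a rough illustration of the facial-recognition piece, here is a small sketch using the open-source `face_recognition` library. The filenames are hypothetical, and the distance-based ranking shown is only a way to surface leads for human review, not an identification method or any specific agency’s system.

```python
# Minimal "possible match" sketch: compare an age-progressed portrait against a
# small gallery of photos. Filenames are hypothetical; every candidate match
# would go to a human reviewer, never be acted on automatically.

import face_recognition

# Encode the age-progressed portrait (the "query" face).
query_image = face_recognition.load_image_file("age_progressed_portrait.jpg")
query_encodings = face_recognition.face_encodings(query_image)
if not query_encodings:
    raise ValueError("No face found in the age-progressed portrait.")
query = query_encodings[0]

# Encode publicly shared photos to screen (hypothetical gallery).
candidates = {}
for path in ["sighting_01.jpg", "sighting_02.jpg", "sighting_03.jpg"]:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:
        candidates[path] = encodings[0]

# Rank candidates by distance; lower means more similar. Because the query is
# itself an AI approximation, these scores are leads, not identifications.
distances = face_recognition.face_distance(list(candidates.values()), query)
for path, distance in sorted(zip(candidates.keys(), distances), key=lambda p: p[1]):
    print(f"{path}: distance {distance:.2f} (flag for human review if low)")
```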
Real-World Wins: When Technology Helps Bring People Home
Technology has already played a role in reuniting families in a variety of ways. Age-progressed photos have helped solve cases where a child was missing for a decade or more. Social media campaigns, powered by easily shareable images and hashtags, have brought attention to cases that would otherwise stay local and invisible.
In some documented cases worldwide, updated images, combined with traditional investigative work, have led to someone recognizing a face on TV, in a news article, or online, and picking up the phone. Those “I think I know that person” moments are the quiet victories that keep families, advocates, and artists going.
AI doesn’t guarantee happy endings. But it can increase the odds that someone who might otherwise have gone unnoticed gets one more chance to be seen.
The Hard Part: Ethics, Consent, and the Risk of Getting It Wrong
Using AI on missing-person cases sounds heroic, but it also comes with serious responsibilities. When you generate an image of a real person who may still be alive, you’re not just playing with pixels; you’re shaping how the world sees them.
Some of the key ethical questions include:
- Accuracy: What if the AI-generated image is so off-base that it confuses the public more than it helps?
- Bias: If the training data doesn’t represent diverse faces fairly, some groups may be misrepresented or misidentified.
- Privacy: How do we balance the need for visibility with the person’s right to privacy, especially in adult cases where they may not want to be found?
- Emotional impact: For families, seeing AI-generated “future” versions of a loved one can be both comforting and painful.
This is why projects like the one featured on Bored Panda need more than technical skill; they need clear values. Collaborating with reputable organizations, involving families when possible, and being transparent about the limitations of AI are all essential parts of doing this work responsibly.
Principles for Responsible AI Age Progression
Based on current best practices and ethical discussions in law enforcement and AI research, a few guiding principles stand out:
- Always treat the images as approximations, not facts. They should be labeled clearly as age-progressed or AI-generated.
- Work with official organizations and families whenever possible. They can provide context, feedback, and consent.
- Keep dignity at the center. The goal is never shock value or clickbait; it’s recognition, respect, and hope.
- Be transparent about the tools. People should know that AI was used and what that means in terms of uncertainty.
- Guard against bias. Whenever possible, use models and workflows that have been evaluated for fairness and tested on diverse populations.
In other words: just because AI can do something doesn’t mean it should, at least not without careful thought and collaboration.
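One small, concrete way to honor the labeling principle above is to stamp the disclaimer directly onto the image before it’s shared, so the context can’t get lost as the picture travels. A minimal sketch, assuming the Pillow library and hypothetical filenames:

```python
# Stamp a visible "AI-generated" disclaimer onto a portrait before sharing.
from PIL import Image, ImageDraw

# Hypothetical filenames; a real workflow would take these from the campaign assets.
portrait = Image.open("age_progressed_draft.png").convert("RGB")
draw = ImageDraw.Draw(portrait)

label = "AI-GENERATED AGE PROGRESSION - APPROXIMATION ONLY"
width, height = portrait.size

# Draw a dark band along the bottom edge so the disclaimer stays readable.
band_height = max(24, height // 12)
draw.rectangle([(0, height - band_height), (width, height)], fill=(0, 0, 0))
draw.text((10, height - band_height + band_height // 4), label, fill=(255, 255, 255))

portrait.save("age_progressed_labeled.png")
```

Embedding the label in the pixels rather than only in the post text means the disclaimer survives even when the image is screenshotted or reposted without its caption.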
How You (Yes, You) Can Help Missing-Person Efforts
You don’t need to be an AI engineer, an artist, or a detective to play a meaningful role in this story. Ordinary people are often the ones who spot the crucial clue.
Here are some practical ways to contribute:
- Follow reputable organizations. Support and share posts from established missing-person and child-protection organizations.
- Share responsibly. When you repost age-progressed or AI-generated images, keep the original context and caption so people understand what they’re seeing.
- Stay skeptical, but not cynical. If something looks fake, manipulated, or sensationalized, check the source before sharing.
- Pay attention locally. Many missing-person cases don’t make national headlines; local awareness still matters a lot.
Awareness campaigns are like mosaics: each share, each conversation, each moment of attention is one little tile. Put enough tiles together, and suddenly a face becomes impossible to ignore.
What I Learned from Using AI to Age 19 Missing People
Working on AI portraits of 19 missing people is nothing like generating fun fantasy characters or stylized avatars. It changes how you look at faces, time, and responsibility. These are a few lessons that stayed with me long after the images were finished.
1. Every “case” is actually a person with a whole life behind one photo
When you first open the file, it’s just a small, sometimes low-resolution image: a school portrait, a faded family snapshot, a grainy ID photo. But once you start working on it, you notice the details: the way their smile is slightly crooked, the intensity in their eyes, the hairstyle that was trendy that year. You realize you’re not just editing pixels; you’re handling the last visual trace of someone’s known story.
I found myself talking to the screen more than once. “Okay, let’s imagine you’re 20 years older now. What would you look like?” It sounds silly, but that mental conversation helps keep the work human. You’re not just “aging” a face; you’re trying to preserve a recognizable soul in a digital guess.
2. AI is powerful, but it loves to be confidently wrong
The first AI output is almost never the final image. Sometimes the model makes the person look too generic, erasing the subtle quirks that make them unique. Other times it adds weird artifacts: slightly mismatched eyes, odd skin textures, or unnaturally perfect features that feel more like stock-photo models than real people.
That’s where human judgment comes in. I often had to cycle through multiple generations, adjust prompts or parameters, and then manually fix details. The goal isn’t to make the “prettiest” face; it’s to make the most believable continuation of the original person. If the AI smoothed away a distinctive mole or changed the shape of the nose too much, I brought it back.
One of the biggest lessons? Never trust the first image just because the AI delivered it confidently and quickly. Confidence is free; accuracy takes work.
3. The emotional weight is real, and it’s heavy
There’s a moment in every portrait where the aged version “clicks.” The eyes still match; the expression feels right; the face looks like the same person at a different time in life. That’s often when the emotional impact hits hardest. If this person is still alive, this might be the closest anyone has come to seeing them as they are today.
It’s humbling and heartbreaking. You start to imagine the birthdays that passed, the milestones that never happened, the families that waited through holidays with an empty chair at the table. It’s impossible not to feel a responsibility to do justice to their story, even if your only tool is a digital portrait.
I also learned to take breaks, step away from the screen, and remind myself that I’m part of a larger chain of people doing what they can, from investigators to volunteers to family members who refuse to stop hoping.
4. Collaboration makes the work stronger and safer
Working in isolation with powerful AI tools is risky. It’s easy to drift into artistic exaggeration or unintentional distortion, especially when you’re trying to make an image “pop” enough to get attention online. That’s why collaboration is so important.
Getting feedback (from people who know the case, from advocates, from experts) helps ground the images in reality and ethics. Sometimes that meant toning down dramatic lighting, making aging more subtle, or adjusting a hairstyle to match cultural or regional norms. Sometimes it meant deciding that an image just wasn’t good enough to share publicly.
The lesson: AI can generate possibilities, but people must decide what is responsible to release into the world.
5. Hope is a quiet but persistent presence
You might think a project like this would feel purely sad. And yes, there are heavy moments. But there’s also an undercurrent of hope that runs through everything: the hope that someone will see, remember, recognize, and act.
When the portraits go live, whether on a site like Bored Panda, in a campaign, or across social media, you realize that thousands or even millions of strangers are seeing these faces. That’s a kind of crowd-sourced attention that older campaigns could only dream of. Every share, comment, or quiet pause while someone studies an image extends that hope just a little further.
In the end, the experience of using AI to predict how 19 missing people might look today taught me this: technology doesn’t replace human care; it amplifies it. The algorithms do the number-crunching, but people carry the meaning. As long as we keep that balance, AI can be more than a toy or a trend; it can be a tool that keeps real people from being forgotten.
And if even one of those 19 faces finds its way home because someone recognized an AI-aged portrait, then all the late nights, refinements, and careful ethical debates will have been more than worth it.