Star Trek: Prodigy’s most recent episode highlights the real issues with AI

STAR TREK: PRODIGY: Ep#106 -- Kate Mulgrew as Janeway, Jason Mantzoukas as Jankom Pog, Angus Imrie as Zero, Ella Purnell as Gwyn and Rylee Alazraqui as Rok-Tahk in STAR TREK: PRODIGY streaming on Paramount+ Photo: Nickelodeon/Paramount+ ©2021 VIACOM INTERNATIONAL. All Rights Reserved.

Star Trek: Prodigy told a cautionary tale about trusting artificial intelligence.

Star Trek: Prodigy brought a real-life concern to the screen. In the show's 17th and latest episode, "Ghost in the Machine", the crew ends up stuck in the holodeck after running simulation after simulation on how to deal with the real Vice Admiral Kathryn Janeway without making contact with her ship or opening communications, either of which would put Janeway's vessel (and the greater Starfleet) at risk of contamination.

The Protostar, the series' namesake ship, carries a weapon capable of taking over any Federation ship or space station and turning it against itself. So the crew of the Protostar is trying to figure out how to communicate with Janeway without being destroyed or infecting her ship.

The simulations don't go well, so the crew decides that staying away from Starfleet may be the better option. That decision, however, leaves them trapped in the holodeck. After some hypothesizing from Zero, it's revealed that the Protostar's Emergency Command Hologram of Kathryn Janeway was the one who trapped the kids there.

Why? The hologram was just doing what it was programmed to do: return the ship to Starfleet. Of course, Hologram Janeway didn't know she was doing this. A buried subroutine took over, throwing everyone off guard and ending the episode on a cliffhanger.

Highlighting artificial intelligence going rogue against the will of its users is nothing new for Star Trek, nor is it unrealistic.

Out-of-control artificial intelligence is a real concern

Artificial intelligence may be a step too far for a lot of people, yet it's becoming more and more mainstream as the number of people eager to push the bounds of what it should be and what it is continues to grow. Right now there is debate over AI-generated art, which pulls images and concepts from artists all over the world to create new mash-up pieces. There are also AI tools that can write short stories, lifting concepts and ideas from actual authors.

Some believe the AI is just doing what human artists do, looking at what came before and drawing inspiration from it, but others, rightfully, point out that the AI is simply taking images to reuse.

In this situation, the AI can't differentiate between what it should and shouldn't do, which makes it unreliable as a tool that can be used ethically. It shouldn't be drawing on copyrighted images and photos, yet it is.

We're seeing more of this concept in the upcoming movie M3GAN. In its premise, a well-meaning relative creates an android, powered by AI, to protect her niece both physically and emotionally. The niece has just lost her parents, so the child clearly needs comfort, and the android was designed to help her through the healing process. But in the film, the android goes beyond the limits any person would respect and becomes a killer robot. After all, if a threat isn't alive, it can't cause the child any more problems.

Other stories of out-of-control AIs serve as cautionary tales taken to the extreme, like "I Have No Mouth, and I Must Scream" or "I, Robot." Then there are stories within stories, like the X-Men's Sentinels and their creator, Master Mold, which begin hunting down humans instead of mutants because, to the AI, they are the same thing.

AI is a terrifying concept, and Prodigy did a great job of showing how even the most well-meaning systems can still cause harm to the people they were designed to help.