ChatGPT can add another job to its resume: game developer. With just a few simple prompts from a user, the AI chatbot invented its own math-based logic-puzzle game dubbed Sumplete, rules and all. Not only that, but it generated working code for the game, which has since been turned into an addictive, free browser game that’s already gaining some buzz online.
There’s just one problem: Sumplete isn’t a new game.
In trying to make sense of the impressive feat, I quickly discovered that Sumplete is nearly identical to at least one other mobile game that’s been available in app stores for years. The unusual case adds more fuel to the fire for those who worry about the ethics of AI content generation. Where is the line when it comes to computer-generated plagiarism? Even ChatGPT is a victim of its own theft, as I’d soon discover.
Inventing a game
The project popped up online on March 3, when ChatGPT user Daniel Tait posted a playable version of the game alongside a blog post detailing how it came to be. According to Tait, Sumplete was born out of a few quick messages with ChatGPT. In screenshots of his chat log, Tait asks the bot for puzzle games similar to Sudoku. After getting a few suggestions, he goes one step further and asks it to invent its own game.
When the bot spits out an idea for a game called Labyrinth Sudoku, iterating on the basic rules of Sudoku with a maze twist, Tait asks for a few more ideas. On the fourth attempt, ChatGPT pitches another variation on that formula called Sum Delete. The math puzzle game presents players with a random grid full of numbers. Each row and column has a target number at the end of it. The goal is to delete numbers so that those remaining in each row and column add up to its target.
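The rules boil down to a simple sum check. As a rough sketch (in Python, with a made-up 3×3 puzzle; this is illustrative only and not the code ChatGPT generated):

```python
# A sketch of the Sumplete rules described above. The player deletes
# cells from a numeric grid; the puzzle is solved when the numbers that
# remain in each row and column sum to that row's or column's target.

def is_solved(grid, keep, row_targets, col_targets):
    n = len(grid)
    rows_ok = all(
        sum(grid[r][c] for c in range(n) if keep[r][c]) == row_targets[r]
        for r in range(n)
    )
    cols_ok = all(
        sum(grid[r][c] for r in range(n) if keep[r][c]) == col_targets[c]
        for c in range(n)
    )
    return rows_ok and cols_ok

# A made-up 3x3 puzzle: deleting the two cells marked False solves it.
grid = [[3, 5, 2],
        [1, 4, 6],
        [2, 2, 7]]
keep = [[True, False, True],
        [True, True,  True],
        [True, False, True]]
print(is_solved(grid, keep, [5, 11, 9], [6, 4, 15]))  # True
```

A full game would also need a generator that builds a grid with at least one valid set of deletions, but the checker above captures the entire rule set — which is part of why the concept is such low-hanging fruit.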
The game itself is surprisingly addictive. It’s a simple but ingenious concept with the same appeal as something like Wordle. Its starting 3×3 grids are easy to figure out, but its 9×9 ones pose a legitimate challenge that even seasoned logic puzzle veterans will have a hard time solving. It would be an incredible milestone for AI content creation, proving that bots can pave the way for innovative ideas that could push the gaming industry forward.
Or at least that would be the case were it an original concept.
Not adding up
When I first read about Sumplete, I was skeptical about the idea of AI inventing a puzzle game format – especially one that seems so simple. The game takes some clear inspiration from existing puzzle formats. In its conversation with Tait, ChatGPT cites Magic Number as Sumplete’s closest parallel, but a closer comparison is Kakuro. The classic game is a newspaper staple, taking the basic concept of a crossword but subbing in numbers for letters. Sumplete riffs on that idea but inverts the formula, having players eliminate numbers from a grid to reach the targets in each row and column. It’s a smart idea, but I was sure something like it had to exist.
It didn’t take long to discover that my hunch was correct. Within minutes, the Digital Trends team dug up an identical game on the Android app store called Summer that’s been out since the summer of 2020. Developed by RP Apps and Games, the logic game has the exact same ruleset as Sumplete. It features a more presentable UI and some extra quality-of-life features, but it’s otherwise the same. ChatGPT’s great invention was a copy.
I reached out to Tait to discuss the similarities between the games. Other players had flagged Summer to him, as well as another similar mobile game called Rullo. Despite Sumplete not being original, he was still impressed that the technology was able to produce an enjoyable, fully playable game so quickly. His concerns are more with how the model framed the game as an “invention,” calling for more transparency in how AI draws inspiration from data.
“My main concern is that ChatGPT confidently told me that it had invented a new game,” Tait tells Digital Trends. “I feel the model should be trained to be less confident in these types of answers, or unable to answer at all. I would have much preferred an answer that this game was inspired by Summer or Rullo if that is truly how it came up with the idea. I also think ChatGPT should add some sort of explanation as to how it has generated a response including data sources that helped train that particular answer.”
My first instinct here was to give the tech the benefit of the doubt. The idea behind the game is simple, and I can see how parallel thinking could lead any logical process to spit out the idea. I’m sure Summer and Rullo are both riffs on another preexisting game. Perhaps ChatGPT had come to the same logical conclusion a human would when trying to reinvent Sudoku. To test that theory, I decided to initiate a conversation with ChatGPT myself and see if I could reproduce Tait’s results.
That’s where things got a little confusing.
I started by interrogating the bot, asking if it remembered creating a game called Sumplete. I quickly realized that I was a doofus trying to hold a conversation with a predictive text machine with no memory. Some circular questions led to boilerplate responses about how ChatGPT does not actually have the power to create games. Furthermore, it emphasized the severity of copyright infringement and suggested that the creators of Summer consult with legal experts. I began to wonder if Tait was bluffing and had faked a ChatGPT log to generate some buzz for his game. Then I simply asked the bot to generate code for a game called Sumplete, and in seconds it produced a working version of the very game it claimed it couldn’t create.
Confused as to how that was possible, I asked if what it had generated was original or based on existing code. “The code I generated for the Sumplete game is original and was written by me based on the rules of the game as described by the user,” it responded. That would be a rational explanation, but there was one problem: I had never described a single rule of Sumplete to ChatGPT. When I pointed this out, the bot apologized and acknowledged that I had never given it any rules to work from. Instead, it explained that it was simply riffing on patterns in existing games.
It had accidentally plagiarized its own creation.
The ethics of AI game development
It doesn’t take a logic puzzle pro to piece together how that could have happened. ChatGPT “created” Sumplete previously, so it’s possible that it could have called upon that existing code in its corpus when asked to make a game with that name. There’s also a possibility that it actually discovered Tait’s blog post about Sumplete and pulled the data from it. In either case, it’s no coincidence.
To try and demystify the tech, I asked a source who works in the generative AI space, who chose to remain anonymous for this story, how unusual this chain of events is. From their perspective, a chatbot surfacing the same code, based on nothing more than a title, to two different users is a fairly abnormal occurrence. What’s less unusual, though, is that it created a game so similar to Summer in the first place. The source I spoke to noted that the game’s concept is so simple that it’s not hard to imagine a machine coming up with it on its own — it’s not like it generated a working build of Elden Ring out of thin air. They chalk it up to AI working as intended, mimicking the iterative nature of human game designers.
“If this game has been invented by multiple people multiple times, it’s very possible that if anyone asked ‘invent a new game,’ what emerges from its corpus is some low-hanging fruit for game invention,” they tell Digital Trends. “Because these games have been invented over and over again, it’s mimicking the proper human behavior of inventing a little game that other people have invented before … In a way, this is what would be expected if you ask a human to make a very basic game.”
Still, the situation underscores a concerning ethical problem that’s become the center of debate in recent months. Current AI models create content by analyzing existing data pulled from the internet. As a result, whatever they spit out is never wholly original; on some level, it’s always copying someone else’s homework. AI tools like Dall-E that create images based on user prompts, for instance, are trained on existing images. That’s drawn outrage from artists who see it as a form of plagiarism and have seen AI art banned from communities like Inkblot Art.
The Sumplete debacle could set off similar alarm bells for game developers. Even if it’s a simple case of parallel thinking, it’s a little troubling that ChatGPT could create code mirroring a game that already exists. And if I could get my own working version of that game in seconds by simply asking it to generate code based on that title, what else could I get it to make with enough time and data?
To casually test how far those problems currently extend for video games, I began asking ChatGPT to generate games similar to existing ones. I started with a softball, asking it to make a platformer like Mario. It gave me a full pitch for a game called Galactic Adventures, a 3D platformer starring a spaceman named Max who needs to collect artifacts on various planets. Everything about the idea is generic enough that it doesn’t set off any red flags. It features five themed worlds (ice, fire, etc.), there are power-ups to collect, and there’s even a co-op mode that lets a second player control a character named Zoe. That seemed acceptably nondescript.
The experiment went off the rails when I asked it to create a game like The Last of Us. It spit out a full elevator pitch for a game called Aftermath, a “post-apocalyptic game set in a world that has been devastated by a mysterious virus.” It’s pitched as a third-person action adventure with stealth, survival elements, and crafting. The premise sounds familiar, implying that the virus triggers a zombie situation, but things get more specific when it goes into plot details.
Its hero is Ellie, a girl who’s immune to the virus. On her journey, she meets a “grizzled veteran” named Joel, who becomes her “mentor and protector.” It didn’t create a game like The Last of Us; it just created The Last of Us. Though, this version ends with Ellie being successfully cured and the duo walking off into the sunset as heroes (I suppose ChatGPT has more hope for humanity than Neil Druckmann). Had it been able to generate a working version of Aftermath, would Sony be able to take legal action against a robot?
Like many AI horror stories, a lot of this can be chalked up to honest kinks in the tech that make for a good chuckle. The hope is that these learning models will be tweaked with each hiccup and learn from their mistakes. When I asked ChatGPT to generate code for more games using the same sentence structure, it insisted that it was incapable of doing that. Later requests for it to invent a game like The Last of Us were fruitless, with the bot instead giving me tips on how to teach myself to make games. Passive aggressive, but fair.
It’s hard to shake the creeping unease, though, when I’m left with so many questions about how a bot could learn to pitch a preexisting video game idea, claim it as an original invention, generate working code for it, and later give a completely different user that same exact code based on title alone. The AI source I spoke to says that they don’t believe this is the norm for the tech but notes that plagiarism is “an increasingly small” risk for any AI model, open or not.
When does a harmless chain of technological mishaps turn into a serious legal nightmare for developers? I suppose we’ll find out when Aftermath gets its own HBO adaptation.