Inventing AI with Ethics

Posted on February 16th, 2021 by Emilio Miles

There are already many inventions that make use of artificial intelligence in very different ways. One of the more fun, but still interesting, uses of AI is in gaming. I think it would be incredibly cool to create a role-playing game where the adventure ends only when the player wants it to end. Most role-playing games have a character face tough challenges and level up throughout the journey, but unfortunately they all have to end at some point. There is already a text-based dungeon crawler called AI Dungeon (https://play.aidungeon.io/main/landing). It is an adventure game that uses AI to generate effectively infinite content. The player types in an action (e.g. “play Bohemian Rhapsody to the dragon”), and the program generates a scenario that responds to that action (e.g. “the dragon puts a gun against his head, pulls the trigger, now he’s dead”). While it doesn’t aim to pass the Turing test, the AI tries to maintain coherence in the story it tells. I think a role-playing, text-based AI game where you can also level up your character would add a little extra spice that most players would welcome.
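To make the idea concrete, here is a rough sketch of what the core loop of such a game might look like: read a player action, append it to the story so far, and ask a language model to continue the scenario. The `generate_continuation` function is a hypothetical placeholder for whatever model API would actually be used; this is not AI Dungeon’s real code.

```python
# A minimal sketch of the core loop behind a text-based AI adventure game.
# `generate_continuation` is a hypothetical stand-in for whatever language
# model API the game actually calls; it is not AI Dungeon's real code.

def generate_continuation(story_so_far: str) -> str:
    """Placeholder: send the story so far to a language model and return its reply."""
    raise NotImplementedError("wire this up to a real model API")

def adventure_loop(opening_scene: str) -> None:
    story = opening_scene
    print(opening_scene)
    while True:
        action = input("> ")                    # e.g. "play Bohemian Rhapsody to the dragon"
        if action.lower() in {"quit", "exit"}:
            break                               # the adventure ends when the player wants it to
        story += f"\n> {action}\n"
        reply = generate_continuation(story)    # the model continues the scenario
        story += reply
        print(reply)
```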

AI Dungeon

There is not much official information on how AI Dungeon works, but it is believed to use GPT-3 (source: https://wiki.aidiscord.cc/wiki/Main_Page), a large language model trained on an enormous library of text. The model predicts the next words and forms sentences based on that training data, while also adapting to your writing style. The game also separates stories into different genres (fantasy, mystery, etc.) in order to keep each story more or less cohesive. Some of the time it spits out sentences that have little to do with the action you typed in, but that has vastly improved since the GPT-2 days. Building our program on top of GPT-3, or a similar neural network, would be a good idea, since such a model has already been trained on a huge amount of text and can generate content effectively.
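As a guess at how that genre separation might work in practice, one simple approach is to prepend a short genre-specific preamble to every prompt sent to the model. The preambles and function below are purely illustrative; they are not AI Dungeon’s actual prompts.

```python
# Sketch of one way genre cohesion might be handled: prepend a short
# genre-specific preamble so the model stays within, say, fantasy or mystery.
# These preambles are illustrative, not AI Dungeon's actual prompts.

GENRE_PREAMBLES = {
    "fantasy": "You are the narrator of a high-fantasy adventure full of dragons and magic.",
    "mystery": "You are the narrator of a noir mystery set in a rain-soaked city.",
}

def build_prompt(genre: str, story_so_far: str, player_action: str) -> str:
    """Combine the genre preamble, the story so far, and the latest action into one prompt."""
    preamble = GENRE_PREAMBLES.get(genre, "You are the narrator of an adventure story.")
    return f"{preamble}\n\n{story_so_far}\n> {player_action}\n"
```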

Safety and Ethics

The problem with an AI that learns from user input is that it can learn the wrong behavior. Safety concerns could range from outputting mean-spirited insults to generating mature content. There would need to be a way to control the type of behavior the agent imitates. AI Dungeon addresses this through “Strict Mode,” which bans certain user inputs, preventing the AI from learning NSFW behavior and from producing explicit content as output. It stands to reason that a similar system would be useful for a role-playing game of the same nature. This would keep the AI in line with the Three Laws of Robotics.
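A “Strict Mode”-style filter could be as simple as checking each player action against a banned-term list before it ever reaches the model, so the agent never sees, or learns from, that content. The sketch below is only an illustration; the banned list and function names are placeholders, not AI Dungeon’s implementation.

```python
# A rough sketch of a "Strict Mode"-style input filter: reject player actions
# containing banned terms before they reach the model, so the agent never
# sees (or learns from) that content. The banned list is a placeholder.

BANNED_TERMS = {"example_banned_word"}   # a real list would be carefully curated

def is_allowed(player_action: str) -> bool:
    """Return True if the action contains none of the banned terms."""
    lowered = player_action.lower()
    return not any(term in lowered for term in BANNED_TERMS)

# Usage: if is_allowed(action) is False, the game can simply ask the player
# for a different action instead of passing it to the model.
```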

Emilio Miles

Computer Science student at the University of Kansas