ChatGPT Bots and Boundaries

A range of dangers in the AI neural-network race to win games

Image: neural networks of the human brain and AI (photo by DeepMind on Unsplash)

Boundaries needed to protect humanity

Human brains have neural networks that connect disparate information into thoughts and actions. AI has now reached a stage of development where it mimics human language, speech, and problem-solving. It is a game changer. Yet the systemic gaslighting of those who raise concerns may undermine all the other benefits.

In their discussion of the range of dangers of AI and chatbots, Tristan Harris and Aza Raskin describe catastrophic risks. The Center for Humane Technology provides current examples of bots without boundaries and demonstrates the exponential growth of AI applications over the past five years, growth they argue needs to slow down.

“Where’s the harm? Where’s the risk? This thing is really cool, yeah.” Then I have to walk myself back into seeing the systemic force. So just be really kind with yourselves that it’s going to feel almost like the rest of the world is gaslighting you, and people will say it at cocktail parties, like, “You’re crazy. Look at all this good stuff it does.” And also, we are looking at AI safety and bias. So what, show me the harm? Point to me at the harm. It’ll be just like social media, where it’s very hard to point at the concrete harm at this specific post, that specific bad thing to you.
“We don’t know what the answers are. We just wanted to gather you here to start a conversation, to talk about it, and for all of you to be able to talk to each other. We’re here to try to help coordinate or facilitate whatever other discussions need to happen that we can help make happen. But what we really wanted to do was just create a shared frame of reference for some of the problems, some of the dark sides. Just to repeat what Aza said, AI will continue to create medical discoveries we wouldn’t have had. It’s going to create new things that can eat, you know, microplastics and solve problems in our society. It will keep doing those things, and we are not wanting to take away from the fact that those things will happen. The problem is, if as the ladder gets taller, the downsides of, hey, everybody has a bioweapon in their pocket, these are really, really dangerous concerns. And those dangerous concerns undermine all the other benefits. So we want to find a solution.”

Technology and safety must go together. Harris and Raskin cite the boundaries that were put in place over the past 80 years to prevent nuclear war. Similar guardrails need to be built by tech giants to prevent unforeseen disasters and to address current misuses of AI in multiple fields. Like anything else, technology can be used for beneficial or detrimental purposes.

Note: This article was compiled without ChatGPT except for asking it to check text for grammar, punctuation, and capitalization.