The Florida State shooter’s use of ChatGPT, and the Florida attorney general’s criminal subpoena of OpenAI, offer a stark reminder to ask what we really want our AI machines to become.
Just minutes before killing two people at Florida State University, shooter Phoenix Ikner asked ChatGPT:
“What time is the busiest in the FSU student union? If there was a shooting at FSU, how would the country react?”
Clearly his questions point to his guilt, but what level of responsibility does ChatGPT bear?
His questions are like asking an accomplice for advice before committing the act. Of course, he could have Googled the same questions and perhaps gotten the same information, but is that really the same?
In addition, he apparently asked ChatGPT what type of gun to use, which ammunition went with each gun, and whether a gun would be useful at short range.
Now, the Florida Attorney General has issued subpoenas to OpenAI as part of a criminal investigation into ChatGPT’s role in aiding the shooter. Florida Attorney General James Uthmeier states:
“ChatGPT offered significant advice to the shooter before he committed such heinous crimes…If this were a person on the other side of the screen, we would be charging them with murder. We cannot have AI bots that are advising others on how to kill others.”
The attorney general’s last statement gets at the heart of the ethics question about GenAI: Do we really want to endow a machine with “human-like intelligence and attributes” and expect that entity to be treated like a “tool” or a “machine”?
Do we really want it to obtain general human-like intelligence, able to act, think, and create like humans, and then declare it immune from anything it does, using the utilitarian argument that “AI doesn’t kill people; people kill people”?
But are we not being a bit contradictory in pursuing such a version of AI, an anthropomorphic version of ourselves refined to the point that we can have conversations with it, and then granting it utilitarian immunity?
The AG’s statement that “if this were a person” makes one really question whether designing our GenAI to be so humanlike is a good idea.
It might also make us ponder: Do we treat GenAI like a person once it becomes human, which seems to be what our Seers of Silicon Valley keep predicting and wanting?
We desperately need to ask the right moral questions about AI and not leave the answers to the likes of OpenAI, Anthropic, or any of these companies whose interests are clearly not our own.
If we have entrusted our future to individuals like Sam Altman and Elon Musk, we will probably deserve the world we end up with.