"...we, all of us, have made the world too much into a computer, and that this remaking of the world in the image of the computer started long before there were any electronic computers. Now that we have computers, it becomes somewhat easier to see this imaginative transformation we have worked on the world. Now we can use computers itself--that is the idea of the computer--as a metaphor to help us understand what we have done and are doing." Joseph Weizenbaum, Computer Scientist and Professor at MIT, in his book Computer Power and Human Reason
Weizenbaum is perhaps best known for creating ELIZA in the 1960s, an early AI-like program designed to play the role of a psychotherapist. The program's responses were so uncanny that mental health professionals and many others began talking about it as a therapeutic tool for the mental health field. Weizenbaum could not believe that actual professionals seriously thought his program could automate psychotherapy. He was also "startled," as he put it, at how people "became emotionally involved with the computer and how unequivocally they anthropomorphized it" (p. 6). He asked a question that is just as relevant today:
"What can the psychiatrist's image of his patient be when he sees himself, as therapist, not as an engaged human being acting as a healer, but as an information processor following rules, etc.?"
That question gets at the heart of the moral questions raised when someone chooses to use AI to generate content. When an individual uses AI to write a thank-you letter or to generate work feedback for employees, what is the user's image of the person to whom they are writing? Do they see the recipient of the thank-you letter as not important enough to merit a personally crafted note? Does the supervisor not see the employee as a human being worthy of genuine engagement? These are the moral and ethical questions that Weizenbaum thought about back in the 60s and 70s concerning the turning over of certain human tasks to computers, but it seems we have forgotten his wisdom today in our race to turn over tasks to AI.
"If I choose to use ChatGPT to generate a thank-letter, what is the harm?" an individual might say. Sure there is no physical harm, but there is a level of subterfuge in both these cases that is questionable. Sincerity can really be questioned in these instances because the user of generative AI has not thoughtfully engaged in word selection, syntax, and the craft of the language, so, can they say they are truly thankful, and that they really mean the feedback provided both positive and negative?
Weizenbaum captured what is at the core of what is wrong with relying on computer technologies to do the things we humans should sometimes do for ourselves: the choice to use AI technologies is a political choice. Simply turning to AI to craft a letter, an email, or an evaluation can be an indication of what you really think of the recipient; it can communicate a conception of that individual as not worthy of your attention. Weizenbaum also wrote in Computer Power and Human Reason:
"...there are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them" (p. x).
In the Age of AI Hype, we would do well to listen to Weizenbaum and to act introspectively before we ask our computer tools to do things for us. Doing the work ourselves just might be the right thing, even if it takes more of our time.
Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. New York, NY: W. H. Freeman and Company.