Sunday, August 17, 2025

Just Because Generative AI Can Do It Doesn't Mean You Should Use It: Thoughts on Ethical Use

"...we, all of us, have made the world too much into a computer, and that this remaking of the world in the image of the computer started long before there were any electronic computers. Now that we have computers, it becomes somewhat easier to see this imaginative transformation we have worked on the world. Now we can use the computer itself--that is, the idea of the computer--as a metaphor to help us understand what we have done and are doing." Joseph Weizenbaum, Computer Scientist and Professor at MIT, in his book Computer Power and Human Reason

Weizenbaum is perhaps best known for creating ELIZA in the 1960s, an early AI-like program designed to play the role of a psychotherapist. The responses this program produced were so uncanny that mental health professionals and many others began talking about it as a therapeutic tool for application in the mental health field. Weizenbaum could not believe that actual professionals seriously thought his program could automate psychotherapy. He was also "startled," as he put it, that people "became emotionally involved with the computer and how unequivocally they anthropomorphized it" (p. 6). He posed a question just as relevant today:

"What can the psychiatrist's image of his patient be when he sees himself, as therapist, not as an engaged human being acting as a healer, but as an information processor following rules, etc.?"

That question gets at the heart of the moral questions raised when someone chooses to use AI to generate content. When an individual uses AI to generate a thank-you letter or work feedback for employees, what is the user's image of the person to whom they are writing? Do they see the recipient of the thank-you letter as not important enough to merit a personally crafted note? What about the employee work feedback? Does the supervisor not see the employee as a human being worthy of being engaged with? These are the moral and ethical questions Weizenbaum wrestled with back in the 60s and 70s concerning the things we turn over to computers, but it seems we have forgotten his wisdom today in our race to turn tasks over to AI.

"If I choose to use ChatGPT to generate a thank-you letter, what is the harm?" an individual might say. Sure, there is no physical harm, but in both these cases there is a level of subterfuge that is questionable. Sincerity itself comes into question, because the user of generative AI has not thoughtfully engaged in word selection, syntax, and the craft of the language. Can they say they are truly thankful, or that they really mean the feedback provided, both positive and negative?

Weizenbaum captured the core of what is wrong with relying on computer technologies to do the things we humans should sometimes do for ourselves. That core idea is that choosing to use AI technologies is a political choice. Simply turning to AI to craft a letter, email, or evaluation can indicate what you really think of the recipient; it can communicate a conception of that individual as not worthy of your attention. Weizenbaum also wrote in Computer Power and Human Reason:

"...there are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them" (p. x).

In the Age of AI Hype we would do well to heed Weizenbaum and look inward each time we ask our computer tools to do things for us. Doing the task ourselves just might be the right thing to do, even if it takes more of our time.

Weizenbaum, J. (1976). Computer Power and Human Reason. New York, NY: W. H. Freeman and Company.

Sunday, August 10, 2025

Is it the Responsibility of Educators to Teach Students How to Use AI Effectively and Ethically? Maybe Not

I realize the AI-in-Education Evangelical Movement and its faithful will find what I post here to be questions on the edge of blasphemy, but the uncritical acceptance of Silicon Valley's latest invention raises serious questions and needs a counter-viewpoint.

Why is it that every time Silicon Valley dumps one of its new inventions on the world, it suddenly becomes the responsibility of educators to sanitize that technology's use by teaching students how to use it ethically? Why is there absolutely no responsibility for these companies to engineer safe-to-use products to begin with? Perhaps, because Silicon Valley keeps dumping these products on us, they should pay a special education tax to fund all the work educators must do to put out the fires generated by these so-called "innovative" products.

In the mid-1990s, at the emergence of the internet, teachers were tasked and admonished to adopt the internet and teach students how to use it ethically and effectively. With all the garbage there now, the lesson can be rather simple: treat it the way teachers treated the National Enquirer in the 70s, when they told us simply not to believe what we read in those things.

Fast forward to the advent of social media. Again, we as educators were told that we had to adopt social media in order to better communicate and to teach students how to use it ethically and correctly. This quickly became impossible because companies like Facebook engineered their products, through their algorithms, to promote bull - - - - and the most outrageous content over anything else, like the boring truth, all for the sake of click-profits.

Has anyone stopped to think that perhaps it is time to hold these Silicon Valley companies responsible for the products they foist upon the world? Europe is starting to get it. I realize that in a country with the "guns-don't-kill-people, people-kill-people" mentality, this is blasphemy. After all, social media products don't spew misinformation and nonsense, people do, even though social media is an engineered megaphone for it.

Now AI is foisted on the world, with its engineered flaws ripe for abuse and its ability to generate nonsense, and we are allowing companies like OpenAI to shift responsibility to educators in a sanitation project to make their products more useful, ethical, and safe.

In my opinion, that is twisted ethical thinking. Why not make these tech companies responsible for their flawed products to begin with instead of letting them focus on padding profits and CEO salaries?

Maybe, instead of doing AI sanitation for these companies, we need to purposely choose not to engage or participate until they clean up their act. That is a perfectly ethical choice too.

#AIEducation #AILeadership #AI #ChatGPT #OpenAI



Saturday, August 2, 2025

Important Questions to Ponder in AI Adoption

"The question of who demonized work is an important one, for as anyone knows, it can give great joy and be immensely satisfying when it is something one genuinely likes doing. Not much mechanized labor today falls into that category, whether it is sitting in front of a computer or tending a machine in a factory." Nicols Fox, Against the Machine


Much of the AI marketing propaganda demonizes work, whether it intends to or not, in order to sell products. All the talk about "automation" of tasks is about turning work that a person currently does over to a machine. "AI will take away all the tasks you do not enjoy doing," goes the promo-rhetoric. That is certainly appetizing, for who does not wish to be rid of the annoying, monotonous tasks that are part of our everyday lives?

But historically, machinery like AI has done much more than rid humans of less desirable tasks; it has also replaced genuine, enjoyable, fulfilling work for some. Think about the craftsmen who made things for us before machines took over. Now, don't get me wrong, I am not advocating against machines and manufacturing altogether, for unquestionably a lot of good has come from them, and I am a person who has benefited from them. I will also add that even craftsmen adopted machines and tools that replaced undesirable tasks or sped up their work. What I am asking for is a bit of sober-minded, critical thinking in the midst of all this AI Hype. While AI can certainly assist in our work, the central questions for me are: "Just what am I willing to outsource to mechanical systems?" and "What tasks am I unwilling to give up because doing them myself is necessary to carrying out my work?"

The sales pitch I hear from AI salespersons and AI evangelists is often, "It will make you more efficient. It will save you time." But are the tasks I am engaged in meant to be efficient? That is an important question, because not everything we do fits the mechanical, business definition of efficiency. Take this piece of writing: composing it is far from efficient, and it certainly does not lend itself to a time-schedule. I will add that the criterion of efficiency leaves no place for the personal fulfillment of a task, unless of course one of your obsessions is to be efficient in all things. That reminds me of a poem I used to teach in high school English called "Mr. Artesian's Conscientiousness" by Ogden Nash.

In this poem, an office worker, Mr. Artesian, is so obsessed with time that his sole objective becomes "saving oodles of time," a hallmark of the efficiency obsession. He takes on a program to reduce the time he spends on daily activities like sleeping, eating, and shaving, until in the end he resents even the time wasted riding the elevator, so he jumps out his fiftieth-floor window to save time. No doubt truly ridiculous things are sometimes undertaken for the sake of efficiency, and outsourcing a task to AI could certainly be one of them. That means thinking critically for yourself about just what can be outsourced to this new-fangled technology. In addition, there just might be some personally fulfilling reasons not to outsource at all.

Ultimately, and I am being cynical here, this uncritical hype about outsourcing our work and lives to technology just might be another way for Silicon Valley tech companies to get even more of our personal data for economic exploitation.


The AI companies and evangelists certainly have an interest in getting us to think we hate our work; after all, if they can get us to see our work, or parts of it, as undesirable, they stand a better chance of making bundles. They want us, you and me, to be AI users and consumers.


In the end, it is OK to be sober, slow-moving, and unhurried in your decisions about whether or not you will use AI to outsource your work. It is an old marketing tactic to make customers think they have to buy the product now.