Wednesday, September 3, 2025

There's a Social Media Platform For You: Will Artificial Intelligence Follow Suit?

Each form of social media has its own identity for sure.

X, formerly known as Twitter, is the Kingdom of the Trolls, led by the Self-Appointed Grand Poobah of Trolls, Elon Musk. It is a place where you can be abused to your heart's content. There is nothing of substance there anymore, but you can keep up with the latest conspiracy theories.


Facebook is certainly a place to connect with old friends, but it is also a place where you share a bit about yourself and Zuckerberg et al. capture you and sell you to the highest bidder. "Join and be exploited" is its new motto. It is also a place to live for the Likes, though I think it has become the nursing home of social media, because I'm told only old people hang out there now.


Next, there's LinkedIn, where these words are posted. It is a place where you can expose yourself to the leadership and business greatness of others, who post in self-glorification daily and sometimes hourly. You can also brag and boast about your own greatness without guilt. And if you accept one of those "I want to connect" messages, your Messaging Box suddenly lights up with sales pitches, which is the real reason they wanted to connect.


Finally, I've experienced Bluesky. There, if you are liberal, you will feel at home; but do not disagree with the liberal line, or you suddenly get trolled, and if you don't come around, you find yourself blocked. There is some thoughtful discussion there, but you must participate in accepted ways, or you sit by yourself forever.


Now some may ask, "Why not TruthSocial?" Well, I doubt very seriously that it's a place for a totally objective news experience. I also suspect that, like the other "Twitter-like" social media places, you simply get trolled, only by those of an entirely different persuasion than I found on Bluesky. I'm sure you can join the conspiracy theorists there as well.


Where does this leave one regarding social media? I can't take any of these places seriously anymore.


One can only hope that Generative AI platforms don't end up in the same Land of Frivolity, but I'm not hopeful, because the same Silicon Valley crowd invented them too. The word "artificial" in this situation makes perfect sense; I am not sure about the other word.

Saturday, August 23, 2025

Uncritical Promotion of AI: Educators Should Know Better

 "...the critique of artificial intelligence is not a Luddite call to break machines, but a call to break harmful assumptions about machines and thus enable the construction of better tools and, more important, better societies." Joy Buolamwini, Umasking AI: My Mission to Protect What Is Human in a World of Machines

In the early 1800s, the Luddites were laborers who took to breaking factory machinery after their jobs were dissolved by factory owners adopting machines such as mechanized looms and weaving equipment. They received the "anti-progress" label for that opposition, but they were ultimately engaged in a protest that would be natural for anyone whose life has been upended by the total loss of a livelihood.


Like the AI equity researcher Joy Buolamwini, I want to redefine the "Luddite call to break machinery" as a call to break the false marketing and promo-rhetoric being circulated among those who should know better: K-12 and post-secondary educators. Of all people, these educators should not be leading the way in following the marketing and promotional rhetoric of AI companies, who stand to make a bundle off users and their data, or of AI evangelists, experts, and promoters, who stand to make tons of money and advance their careers by coaxing naive educators and educational leaders into hiring them for training and keynote speaking opportunities. Instead, educators should be leading the way in critically interrogating AI and its promo-rhetoric by arming themselves with critical thought.


Asking the tough questions is vital when these AI companies and their evangelists start spouting rhetoric that presents the use of AI as an "either-or" proposition. "You either teach students AI and use AI in your classrooms, or you become guilty of educational malpractice for not preparing your students for the future," they say. "Because AI is inevitable, you either use AI or become irrelevant," goes another of their so-called reasons...never mind that there is absolutely no evidence for either claim, nor can there be, because these are unsupportable predictions of a future no one can see to begin with.


Armed with critical thought, educators can see for themselves that employing AI in the classroom is a choice, not an imperative. And as a choice, it is perfectly legitimate and logical to abstain from using AI in one's teaching or life.


There is no reliable crystal ball saying that AI is going to be "revolutionary" and "disruptive" except the one used by the seers of Silicon Valley, who make that claim for every new creation they spring upon the world.


There are logical reasons for choosing to abstain or to use, but to frame the whole argument as inevitable or imperative is deceptive, and educators who do that should know better as well.


As for the relevancy argument? If it takes a machine to make you and your teaching relevant, I would ask a simple question: "Have you ever considered whether what you are teaching is relevant in the first place?"

Sunday, August 17, 2025

Just Because Generative AI Can Do It Doesn't Mean You Should Use It: Thoughts on Ethical Use

 

 "...we, all of us, have made the world too much into a computer, and that this remaking of the world in the image of the computer started long before there were any electronic computers. Now that we have computers, it becomes somewhat easier to see this imaginative transformation we have worked on the world. Now we can use computers itself--that is the idea of the computer--as a metaphor to help us understand what we have done and are doing." Joseph Weizenbaum, Computer Scientist and Professor at MIT, in his book Computer Power and Human Reason

Weizenbaum is perhaps best known for his creation of ELIZA in the 1960s, an early iteration of an AI-like program designed to play the role of a psychotherapist. The responses this program generated were so uncanny that mental health professionals and many others started talking about it as a therapeutic tool for application in the mental health field. Weizenbaum could not believe that actual professionals seriously thought his program could automate psychotherapy. He was also "startled," as he put it, that people "became emotionally involved with the computer and how unequivocally they anthropomorphized it" (p. 6). He asked a very relevant question that should still be asked today:

"What can the psychiatrist's image of his patient be when he sees himself, as therapist, not as an engaged human being acting as a healer, but as an information processor following rules, etc.?"

That question gets at the heart of the moral questions raised when someone chooses to use AI to generate content. When an individual uses AI to generate a thank-you letter or work feedback for employees, what is the user's image of the person to whom they are writing? Do they see the recipient of the thank-you letter as not important enough for a personally crafted note? And what about the employee work feedback? Does the supervisor not see the employee as a human being worthy of being engaged with? These are the moral and ethical questions Weizenbaum thought about back in the 60s and 70s concerning the turning over of certain things we do to computers, but it seems we have forgotten his wisdom today in our race to turn tasks over to AI.
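
Part of what made Weizenbaum's alarm so pointed is how mechanically simple ELIZA actually was. The sketch below is a minimal, hypothetical illustration in the spirit of ELIZA-style pattern matching; the rules and phrasings are my own invention, not Weizenbaum's actual script. A handful of canned templates simply reflect the user's own words back, with no understanding anywhere in the loop.

```python
import random
import re

# Hypothetical ELIZA-style rules (my illustration, not Weizenbaum's script).
# Each rule pairs a pattern with response templates that reuse the match.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
    (r".*", ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment):
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(user_input):
    text = user_input.lower().strip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(templates).format(*(reflect(g) for g in match.groups()))

print(respond("I feel anxious about my job."))
# e.g. "Why do you feel anxious about your job?"
```

That a few substitution rules like these could convince professionals they were witnessing therapy is precisely the anthropomorphizing Weizenbaum found so startling.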

"If I choose to use ChatGPT to generate a thank-letter, what is the harm?" an individual might say. Sure there is no physical harm, but there is a level of subterfuge in both these cases that is questionable. Sincerity can really be questioned in these instances because the user of generative AI has not thoughtfully engaged in word selection, syntax, and the craft of the language, so, can they say they are truly thankful, and that they really mean the feedback provided both positive and negative?

Weizenbaum captured the core of what is wrong with relying on computer technologies to do the things we humans should sometimes do for ourselves. That core idea is that the choice to use AI technologies is a political choice. Simply turning to AI to craft a letter, email, or evaluation can be an indication of what you really think of the recipient. Or it can communicate a conception of that individual as not worthy of your attention. Weizenbaum also wrote in Computer Power and Human Reason:

"...there are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them" (p. x).

In the Age of AI Hype, we would do well to listen and to act introspectively when we ask our computer tools to do things for us. Doing the task ourselves just might be the right thing, even if it takes more of our time.

Weizenbaum, J. (1976). Computer Power and Human Reason. New York, NY: W. H. Freeman and Company.

Sunday, August 10, 2025

Is It the Responsibility of Educators to Teach Students How to Use AI Effectively and Ethically? Maybe Not

I realize the AI-in-Education Evangelical Movement and its faithful will find what I post here to be questions posed on the edge of blasphemy, but this uncritical acceptance of Silicon Valley's latest raises some serious questions and needs a counter-viewpoint.

Why is it that every time Silicon Valley dumps one of its new inventions on the world, it suddenly becomes the responsibility of educators to sanitize that technology's use by teaching students how to use it ethically? Why is there absolutely no responsibility for these companies to engineer safe-to-use products to begin with? Perhaps, because Silicon Valley keeps dumping these products on us, they should pay a special education tax to fund all the work educators have to do to put out the fires generated by these so-called "innovative" products.

In the mid-1990s, at the emergence of the internet, teachers were tasked and admonished to adopt the internet and to teach students how to use it ethically and effectively. With all the garbage on there now, that can be rather easy. Just treat it like teachers treated the National Enquirer in the 70s: they told us simply not to believe what we read in those things.

Fast forward to the advent of social media. Again, we as educators were told that we had to adopt social media in order to better communicate, and to teach students how to use it ethically and correctly. This quickly became impossible, because these companies, like Facebook, engineered their products through their algorithms to promote the bull - - - - and the most outrageous content over anything else, like the boring truth, all for the sake of click-profits.
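
To make the mechanics of that claim concrete, here is a toy, hypothetical sketch of an engagement-first feed ranker; the field names and weights are my own invention, not Facebook's actual algorithm. The point is structural: when the scoring objective rewards predicted clicks and shares and nothing rewards accuracy, the outrageous post wins by construction.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's guess at click-through
    predicted_shares: float   # model's guess at reshares
    accuracy: float           # never consulted by the ranker below

def engagement_score(post: Post) -> float:
    # Hypothetical weights: only engagement counts; truth contributes nothing.
    return 3.0 * post.predicted_shares + 1.0 * post.predicted_clicks

feed = [
    Post("OUTRAGEOUS claim!!!", predicted_clicks=0.9, predicted_shares=0.8, accuracy=0.1),
    Post("Boring, careful correction", predicted_clicks=0.2, predicted_shares=0.05, accuracy=0.95),
]

# The outrageous post ranks first because the objective never sees accuracy.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.text}")
```

Whatever the real systems look like inside, any objective of this shape makes "teach students to use it ethically" a losing assignment.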

Has anyone stopped to think that perhaps it is time to hold these Silicon Valley companies responsible for the products they foist upon the world? Europe is starting to get it. I realize that in a country with the "guns-don't-kill-people-people-kill-people" mentality, this is blasphemy. After all, social media products don't spew misinformation and nonsense, people do, even though social media is an engineered megaphone for it.

Now AI is foisted on the world, with its engineered flaws, its potential for abuse, and its ability to generate nonsense, and we are allowing companies like OpenAI to shift responsibility to educators in a sanitation project to make their products more useful, ethical, and safe.

In my opinion, that is twisted ethical thinking. Why not make these tech companies responsible for their flawed products to begin with, instead of letting them focus on padding profits and CEO salaries?

Maybe instead of doing AI sanitation for these companies, we need to choose purposely not to engage or participate until they clean up their act. That is a perfectly ethical choice too.

#AIEducation #AILeadership #AI #ChatGPT #OpenAI



Saturday, August 2, 2025

Important Questions to Ponder in AI Adoption

"The question of who demonized work is an important one, for as anyone knows, it can give great joy and be immensely satisfying when it is something one genuinely likes doing. Not much mechanized labor today falls into that category, whether it is sitting in front of a computer or tending a machine in a factory."  Nichols Fox, Against the Machine


Much of the AI marketing propaganda actually seeks to demonize work, whether it intends to or not, in order to sell products. All the talk about "automation" of tasks is about turning over to a machine work that a person currently does. "AI will take away all the tasks you do not enjoy doing," goes the promo-rhetoric. That is certainly appetizing, for who does not wish to be rid of the annoying, monotonous tasks that are a part of our everyday lives?

But historically, machinery like AI has done much more than rid humans of less desirable tasks; it has also replaced genuine, enjoyable, fulfilling work for some. Think about the craftsmen who made things for us before machines took over. Now, don't get me wrong, I am not advocating against machines and manufacturing altogether, for unquestionably a lot of good has come from them, and I am a person who has benefited from them. I will also add that even craftsmen adopted machines and tools that either replaced undesirable tasks or made their work more timely. What I am asking for is a bit of sober-minded, critical thinking in the midst of all this AI Hype. While AI can certainly assist in our work, the central questions for me are: "Just what am I willing to outsource to mechanical systems?" and "What tasks am I unwilling to give up because they are necessary for me to carry out my work?"

The sales pitch I hear from AI salespersons and AI evangelists is often, "It will make you more efficient. It will save you time." But are the tasks I am engaged in meant to be efficient? That is an important question, because not everything we do fits the mechanical, business definition of efficiency. Take this writing: composing it is far from efficient. It certainly does not lend itself to a time-schedule. And I will add that the criterion of efficiency leaves no room for personal task-fulfillment, unless of course one of your obsessions is to be efficient in all things. Which reminds me of a poem I used to teach in high school English called "Mr. Artesian's Conscientiousness" by Ogden Nash.

In this poem, an office worker, Mr. Artesian, is so obsessed with time that his sole objective becomes "saving oodles of time," the hallmark of the efficiency obsession. He takes on a program to reduce the time he spends on daily activities like sleeping, eating, and shaving, until in the end he resents even the time wasted riding an elevator, so he jumps out his fiftieth-floor window to save time. No doubt truly ridiculous things are sometimes undertaken for the sake of efficiency, and outsourcing a task to AI could certainly be one of them. This means thinking critically for yourself about just what can be outsourced to this new-fangled technology. In addition, there just might be some personally fulfilling reasons not to outsource.

Ultimately, and I am being cynical here, this uncritical hype about outsourcing our work and lives to technology just might be another way for Silicon Valley tech companies to get even more of our personal data for economic exploitation.


The AI companies and evangelists certainly have an interest in getting us to think we hate our work. After all, if they can get us to see our work, or parts of it, as undesirable, they stand a better chance of making bundles. They want us, you and me, to be AI users, consumers.


In the end, it is OK to be sober, slow-moving, and not in a hurry in your decisions about whether or not you will use AI to outsource your work. It is an old marketing tactic to make customers think they have to buy the product now.

Friday, July 25, 2025

Don't Fall for the "AI Leadership" Rhetoric: Be a Leader Who Thinks First

"Technology was the new religion of the people. There was the sense that there was no problem that technology could not solve." Carl Benedikt Frey, "The Technology Trap"

One major truth about AI...solutionism is very much alive. The belief that technology has the answers to all our problems is seen in so much of the "promo-rhetoric" around this technology.


We learn nothing from the past because we have blinded ourselves to any view backward, for if we looked there, we would see that very often the technologies we develop do solve their intended problems, but bring with them a host of unintended and "unforeseen" consequences. (Could it be this "unforeseenness" is related to the blinders we wear toward the past?) For every tech solution, if you use the lens of history, there are always less-than-desirable and sometimes catastrophic consequences.


By wearing the lenses of solutionism, leaders are blind to the possibility that the most expedient solution is sometimes not technological, and that there just might be an answer that does not require computer chips and screens.


AI offers some benefits, no doubt, but to religiously and fanatically claim that it will cure a host of our ills, without providing any solid evidence, is cultish. That has never ended well for those who unquestioningly follow such claims.


Instead, THINK...think for yourself. Do not simply accept the latest prognostications from some tech enthusiast who has a financial interest in selling you that product, AI products included.


Leading means thinking critically and not always falling immediately for the false promises of solutionistic rhetoric.

Saturday, July 19, 2025

Losing the Smartphone: You Can Choose Who's In Control, You or That Infernal Device of Intrusion and Behavior Modification

Today, I got into the car, and on the way to pick up a take-out order, I discovered I had left my cell phone behind. I was reminded of this fact when I looked at the center console screen and it read: "No cell phone found." There was an instant, organic feeling of panic for just a moment, accompanied by the thought, "Oh no! I've forgotten my phone!"

This discovery and these thoughts were immediately followed by "So what!" and I began to realize that forgetting it was not a bad thing. Actually, it was a good thing. I had left behind my 21st-century intrusion and life-modification device, and all was going to be just fine. So why did I react with that instant "Oh no!"?

It is in these moments, and all moments like them, that I am reminded of just how successfully Silicon Valley and the Techno-Optimists have marketed to and conditioned all of us with the notion that "We must be connected at all times, 24 hours a day."

I was additionally reminded of another, equally important fact: I control my life. I can choose to leave my technological-intrusion device behind when I CHOOSE to do so. It is not a blasphemous act. It is not an act of the Luddite. The world, and my world, is not going to end. Despite the techno-optimist babble and blather, I am not irrelevant, nor am I left behind without that infernal mechanism of beeping, chirping, and ringing.

Without that thing connecting me to the intruding world outside and beyond my immediate one, I am perhaps at that moment closer to just being and living than ever.

I have to add a final technological note of disclosure...I have already turned off all notifications on my cell phone. This has been liberating and has had no negative effect on my work or life. I am in charge of my device. I check my messages, email, and even the news when I CHOOSE. What's more, I've learned that this works fine. Rarely do any of these intruders on my life and my thoughts require immediate attention, despite what the techno-optimists and marketers say.