Friday, January 30, 2026

EdTech AI Promoters Need a New Argument...They Sound Like a Tired Over-Aired Commercial

 LinkedIn AI promoters need new promo tactics.

The tired, worn-out statement used by AI cheerleaders:


AI isn’t going to replace _____ (insert whatever job title the person is peddling AI to, i.e., teacher, programmer, or school leader); it will replace _____ (insert same from above) who do not use AI.


One wishes the AI cheerleaders would at least come up with some original arguments instead of recycling old, tired, unproven statements like this one.


If you want to convince someone of the necessary utility of your pet product, at least try to make some new, supported, valid arguments.


AI cheerleaders’ posts on here are like that same boring commercial that runs every break while you're watching the news. And, I would add, they are about as convincing. One can only impatiently wait until the tired commercial is over.

Perhaps It's Time for a "Screenless" Charter School Focused on Teaching How to Be Human

Perhaps it is time to establish a “screenless” charter school in NC. Its central mission could be to educate without using EdTech as a crutch for learning, or technology as a means of controlling teachers, students, and the learning itself.

The school could make a commitment to teaching students to be critical, independent, and responsible citizens, not compliant consumers of the latest product on offer from Silicon Valley and Big Tech.

The school could still utilize technology, but that technology would be under the control of teachers and parents, not ed leaders and EdTech consultants.

The goal of this screenless charter school would be to create a space where teachers connect with students and their parents without the constant mediation of impersonal devices engineered to addict and capture attention. A place for people, not machines.

Students would learn the foundations that would make them critical consumers instead of EdTech fad chasers.


Wednesday, January 28, 2026

Do I Really Care If My Dishwasher Is Silicon Valley Smart?

 

Photo by Author: Implements of Progress

“Smart technologies” are all around us. Walk into an electronics store, and there are gadgets everywhere, from light switches to TVs, proudly wearing the label “smart.” But do we really need “smart” devices? Does my overhead light turning on by itself really provide any value? Am I just so lazy and obsessed with efficiency that flipping a light switch won't do? Everyone from tech companies to car companies hopes you fall for smartness. They have worked overtime to make sure the “smart” label is something that sells. In the end, who's the sucker who has really earned the label "dumb"?

What does “smart” mean? I think the simplest definition of this term is offered by Jathan Sadowski in his book Too Smart: How Digital Capitalism Is Extracting Data, Controlling Our Lives, and Taking Over the World. He writes:

“...’smart’ means a thing is embedded with digital technology for data collection, network connectivity, and enhanced control.”

The question is, do I really care whether my TV collects data, is connected to a network, and provides enhanced control? Perhaps I do for some of these. I like being able to stream to my TV, because it is a much easier way of getting programming than an antenna or cable, so network connectivity I care about. The other two, well, not so much.

I really do not want my devices collecting data about me and my usage. Someone out there knowing which shows I watch is not something I value at all. I don’t even like the recommendations that pop up in Netflix and would prefer the old-fashioned way of reading descriptions and then deciding what I want to watch. And the “control?” Whose control, I would ask? If it means I have more control, I thought I was already in control. If it means someone else's, that's creepy.

I once purchased a dishwasher that declared itself “smart.” Admittedly, that feature seemed like an added plus at first. Then I realized: I bought a dishwasher to wash the dishes, not to send my usage data back to some company in the cloud so that it can profit from it. I disabled the smartness, and the dishwasher continues to do what I bought it for in the first place: wash the dishes.

"Smartness" has been peddled by companies as innovative and must-have. When Big Tech throws around its “innovative” products and ideas, we would do well to ask: “For whom is this product really innovative?” “Who is really gaining the most from this “smartness” thing?

Chances are, the answers to those questions are not "me," or at least not entirely me. The whole industry of Big Tech reminds me of Leroy, the used car salesman. He’d tell you the car could fly if it meant you would buy it. Silicon Valley tech companies have earned that same label: slimy salesmanship.


Wednesday, September 3, 2025

There's a Social Media Platform For You: Will Artificial Intelligence Follow Suit?

Each form of social media has its own identity for sure.

X, formerly known as Twitter, is the Kingdom of the Trolls led by the Self-Appointed Grand Poobah of Trolls, Elon Musk. It is a place where you can be abused to your heart's content. Nothing of substance there anymore. Keep up with the latest conspiracy theories.


Facebook is certainly a place to connect with old friends, but it is also a place where you share a bit about yourself and Zuckerberg et al. capture you and sell you to the highest bidder. Join and be exploited is its new motto. It's also a place to live for the Likes, but I think it has become the nursing home of social media, because I'm told only old people hang out there now.


Next, there's LinkedIn, where these words are posted. It is a place where you can expose yourself to the leadership and business greatness of others who post in self-glorification daily, sometimes hourly. You can also brag and boast about your own greatness without guilt. And if you accept one of those "I want to connect" messages, suddenly your messaging box lights up with sales pitches, which is the real reason they wanted to connect.


Finally, I've experienced Bluesky. There, if you are liberal, you will feel at home; but do not disagree with the liberal line, or suddenly you get trolled, and if you don't come around, you find yourself blocked. There is some thoughtful discussion there, but you must participate in accepted ways, or you sit there by yourself forever.


Now some may ask, "Why not TruthSocial?" Well, I doubt very seriously it's the place for a totally objective news experience. I also suspect that, as on the other "Twitter-like" social media places, you simply get trolled, by those of an entirely different persuasion than the ones I found on Bluesky. I'm sure you can join the conspiracy theorists there as well.


Where does this leave one regarding social media? I can't take any of these places seriously anymore.


One can only hope that Generative AI platforms don't end up in the same Land of Frivolity, but I'm not hopeful, because the same Silicon Valley crowd invented it too. The word "artificial" in this situation makes perfect sense, but I am not sure about the other word.

Saturday, August 23, 2025

Uncritical Promotion of AI: Educators Should Know Better

 "...the critique of artificial intelligence is not a Luddite call to break machines, but a call to break harmful assumptions about machines and thus enable the construction of better tools and, more important, better societies." Joy Buolamwini, Umasking AI: My Mission to Protect What Is Human in a World of Machines

In the early 1800s, the Luddites were laborers who took to breaking factory machinery after their jobs were dissolved when factory owners adopted machines such as mechanized looms and weaving equipment. They received the "anti-progress" label for that opposition, but they were engaged in a protest that would be natural for anyone whose life has been upended by the total loss of a livelihood.


Like the AI equity researcher Joy Buolamwini, I want to redefine the "Luddite call to break machinery" as a call to "break the false marketing and promo-rhetoric" being circulated by those who should know better: K-12 and post-secondary educators. Of all people, these educators should not be leading the way in parroting the marketing rhetoric of AI companies that stand to make a bundle off users and their data, or of the AI evangelists, experts, and promoters who stand to make money and advance their careers by coaxing naive educators and educational leaders into hiring them for training and keynote speaking opportunities. Instead, educators should be leading the way in critically interrogating AI and its promo-rhetoric by arming themselves with critical thought.


Asking the tough questions is vital when these AI companies and their evangelists start spouting rhetoric that frames the use of AI as an "either-or" proposition. "You either teach students AI and use AI in your classrooms or become guilty of educational malpractice for not preparing your students for the future," they say. "Because AI is inevitable, you either use AI or become irrelevant," goes another of their so-called reasons. Never mind that there is absolutely no evidence for either claim, nor can there be, because these are unsupportable predictions about a future no one can see.


Armed with critical thought, educators can see for themselves that employing AI in the classroom is a choice, not an imperative. As a choice, it is perfectly legitimate and logical to abstain from using AI in one's teaching or life.


There is no reliable crystal ball saying that AI is going to be "revolutionary" and "disruptive" except the one used by the seers of Silicon Valley, who make that claim for every new creation they spring upon the world.


There are logical reasons for choosing to abstain or to use, but to frame the whole argument as inevitability or imperative is deceptive, and educators who do that should know better as well.


As for the relevancy argument? If it takes a machine to make you and your teaching relevant, I would ask a simple question: "Have you ever considered whether what you are teaching is relevant in the first place?"

Sunday, August 17, 2025

Just Because Generative AI Can Do It, Doesn't Mean Use It: Thoughts on Ethical Use

 

 "...we, all of us, have made the world too much into a computer, and that this remaking of the world in the image of the computer started long before there were any electronic computers. Now that we have computers, it becomes somewhat easier to see this imaginative transformation we have worked on the world. Now we can use computers itself--that is the idea of the computer--as a metaphor to help us understand what we have done and are doing." Joseph Weizenbaum, Computer Scientist and Professor at MIT, in his book Computer Power and Human Reason

Weizenbaum is perhaps best known for his creation of ELIZA in the 1960s, an early AI-like program designed to play the role of a psychotherapist. The responses this program generated were so uncanny that mental health professionals and many others started talking about it as a therapeutic tool for application in the mental health field. Weizenbaum could not believe that actual professionals seriously thought his program could automate psychotherapy. He was also "startled," as he put it, that people "became emotionally involved with the computer and how unequivocally they anthropomorphized it" (p. 6). He asked a very relevant question that should still be asked today:

"What can the psychiatrist's image of his patient be when he sees himself, as therapist, not as an engaged human being acting as a healer, but as an information processor following rules, etc.?"

That question gets at some of the moral questions raised when someone chooses to use AI to generate content. When an individual uses AI to generate a thank-you letter or work feedback for employees, what is the user's image of the person to whom they are writing? What is the user's image of the recipient of the thank-you letter? Do they see that person as not important enough for the crafting of a personalized thank-you? What about the employee work feedback? Does the supervisor not see the employee as a human being worthy of being engaged with? These are the moral and ethical questions Weizenbaum raised back in the 60s and 70s about turning over certain things we do to computers, but it seems we have forgotten his wisdom today in our race to turn tasks over to AI.

"If I choose to use ChatGPT to generate a thank-letter, what is the harm?" an individual might say. Sure there is no physical harm, but there is a level of subterfuge in both these cases that is questionable. Sincerity can really be questioned in these instances because the user of generative AI has not thoughtfully engaged in word selection, syntax, and the craft of the language, so, can they say they are truly thankful, and that they really mean the feedback provided both positive and negative?

Weizenbaum captured the core of what is wrong with relying on computer technologies to do the things we humans should sometimes do for ourselves: the choice to use these technologies is a political choice. Simply turning to AI to craft a letter, email, or evaluation can be an indication of what you really think of the recipient. It can communicate a conception of that individual as not worthy of your attention. Weizenbaum also wrote in Computer Power and Human Reason:

"...there are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them" (p. x).

In the Age of AI Hype, we would do well to pause and reflect when we ask our computer tools to do things for us. Doing a task ourselves just might be the right thing, even if it takes more of our time.

Weizenbaum, J. (1976). Computer Power and Human Reason. New York, NY: W. H. Freeman and Company.

Sunday, August 10, 2025

Is it the Responsibility of Educators to Teach Students How to Use AI Effectively and Ethically? Maybe Not

I realize the AI-in-Education Evangelical Movement and its faithful will find what I post here to be questions posed on the edge of blasphemy, but this uncritical acceptance of Silicon Valley’s latest raises serious questions and needs a counter-viewpoint.

Why is it that every time Silicon Valley dumps one of its new inventions on the world, it is suddenly the responsibility of educators to sanitize that technology’s use by teaching students how to use it ethically? Why is there absolutely no responsibility for these companies to engineer safe-to-use products to begin with? Perhaps, because Silicon Valley keeps dumping these products on us, they should pay a special education tax to fund all the work educators must do to put out the fires generated by these so-called “innovative” products.

In the mid-1990s, at the emergence of the internet, teachers were tasked and admonished to adopt it and teach students how to use it ethically and effectively. With all the garbage on there now, that can be rather easy. Just treat it like teachers treated the National Enquirer in the 70s: they told us simply not to believe what we read in those things.

Fast forward to the advent of social media. Again, we as educators were told that we had to adopt social media in order to better communicate, and to teach students how to use it ethically and correctly. This quickly became impossible, because companies like Facebook engineered their products, through their algorithms, to promote the bull - - - - and the most outrageous content over anything else, like the boring truth, all for the sake of click-profits.

Has anyone stopped to think that perhaps it is time to hold these Silicon Valley companies responsible for the products they foist upon the world? Europe is starting to get it. I realize that in a country with the “guns-don’t-kill-people-people-kill-people” mentality, this is blasphemy. After all, social media products don’t spew misinformation and nonsense, people do, even though social media is an engineered megaphone for it.

Now, AI is foisted on the world, with its engineered flaws ripe for abuse and its ability to generate nonsense, and we are allowing companies like OpenAI to shift responsibility to educators in a sanitation project to make their products more useful, ethical, and safe.

In my opinion, that is twisted ethical thinking. Why not make these tech companies responsible for their flawed products to begin with, instead of letting them focus on padding profits and CEO salaries?

Maybe instead of doing AI sanitation for these companies, we should purposely choose not to engage or participate until they clean up their act. That is a perfectly ethical choice too.

#AIEducation #AILeadership #AI #ChatGPT #OpenAI