Friday, February 6, 2026

Don't Believe the Silicon Valley Marketing Tactic: AI Is Not Inevitable

Silicon Valley tech companies have used clever marketing, favorable public opinion, and shiny, magical gadgets to ensnare us with technology designed to be addictive, invasively surveillant, and exploitative.

It is acceptable to question the reality these techno-oligarchs and digital capitalists claim to be building and to see that they aren't actually making our world better. They only make themselves richer, which is evident from the homes they buy, the cars they drive, and even the clothes they wear. They are prospering at the expense of their users.

Here are just a few examples of their past promises and what they have delivered instead.

The web was supposed to bring glorious access to content that was free, current, and reliable. Instead, we have an internet garbage dump and sewer of nonsense. Search, and you don't know what excrement you will get next, and the stench only increases.

Next, social media was supposed to bring us closer together and connect the globe. Instead, we have never been more polarized and divided. Facebook and Twitter have proven to be misinformation machines and BS spreaders. Even LinkedIn is a BS-marketing platform where, if you can package it and sell it to get clicks, you become an "influencer." TikTok, YouTube, all are platforms that let you spread excrement and get paid for it.

Then there were cell phones, which were supposed to provide us constant access to all of this: the web, social media, and the rest. We could always be connected. Instead, they offer always-on-demand addiction and isolation. They even make us less social; just watch a family sitting in a restaurant, all engaged with screens instead of each other. There is connection, but it is to whatever these tech companies want us connected to, so they can sell ads and make money from our addictions and data.

Now it is AI. It is here with its promises of taking away all the dirty, distasteful work we don't like doing. It is going to solve all our problems. It promises to make us even more "efficient and free." What will its "instead" be? Even today there are hints.

Instead of fulfilling its promises, AI will bring us a more polluted world because of the increased power demands of its server farms. Coal plants that were going to be decommissioned are being kept online, further polluting the environment. There is even talk of restarting a nuclear plant on the East Coast that almost made a big swath of Pennsylvania into the American version of Chernobyl.

In addition, instead of fulfilling its promises, AI is causing tech companies to consume even more scarce fresh water to cool their massive server farms in many areas of the country, at a time when it is becoming harder and harder to provide safe drinking water to populations.

Finally, instead of fulfilling its promises, AI is adding more garbage and sewage to the internet garbage dump with its growing pile of AI slop. The web will become more and more a place of misinformation and nonsense. One can only imagine what it will be in 15 or 20 years!

As these AI companies and their boosters keep peddling their products as replacements for human workers, we seem to be getting closer to the machine utopia Kurt Vonnegut describes in his novel Player Piano, where people who have no purpose in life live in cities with no future and no hope.

Here's the lesson: NOTHING BIG TECH INVENTS WAS OR IS INEVITABLE. Our purpose in life is not to use their products or to adapt our lives to their products. We can, with leadership and vision, demand that they create products that serve our ends and not just theirs.

Educators who are scrambling to "adapt to AI" because they have been sold on its inevitability are misguided. There is no evidence that it is inevitable in its current form or in any form. Choices can be made, and we do not have to surrender in order to make these products successful.


Friday, January 30, 2026

EdTech AI Promoters Need a New Argument...They Sound Like a Tired Over-Aired Commercial

 LinkedIn AI promoters need new promo tactics.

The tired, worn-out statement used by AI cheerleaders:


AI isn't going to replace _____ (insert whatever job title the person is peddling AI to, e.g., teacher, programmer, or school leader); it will replace _____ (insert the same from above) who do not use AI.


One would wish that AI cheerleaders would at least come up with some original arguments, instead of using old, tired, and unproven statements like this.


If you want to convince someone of the necessary utility of your pet product, at least try to make some new, supported, valid arguments.


AI cheerleaders' posts on here are like that same boring commercial that runs during every break while you are watching the news. And, I would add, they are about as convincing. One can only wait impatiently until the tired commercial is over.

Perhaps Its Time for a "Screenless" Charter School Focused on Teaching How to Be Human

Perhaps it is time to establish a "screenless" charter school in NC. Its central mission could be to educate without using EdTech as a crutch for learning or technology as a means of controlling teachers, students, and learning.

The school could make a commitment to teaching students to be critical, independent, and responsible citizens, not obedient consumers of the latest product on offer from Silicon Valley and Big Tech.

The school could still utilize technology, but that technology would be under the control of teachers and parents, not ed leaders and EdTech consultants.

The goal of this screenless charter school would be to create a space where teachers connect with students and their parents without the constant mediation of impersonal devices whose purpose is to addict and capture attention. A place for people, not machines.

Students would learn the foundations that would make them critical consumers instead of EdTech fad chasers.


Wednesday, January 28, 2026

Do I Really Care If My Dishwasher Is Silicon Valley Smart?

 

Photo by Author: Implements of Progress

"Smart technologies" are all around us. Walk into an electronics store, and there are gadgets everywhere, from light switches to TVs, proudly wearing the label "smart." But do we really need "smart" devices? Does my overhead light turning on by itself really provide any value? Am I just so lazy and obsessed with efficiency that flipping a light switch won't do? Everyone from tech companies to car companies hopes you fall for smartness. They have worked overtime to make sure the "smart" label is something that sells. In the end, who is the sucker who has really earned the label "dumb"?

What does “smart” mean? I think the simplest definition of this term is offered by Jathan Sadowski in his book Too Smart: How Digital Capitalism Is Extracting Data, Controlling Our Lives, and Taking Over the World. He writes:

"...'smart' means a thing is embedded with digital technology for data collection, network connectivity, and enhanced control."

The question is: do I really care whether my TV collects data, is connected to a network, and provides enhanced control? Perhaps I do for some of these. I like being able to stream to my TV because it is a much easier way of getting programming than antenna or cable, so network connectivity I care about. The other two, well, not so much.

I really do not want my devices collecting data about me and my usage. Someone out there knowing which shows I watch is not something I value at all. I don't even like the recommendations that pop up in Netflix and would prefer the old-fashioned way of reading descriptions and then deciding what I want to watch. And the "control"? Whose control, I would ask? If it means I have more control, I thought I was already in control. If it means someone else's, that's creepy.

I once purchased a dishwasher that declared itself to be "smart." Admittedly, that feature seemed like an added plus at first. Then I realized: I bought a dishwasher to wash the dishes, not to send my usage data back to some company in the cloud so that it can profit from it. I disabled the smartness, and the dishwasher continues to do what I bought it for in the first place: wash the dishes.

"Smartness" has been peddled by companies as innovative and must-have. When Big Tech throws around its “innovative” products and ideas, we would do well to ask: “For whom is this product really innovative?” “Who is really gaining the most from this “smartness” thing?

Chances are, the answer to those questions is not "me," or at least not entirely me. The whole industry of Big Tech reminds me of Leroy, the used-car salesman. He'd tell you the car could fly if it meant you would buy it. Silicon Valley tech companies have earned that same label: slimy salesmanship.


Wednesday, September 3, 2025

There's a Social Media Platform For You: Will Artificial Intelligence Follow Suit?

Each form of social media has its own identity for sure.

X, formerly known as Twitter, is the Kingdom of the Trolls, led by the Self-Appointed Grand Poobah of Trolls, Elon Musk. It is a place where you can be abused to your heart's content. Nothing of substance there anymore, but you can keep up with the latest conspiracy theories.


Facebook is certainly a place to connect with old friends, but it is also a place where you share a bit about yourself, and Zuckerberg et al. capture it and sell you to the highest bidder. Join and be exploited is its new motto. It's also a place to live for the Likes, but I think it has become the nursing home of social media, because I'm told only old people hang out there now.


Next, there's LinkedIn, where these words are posted. It is a place where you can expose yourself to the leadership and business greatness of others who post in self-glorification daily and sometimes hourly. You can also brag and boast about your own greatness without guilt. And, if you received one of those "I want to connect" messages, you can connect and suddenly your Messaging Box lights up with sales pitches, which is the real reason they wanted to connect.


Finally, I've experienced Bluesky. There, if you are liberal, you will feel at home; but do not disagree with the liberal line, or suddenly you get trolled, and if you don't come around, you find yourself blocked. There is some thoughtful discussion there, but you must participate in accepted ways, or you sit by yourself forever.


Now some may ask, "Why not TruthSocial?" Well, I doubt very seriously that it's a place for a totally objective news experience. I also suspect that, like the other Twitter-like social media places, you simply get trolled, only by those of an entirely different persuasion than I found on Bluesky. I'm sure you can also join the conspiracy theorists there as well.


Where does this leave one regarding social media? I can't take any of these places seriously anymore.


One can only hope that Generative AI platforms don't end up in the same Land of Frivolity, but I'm not hopeful, because the same Silicon Valley crowd invented it too. The word "artificial" in this situation makes perfect sense, but I am not sure about the other word.

Saturday, August 23, 2025

Uncritical Promotion of AI: Educators Should Know Better

 "...the critique of artificial intelligence is not a Luddite call to break machines, but a call to break harmful assumptions about machines and thus enable the construction of better tools and, more important, better societies." Joy Buolamwini, Umasking AI: My Mission to Protect What Is Human in a World of Machines

In the early 1800s, the Luddites were laborers who took to breaking factory machinery after their jobs were eliminated when factory owners adopted machines such as mechanized looms and weaving frames. They received the "anti-progress" label for that opposition, but they were ultimately engaged in a protest that would be natural for anyone whose life had been upended by the total loss of a livelihood.


Like the AI equity researcher Joy Buolamwini, I want to redefine the "Luddite Call to Break Machinery" as a call to "Break the False Marketing and Promo-Rhetoric" being circulated by those who should know better: K-12 and post-secondary educators. Of all people, educators should not be leading the way in following the marketing and promotional rhetoric of AI companies, which stand to make a bundle off users and their data, or of the AI evangelists, experts, and promoters who stand to make tons of money and advance their careers by coaxing naive educators and educational leaders into hiring them for training and keynote speaking opportunities. Instead, educators should be leading the way in critically interrogating AI and its promo-rhetoric by arming themselves with critical thought.


Asking the tough questions is vital when these AI companies and their evangelists start spouting rhetoric that presents the utilization of AI as an "either-or" proposition. "You either teach students AI and use AI in your classrooms, or you become guilty of educational malpractice for not preparing your students for the future," they say. "Because AI is inevitable, you either use AI or become irrelevant," goes another of their so-called reasons. Never mind that there is absolutely no evidence for either claim, nor can there be, because these are unsupportable predictions of a future no one can see.


Armed with critical thought, educators can see for themselves that employing AI in the classroom is a choice, not an imperative. As a choice, it is perfectly legitimate and logical to abstain from using AI in one's teaching or life.


There is no reliable crystal ball saying that AI is going to be "revolutionary" and "disruptive" except the one used by the seers of Silicon Valley, who make that claim for every new creation they spring upon the world.


There are logical reasons for choosing to abstain or to use, but to frame the whole argument as inevitable or imperative is deceptive, and educators who do that should know better as well.


As for the relevancy argument? If it takes a machine to make you and your teaching relevant, I would ask a simple question: "Have you ever thought about whether what you are teaching is relevant to begin with?"

Sunday, August 17, 2025

Just Because Generative AI Can Do It, Doesn't Mean Use It: Thoughts on Ethical Use

 

 "...we, all of us, have made the world too much into a computer, and that this remaking of the world in the image of the computer started long before there were any electronic computers. Now that we have computers, it becomes somewhat easier to see this imaginative transformation we have worked on the world. Now we can use computers itself--that is the idea of the computer--as a metaphor to help us understand what we have done and are doing." Joseph Weizenbaum, Computer Scientist and Professor at MIT, in his book Computer Power and Human Reason

Weizenbaum is perhaps best known for creating ELIZA in the 1960s, an early AI-like program designed to play the role of a psychotherapist. The responses the program gave were so uncanny that mental health professionals and many others started talking about it as a therapeutic tool for the mental health field. Weizenbaum could not believe that actual professionals seriously thought his program could automate psychotherapy. He was also "startled," as he put it, that people "became emotionally involved with the computer and how unequivocally they anthropomorphized it" (p. 6). He asked a very relevant question that should be asked today:

"What can the psychiatrist's image of his patient be when he sees himself, as therapist, not as an engaged human being acting as a healer, but as an information processor following rules, etc.?"

That question gets at the heart of the moral questions raised when someone chooses to use AI to generate content. When an individual uses AI to generate a thank-you letter or work feedback for employees, what is the user's image of the person to whom they are writing? What is the user's image of the recipient of the thank-you letter? Do they see that person as not important enough to merit a personally crafted thank-you? What about the employee work feedback? Does the supervisor not see the employee as a human being worthy of being engaged with? These are the moral and ethical questions that Weizenbaum thought about back in the 60s and 70s concerning the turning over of certain things we do to computers, but it seems we have forgotten his wisdom today in our race to turn tasks over to AI.

"If I choose to use ChatGPT to generate a thank-letter, what is the harm?" an individual might say. Sure there is no physical harm, but there is a level of subterfuge in both these cases that is questionable. Sincerity can really be questioned in these instances because the user of generative AI has not thoughtfully engaged in word selection, syntax, and the craft of the language, so, can they say they are truly thankful, and that they really mean the feedback provided both positive and negative?

Weizenbaum captured the core of what is wrong with relying on computer technologies to do the things we humans should sometimes do for ourselves: the choice to use AI technologies is a political choice. Simply turning to it to craft a letter, email, or evaluation can be an indication of what you really think of the recipient, or it can communicate a conception of that individual as not worthy of your attention. Weizenbaum also wrote in Computer Power and Human Reason:

"...there are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them" (p. x).

In the Age of AI Hype, we would do well to listen to Weizenbaum and act introspectively when we ask our computer tools to do things for us. Doing those things ourselves just might be the right thing, even if it takes more of our time.

Weizenbaum, J. (1976). Computer Power and Human Reason. New York, NY: W. H. Freeman and Company.