Thursday, April 2, 2026

The Next Time You Hear a School Leader Say "AI Is Not Going to Replace Teachers, It Will Replace Teachers Not Using AI," Think About What That Really Means

If "AI is not going to replace teachers, but will replace teachers who do not use AI," perhaps we should take a hard look at that statement, one used by many school leaders pushing this technology in their schools.

It says a great deal.

1-This school is authoritarian. You must use AI, even if you have proven to be effective without it. If you don't, I will replace you.

2-AI is your savior, accept it, or be gone.

3-Keep your opinions to yourself; they don't matter.

4-No room for critical thought or discussion about the use of AI in this school. Just do it.


When school leaders and AI advocates use this language, they hide their own authoritarian leadership style behind a statement designed to generate fear.


I would question whether I would even want to teach in a school run by such dictatorial tactics.



Educators Need to Teach True AI and Technology Literacy

Should we be afraid of AI? If you listen to the Seers of Silicon Valley, we should be shaking in our boots. AI is going to displace us from our jobs; turn us into Duracell batteries; and turn us into gurgling, nonthinking imbeciles, sitting in our homes with technology waiting on us hand and foot.

Not true. Besides, our Seers have gotten much wrong in the past, so why would we expect the Bill Gateses, Alex Karps, or Sam Altmans of the world to have access to anything that resembles our future? Moreover, their wealth and future are entirely dependent upon the fate of their now-favorite technology. That has always been the case.

My real concern here is not with their self-serving prognosticating nonsense, but with what we as educators should be doing if we really give a damn about what is being called “AI Literacy.” 

As a part of "AI Literacy," we should be teaching students the real function of these stories and helping them see them for what they really are and do. For starters:

1-They make it seem like there is only one possible direction for the development of AI, their chosen route. Not so.

2-They insist we are powerless to do anything about it and must accept the AI they have provided for us. Not really.

3-They purposely hide who is really going to win and benefit from AI, which includes them and all the minions and bottom-feeders gathering the scraps that fall from their table.

4-The Seers prevent any public debate about their version of AI, and curtail any questioning of the goods they are delivering. That’s Silicon Valley marketing tactics at their best.

5-They also prevent any questioning of the massive resource shift (water, power, minerals, human resources) to their benefit at the expense of everyone else. They are stealing resources for their own gain.

If we are going to teach students anything about AI, it should be to teach them critical thinking instead of turning AI into an object of worship. We did that with the PC, the Web, and social media, and are reaping the results.

True technology literacy teaches students about all aspects of every technology we use.

As educators, our responsibility is not to generate unquestioning users and consumers of the products developed by the Seers of Silicon Valley.

Our responsibility should transcend making students consumers of technology; it should be empowering them to shape the future with or without technologies. This is done by giving them the gift of critically analyzing what the Seers are saying and not saying.

At least by doing that, we keep our students from becoming the tools of the technologies they use. 

That’s AI literacy and technology literacy at its best!


Tuesday, March 31, 2026

Empowering Others Through True Technological Literacy

Our handheld devices and technologies are a problem, for ourselves and for our children.

Our worlds are now delivered to us by our devices. Because we insist on a world fitted to specification, personalized according to our beliefs, tastes, and opinions, this "delivered world," as philosopher of technology Gunther Anders called it in the 1950s, is brought to us by our own technologies.

We now no longer have to venture out into the world for ourselves, so, as Anders points out, we remain “inexperienced.” 

But as our rationale goes, venturing out and experiencing the world is inefficient; it's inconvenient; it's messy; it's complicated and uncertain. That's why we prefer its home delivery through our handheld devices.

Once, we had no choice. Life was a "journey of discovery." We went out into it, because that's what we had to do, and followed its paths wherever they led. We encountered many things, much of it unanticipated. We experienced for ourselves.

Now, with our gadgets in hand and around us, we allow them to lead us down the paths they have determined for us. Again, this is much easier and more efficient, because no time is wasted on deliberation or choosing. We no longer encounter a world of our own choosing.

Our devices present us with what Anders called an “effigy” of the world. This is a crude model, assembled by algorithms, designed to know better than we do what we want.

There is no longer any need to journey out and experience for ourselves, because our efficient technologies do all that for us.

This “home-delivered world” described by Anders is where we have now chosen to live.

Perhaps it is possible to disrupt the grip of this home-delivered existence by refusal, by resistance.

Putting down our devices and taking a walk around the neighborhood or reading a novel in the form of a physical book might be a start. Turning off the notification machines in our pockets is another. There are many.

By doing these things, we “dethrone the devices” in our lives and refuse home-delivery.

By dethroning devices in a child’s education we reconnect them to experience and teach them to refuse the home-delivered world. 

That’s technological literacy: empowering others to choose the terms of living for themselves.

Friday, March 27, 2026

Being a Moral Leader When It Comes to Technology Integration and Adoption

The Meta and YouTube lawsuits, in which their platforms were found to engineer addiction and cause great harm, mark the first time these Silicon Valley companies have been unable to hide behind the so-called “platform shield.”

In the book “Possible Minds: 25 Ways of Looking at AI,” computer scientist Rodney Brooks suggests that all these dangers we face with our technologies are due to how we have chosen to “engineer computation.”

For example, the constant virus threat we face and the user-data exploitation threats are the result of computational engineering decisions made by individuals with short-term profit and self-gain interests, not long-term vision.

In other words, Silicon Valley and Big Tech have repeatedly made engineering choices that have provided us with a computational world of nastiness, with threats of all kinds. These choices have brought us computer viruses and data exploitation, along with rich Silicon Valley CEOs like Mark Zuckerberg, who totally lack any moral leadership qualities.

Facebook and YouTube are just two companies that have been caught with engineered addiction platforms that actually harm users. There are others, and we, including educators and educational leaders, are complicit in allowing them to hide behind their platforms. Rodney Brooks writes:

“The computational platforms have become a shield behind which some companies hide in order to inhumanly exploit others.”

These companies manipulate and profit from their engineered platforms of addiction and data exploitation, and yes, we as educators are complicit.

Which makes me want to ask this question: Can we trust Silicon Valley and Big Tech, once again, with their latest inventions: large language models and all manner of artificial intelligence technologies?

Their track record sucks. The whole tech sector has transformed into a ghoulish industry, searching for new ways to exploit users.

Among the industry, business leaders, and most especially among educators and educational leaders, there has been a TOTAL LACK OF MORAL LEADERSHIP and restraint when it comes to these technologies.

Let’s face it, Silicon Valley has become the “Sodom and Gomorrah” of our age. No moral leadership seems to exist. “If it makes money, do it, and to hell with any unforeseen consequences,” is the thinking. After all, it was Facebook that touted the adage “Move Fast and Break Things,” and they have, repeatedly.

But educators and educational leaders, of all people, who have children in their care, should be the moral leaders in this.

We control these companies’ access to our schools. We do not need to grant unfettered access to the students we serve in order to transform them into “good little consumers” of their products.

Instead, we can ensure that students understand the real consequences, and even explore potential future consequences of these technologies. We can teach students about the moral failings of Silicon Valley and Big Tech, because there is certainly enough history there now.

Rodney Brooks wrote: “Moral leadership is the first and biggest challenge” and that is especially true for educators and educational leaders. 

Moral leadership for educators means:

-not accepting the glorious predictions of future technological feats by the Seers of Silicon Valley as gospel, and certainly not reforming what we do based on such drivel.

-not accepting the adoption of their latest gadgets, including AI, as a moral imperative just because their promotional marketing says so. There is no moral imperative to adopt these.

-thoughtfully and critically assessing anything that these tech companies and their promoters say and offer BEFORE subjecting students to their wares. (This is totally lacking among educators and educational leaders.)

-most of all, calling out the hype and marketing tactics being used to promote these technologies for profit and self-aggrandizement.

Educators and educational leaders are too trusting of this entire industry. They should not be. They need to step up and take on the moral leadership role, not that of Silicon Valley tech cheerleader.

Monday, March 23, 2026

AI Is Not the Problem in Education: An Unthinking, Uncritical Ed Tech Industry Is the Problem

In education, AI is not the problem...

An Ed Tech industry and Ed Tech consultants are the problem.

Both of these groups have uncritically accepted the promotional rhetoric of Big Tech and its unsubstantiated promises as gospel, and are working overtime to subject students to a technology that has not been around long enough to prove itself.

For years, Ed Tech consultants have unquestioningly followed every new gadget that comes out of Silicon Valley and immediately engaged in the same promotional rhetoric. They did it with the PC, with the web, with Web 2.0 and social media, and what do we have to show for it?

Ed Tech evangelists and consultants try to manipulate educators by framing any refusal of their wares as a failure to provide students with what they need, as if they have some kind of crystal ball. They don’t. Why would we gamble a child’s future on the same tired promo-rhetoric Ed Tech uses over and over again?

Ed Tech evangelists and consultants try to manipulate educators by framing any refusal of their tech gadgets as a path to obsolescence or irrelevance. They are wrong. Again, relevance and purpose can be found with or without technology. It is not black and white, as they would have it.

Ed Tech evangelists and consultants offer a pathway of ease and efficiency, and any refusal of that path is cast as backwards. It’s not. What if ease and efficiency fundamentally deform what one does? What if the path to ease leads to a distorted world, when what is truly worthwhile takes time, effort, and tedious work?

AI is not the problem, but Ed Tech evangelists and Ed Tech consultants are making it a problem. They are pushing AI like a cure-all drug without any critical thought about what it will do to us long term. They also ignore the ethical and sustainability questions the technology raises.

The cure for this problem is to ignore the Ed Tech promotional rhetoric and be sober about the possibilities. If AI survives, it will do so because it truly is useful.

Thursday, March 19, 2026

Should We Subject Our Students to AI Products as They Now Exist? There Are Reasonable Objections

What is most objectionable about the current iterations of AI we have available? Here is what stands out:

AI has been developed by Silicon Valley companies with questionable motives and by Silicon Valley CEOs who have repeatedly demonstrated that they will sacrifice the well-being of everyone and the world community for profit. Their ethics are aligned with selfish gain. That will lead to an AI that ultimately serves their ends and not anyone else’s—just look at what has happened to the web and social media, as well as all smart technologies.

Another objection has to do with the drive to sacrifice the environment and natural resources at all costs in their pursuit of profit. Their push to build massive server farms is depleting water supplies, forcing more fossil fuel use, and consuming vast amounts of resources to create a monster that will perpetually consume more and more, pushing human needs aside.

Still another objection is that Silicon Valley and AI creators are pushing full steam ahead in creating a machine that can further pollute the world with misinformation and so-called “AI slop,” pushing people further into a schizophrenic world in which they are lost and unable to experience the world as it is.

Next, AI is also objectionable because it is a misguided, Frankensteinian effort to re-create human intelligence, an attempt to replicate ourselves. Such efforts rarely end well, as history and our own literature tell us, even if replication is possible. This re-creation of “human intelligence” is being attempted without any clear definition of what such intelligence is. In other words, Silicon Valley is creating intelligence as it imagines it to be, which is problematic because its creators do not share our human values.

Finally, AI offerings today are objectionable because there is an intense lack of trust when it comes to sharing any more data with companies like OpenAI, Anthropic, or Google. Silicon Valley companies have not been great stewards of what we have shared with them, using our own data to profit while making us less safe. These companies would sacrifice your data well-being in a minute for profit, and they’ve proven it.

When I advocate caution or even resistance to Ed Tech AI evangelism and AI generally, it is usually due to these objections. Silicon Valley has proven untrustworthy most of all, and I would not be further complicit by connecting these companies to an even greater data source: our students, freely sharing information with their products.

Monday, March 16, 2026

Social Media and the National Enquirer Condition

Social media sites like LinkedIn suffer from what I would call the "National Enquirer Condition” (NEC). That's why the information offered on social media must be read with a highly critical eye. Social media has become the new 21st-century tabloid.

The National Enquirer, if you remember, is a tabloid that uses sensational headlines and photo covers to lure and entice grocery shoppers into picking up and purchasing its so-called news magazine.

Its content mattered only insofar as it could attract eyeballs.

Social media suffers immensely from NEC, not because it provides a platform for quality content, but because it provides a platform for spreading content that engages, where truth does not matter and neither does quality.

What matters is whether or not you focus on eyeball attraction above all else. Quality and truth are secondary.

Post every day, even if you have nothing to say, and the machine will spread your content like a manure spreader.

The end result of the National Enquirer Condition?

Social media platforms become malarkey megaphones. All content is degraded and tarnished. Promotion is the game, not having something worthwhile to say.

And if you still don’t get enough eyeballs by gaming the Enquirer algorithm, you can always pay to spread your content as well.