Friday, March 27, 2026

Being a Moral Leader When It Comes to Technology Integration and Adoption

The Meta and YouTube lawsuits, in which their platforms were found to engineer addiction and cause great harm, mark the first time that these Silicon Valley companies have been unable to hide behind the so-called “platform shield.”

In the book “Possible Minds: 25 Ways of Looking at AI,” computer scientist Rodney Brooks suggests that all these dangers we face with our technologies are due to how we have chosen to “engineer computation.”

For example, the constant virus threat we face and the threat of user-data exploitation are the result of computational engineering decisions made by individuals with short-term profit and self-gain interests, not long-term vision.

In other words, Silicon Valley and Big Tech have repeatedly made engineering choices that have given us a computational world of nastiness, with threats of all kinds. These choices have brought us computer viruses and data exploitation, along with rich Silicon Valley CEOs like Mark Zuckerberg who totally lack any moral leadership qualities.

Facebook and YouTube are just two companies that have been caught with engineered addiction platforms that actually harm users. There are others, and we, including educators and educational leaders, are complicit in allowing them to hide behind their platforms. Rodney Brooks writes:

“The computational platforms have become a shield behind which some companies hide in order to inhumanly exploit others.”

These companies manipulate and profit from their engineered platforms of addiction and data exploitation, and yes, we as educators are complicit.

Which makes me want to ask this question: Can we trust Silicon Valley and Big Tech, once again, with their latest inventions, large language models and all manner of artificial intelligence technologies?

Their track record sucks. The whole tech sector has transformed into a ghoulish industry, searching for new ways to exploit users.

Among the industry, business leaders, and most especially among educators and educational leaders, there has been a TOTAL LACK OF MORAL LEADERSHIP and restraint when it comes to these technologies.

Let’s face it, Silicon Valley has become the “Sodom and Gomorrah” of our age. No moral leadership seems to exist. “If it makes money, do it, and to hell with any unforeseen consequences,” is the thinking. After all, it was Facebook that touted the adage “Move Fast and Break Things,” and they have, repeatedly.

But educators and educational leaders, of all people, who have children in their care, should be the moral leaders in this.

We control these companies’ access to our schools. We do not need to grant them unfettered access to the students we serve in order to transform those students into “good little consumers” of their products.

Instead, we can ensure that students understand the real consequences of these technologies, and even explore their potential future consequences. We can teach students about the moral failings of Silicon Valley and Big Tech, because there is certainly enough history there now.

Rodney Brooks wrote: “Moral leadership is the first and biggest challenge,” and that is especially true for educators and educational leaders.

Moral leadership for educators means:

-not accepting the glorious predictions of future technological feats by the Seers of Silicon Valley as gospel, and certainly not reforming what we do based on such drivel.

-not accepting the adoption of their latest gadgets, including AI, as a moral imperative simply because their promotional marketing says so. There is no moral imperative to adopt these tools.

-thoughtfully and critically assessing anything that these tech companies and their promoters say and offer BEFORE subjecting students to their wares. (This is totally lacking among educators and educational leaders.)

-most of all, calling out the hype and marketing tactics being used to promote these technologies for profit and self-aggrandizement.

Educators and educational leaders are too trusting of this entire industry. They should not be. They need to step up and take on the role of moral leader, not Silicon Valley tech cheerleader.

Monday, March 23, 2026

AI Is Not the Problem in Education: An Unthinking, Uncritical Ed Tech Industry Is the Problem

In education, AI is not the problem...

The Ed Tech industry and Ed Tech consultants are the problem.

Both of these groups have uncritically accepted the promotional rhetoric of Big Tech and its unsubstantiated promises as gospel, and are working overtime to subject students to a technology that has not been around long enough to prove itself.

For years, Ed Tech consultants have unquestioningly followed every new gadget that comes from Silicon Valley and immediately engaged in the same promotional rhetoric. They did it with the PC, with the web, with Web 2.0 and social media, and what do we have to show for it?

Ed Tech evangelists and consultants try to manipulate educators by framing any refusal of their wares as a failure to provide students what they need, as if these consultants have some kind of crystal ball. They don’t. Why would we gamble a child’s future on the same tired promo-rhetoric Ed Tech uses over and over again?

Ed Tech evangelists and consultants also try to manipulate educators by framing any refusal of their tech gadgets as risking obsolescence or irrelevance. They are wrong. Again, relevance and purpose can be found with or without technology. It is not as black and white as they would have it.

Ed Tech evangelists and consultants offer a pathway of ease and efficiency, and any refusal of that path is seen as backwards. It’s not. What if ease and efficiency fundamentally deform what one does? What if the path to ease leads to a distorted world, when what is truly worthwhile takes time, effort, and tedious work?

AI is not the problem, but Ed Tech evangelists and Ed Tech consultants are making it a problem. They are pushing AI like a cure-all drug, without any critical thought about what it will do to us long term. They also ignore the ethical and sustainability questions the technology raises.

The cure for this problem is to ignore the Ed Tech promotional rhetoric and be sober about the possibilities. If AI survives, it will do so because it truly is useful.

Thursday, March 19, 2026

Should We Subject Our Students to AI Products as They Now Exist? There Are Reasonable Objections

What is most objectionable about the current iterations of AI now available? Here’s what stands out:

AI has been developed by Silicon Valley companies with questionable motives and by Silicon Valley CEOs who have repeatedly demonstrated that they will sacrifice the well-being of everyone, and of the world community, for profit. Their ethics are aligned with selfish gain. That will lead to an AI that ultimately serves their ends and no one else’s; just look at what has happened to the web, social media, and all manner of smart technologies.

Another objection has to do with the willingness to sacrifice the environment and natural resources at all costs in the pursuit of profit. The push to create massive server farms is depleting water supplies, forcing more fossil fuel use, and consuming vast amounts of resources to create a monster that will perpetually consume more and more, pushing human needs aside.

Still another objection is that Silicon Valley and AI creators are pushing full steam ahead in creating a machine that can further pollute the world with misinformation and so-called “AI slop,” pushing people further into a schizophrenic world where they are lost and unable to experience the world as it is.

Next, AI is also objectionable because it is a misguided, Frankensteinian effort to re-create human intelligence, to replicate ourselves. Such efforts rarely end well, as history and our own literature tell us, even if they are possible. This re-creation of “human intelligence” is being attempted without any clear definition of what such intelligence is. In other words, Silicon Valley is creating intelligence as it imagines intelligence to be, which is problematic because these companies do not share our human values.

Finally, AI offerings today are objectionable because there is an intense lack of trust when it comes to sharing any more data with companies like OpenAI, Anthropic, or Google. Silicon Valley has not been a good steward of what we have already shared, using our own data to profit while making us less safe. These companies would sacrifice your data well-being in a minute for profit, and they’ve proven it.

When I advocate caution or even resistance to Ed Tech AI evangelism and AI generally, it is usually due to these objections. Silicon Valley has proven untrustworthy most of all, and I will not be further complicit in connecting these companies to an even greater data source: our students, freely sharing information with their products.

Monday, March 16, 2026

Social Media and the National Enquirer Condition

Social media sites like LinkedIn suffer from what I would call the “National Enquirer Condition” (NEC). That’s why the information offered on social media must be read with a highly critical eye. Social media has become the new 21st-century tabloid.

The National Enquirer, if you remember, is a tabloid that uses sensational headlines and cover photos to lure and entice grocery shoppers to pick up and purchase its so-called news magazine.

Its content was only as important as its ability to attract eyeballs.

Social media suffers immensely from NEC, not because it provides a platform for quality content, but because it provides a platform to spread content that engages, where truth does not matter, nor does quality.

What matters is eyeball attraction above all else. Quality and truth are secondary.

Post every day, even if you have nothing to say, and the machine spreads your content like a manure spreader.

The end result of the National Enquirer Condition?

Social media platforms become malarkey megaphones. All content is degraded and tarnished. Promotion is the game, not having something worthwhile to say.

And if you still don’t get enough eyeballs gaming the Enquirer algorithm, you can pay to spread your content as well.

Friday, March 13, 2026

Watch Out for AI Snake Oil Salespersons and This One Tactic

I’ve noticed a recent AI promotional tactic that AI Evangelists have been employing with increasing frequency. (It’s used heavily with other products and technologies too.) It goes like this…

AI is not the problem…

_________ is the problem.


(Insert in the blank whatever object, service, or notion that is being promoted).

For example, if I were selling a consultancy that helps schools develop AI policy, I would say the following:

“AI is not the problem,

Lack of sound AI policy is the problem.”

But there is a deception in this promotional tactic that the savvy leader needs to know about. It uses the tactic of “inoculating the target against any idea or misgiving that AI has problems.” The statement “AI is not the problem” immediately tries to place AI beyond question. That’s deceptive.

And it is not true. AI has plenty of problems, technically, morally, and ethically, and much has been and is being written about them. There are also problems inherent in these products, but by immediately deflecting attention from the issues, the tactic prevents one from even going there, and it focuses attention on the product being sold.

If a product promotion requires deceptive and manipulative practices to make a sale, is the product really worth it? But perhaps that’s just modern sales for some. 

Watch and be critical at all times. AI snake oil salespersons abound.


Tuesday, March 10, 2026

When It Comes to AI, the Field of Ed Tech Acts Like a Fundamentalist Religion

Has the field of Ed Tech become like a “fundamentalist” religion? In some ways it has.

Ed Tech as a field appears at times to take it on faith that there are no instructional and educational problems that can’t be solved by technology.

As a corollary to this solutionistic view of technology, any technological gadget or invention has some kind of application in schools, if only it can be found. And it is the responsibility of all educators to integrate these gadgets, otherwise they are going to be left behind.

At its core, Ed Tech is in some ways like a fundamentalist religion. It requires that one hold these two articles of faith at all times. It dismisses any questioning of technology’s central place in education.

If one questions a new technology or whether it really has application in schools, that person is declared a heretic or an obstacle to progress. There is no room for dissent.

This, it seems, is just as Big Tech, Ed Tech, and consultants would have it. What better way to invent, market, and ensure adoption and their own prosperity? If critical talk about technologies such as AI is short-circuited from the beginning, then the beneficiaries of that tech win.

The problem is, sometimes students lose, due to negative consequences that are experienced only years later.


Monday, March 9, 2026

Why Talk About Ed Tech Integration Is a Bad Idea

Why is all the talk about integrating tech into education a bad idea? Here’s why.

The issue is the idea of “integrating.” To “integrate” means “to combine (one thing) with another so that they become a whole.”

This notion of “integrating” implies that teaching, learning, and educating are somehow “incomplete,” not whole, and that the tech to be integrated is somehow AUTOMATICALLY going to bring about that wholeness. Not so, as history has shown us many, many times.

To speak of “integrating” a tech is to assume it is whole and sufficiently able to offer a solution to whatever instructional problem ails the teaching act. Often, these technologies are not whole by themselves and they come bundled with a whole host of unintended and sometimes nasty consequences. (That just means the teacher now has to spend inordinate amounts of time addressing these side effects.)

Instead, the Ed Tech conversation should always be about ADOPTION. This immediately reframes the entire Ed Tech conversation. 

Ed Tech companies would help education even more if they designed their products as a solution to specific problems, instead of wasting time trying to get teachers to find ways to make their products useful and legitimate.

Their products should be solutions to specific educational problems, not solutions in search of educational problems to solve.

The reason the whole Ed Tech goal should be adoption instead of integration is that the “act of adopting” places the teacher as an AGENT in the process. No longer are teachers subjected to Ed Tech; they choose the tech tools they need.

Educators as “adopters” have the power to investigate technologies, ask the tough questions, and, if they find a technology inadequate as a solution, veto it.

In the ED TECH ADOPTION model, the teacher is empowered to make decisions about the tools they will use or not use.

In the contrast between Ed Tech integration and adoption, a tech solution is truly evaluated for its usefulness in specific teaching situations.