Saturday, April 18, 2026

If AI Can Do It, Then Maybe It Doesn't Need to Be Done

Perhaps a new way of thinking about LLMs in the classroom:

"If GenAI can do it, perhaps it doesn't really need to be done."

AI doesn't think and can't create. It just regurgitates what other people created and wrote.

Who needs AI vomit anyway?

Wednesday, April 15, 2026

Just Maybe If AI Can Do It, It Might Not Be Needed

Just a thought: If GenAI and LLMs can write it, does that writing even need a human writer?

It might also be that the writing is not needed at all.

Think about a novel written by AI, or a poem. Is it needed? I read novels because of "authors," but I suppose I could read them for other reasons. Still, I doubt I would ever read one because AI wrote it, except out of curiosity.

AI slop by its nature does not even need a human. It might not even need to exist.

The question is to figure out which writing needs to have a human writer.

All these politicians can send me all the AI-generated text messages and emails they want. I don't read them anyway.

I received an AI sales phone call yesterday spoofing a real person's name. Once I realized it was AI, I hung up, which took less than five seconds.

Ultimately, AI slop only has any status if we as readers, listeners, or viewers decide that it does.

I Welcome the Death of the Five-Paragraph Essay and All Standardized Deformed Learning

I have to admit that since GenAI can easily generate a five-paragraph essay on any topic, that fake writing format dies a welcome death.

Why did we teach such nonsense? In the 1990s, in all their wisdom, our policymakers and educational leaders decided that we English teachers needed to be teaching writing (as if we were not), so they developed a writing test with standardized rubrics and all that garbage.

"If you are going to measure writing effectiveness, you have to have standards to measure by," they said.

But measuring writing is like measuring a sunset or a waterfall or a mountain stream. Go ahead and develop your standards, but look at the deformity you create.

Naturally, when you standardize any aspect of writing, you stupefy it and create some kind of monstrosity. In this case? The mutant five-paragraph essay.

We taught this because our educational leaders demanded it with their accountability assessments, even though in our hearts we knew that true writing can't be standardized. Our administrators demanded "accountability" and wanted "high test scores" for personal boasting. They always seem to need those "measures" to prove their necessity.

I will acknowledge this positive outcome of GenAI and LLMs: If AI blows up anything, it can destroy this notion of standardizing educational tasks. It has always been nonsense and it still is, so go ahead, GenAI, and blow it all up.

If AI can do it, then let's finally have students engage in authentic learning tasks that AI is ill-equipped to do completely.

Of course, with standardized tasks out the window, our educational leaders can no longer compare outcomes and boast of "getting those scores up," but that is a good thing. The true measure of what we learn has never fit a bubble sheet or a rubric.

Finally, if unintentionally, GenAI might just make it possible, at least in this sense, to ask students to learn how to do real writing, and educational leaders might have to find some other measure of their own effectiveness.

Lessons Learned: Preventing Companies from Keeping You and Your Institution Locked In

Since my decision to discontinue my use of Evernote after Bending Spoons eliminated the plan for personal users, I have found my replacement: the Notes app built into macOS.

The macOS Notes app successfully captures what I wanted from my note-taking and the other tasks I was doing in Evernote. It turns out that, with some modifications, I can do all that I was doing with Evernote.

For example, while the Notes app does not have “notebooks,” its “Folders” feature functions in the same manner. You can gather connected documents into a folder and tag the folder. You can scan documents, insert documents, insert audio recordings, and so on.

Basically, Notes looks very much like Evernote did before Bending Spoons acquired it and began adding bloatware in order to charge customers more.

I suppose Bending Spoons did me a favor. I was really paying to use Evernote when I did not need it. The simple solution was right there all the time.

Sometimes the solution to our problems is already there, and when it comes to tech solutions, it’s often not the product bloated with expanded features; it’s the simple one.

Sometimes the “Keep It Simple” adage is best, and app developers would do well to remember that adding features does not always equate to value for current users. Keep your current users in mind and don’t add features that degrade their experience of your product. That is, if you have any loyalty to your current customers.

Keep adding bloated features that pull your product away from what your legacy and original users want, and you can expect those users to exit when the costs get too high and your product is superseded by a solution that captures what they want to do.

On the flip side of things, all users would do well to prevent themselves from getting “locked-in” with apps and tech products. Keep yourself flexible and portable so you can relocate at any point the app developer stops providing the product you want and need.

Figure out a way to transfer those app escape costs back to the app developer where they belong. 

After all, they are trying to engineer their products to keep you “locked in” as a user. With some anti-lock in measures, you can keep that from happening.
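On that note, one practical anti-lock-in measure is keeping your notes in a format you can walk away with. Evernote, for example, exports to ENEX, which is ordinary XML, so a short script can turn an export into plain, portable text. Here is a minimal sketch in Python, assuming a standard ENEX export; the `enex_to_text` function name and the crude tag-stripping are my own illustration, not an official tool:

```python
# Minimal sketch: convert notes from an Evernote ENEX export into
# plain, portable records so they can be moved to any other app.
# The function name and the crude markup-stripping are illustrative only.
import re
import xml.etree.ElementTree as ET

def enex_to_text(enex_xml: str) -> list[dict]:
    """Parse an ENEX export string into a list of portable note dicts."""
    notes = []
    root = ET.fromstring(enex_xml)
    for note in root.iter("note"):
        title = note.findtext("title", default="Untitled")
        raw = note.findtext("content", default="")
        # ENEX note content is XHTML wrapped in CDATA; strip the markup crudely.
        body = re.sub(r"<[^>]+>", " ", raw)
        body = re.sub(r"\s+", " ", body).strip()
        tags = [t.text for t in note.findall("tag") if t.text]
        notes.append({"title": title, "body": body, "tags": tags})
    return notes
```

Run it on the contents of an exported `.enex` file and write each dict out as a text or Markdown file; from there, nearly any notes app can import your data, and the escape cost drops to almost nothing.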

Sunday, April 12, 2026

Why Evernote Note Taking App Users Need to Cancel and Delete Their Accounts Now

There is an important reason why anyone with a personal Evernote account should delete that account and find an alternative now.

Bending Spoons, which acquired Evernote in 2023, has recently changed its plan offerings for personal users, and both new plans are unacceptable.

There is the “Starter Plan,” which imposes draconian limits on content and device use. This is a problem for someone who has been using Evernote for more than 10 years and who can’t use this plan without deleting content wholesale. It also eliminates one of the major reasons I use the application: the ability to use it across all the devices I want. This plan limits you to 3 devices.

The other plan offering, the “Advanced Plan,” is basically what I have now, with unlimited content and devices, but it costs over 100 dollars more per year. I'm sorry, but I do not see Evernote's value increasing that much in one year.

Now, I acknowledge that when you go in and start to cancel your subscription, Bending Spoons offers you a one-time $100 discount, which brings it back to $149, but that alone should be a red flag. Why offer only two plans, and then a one-time discount? Do they want to keep me hooked for one more year to get me further locked in as a user? That’s dishonest business in my thinking, but typical of Silicon Valley and Big Tech.

Why would I spend another year uploading more content to Evernote, only to find myself in the same situation next year, with even more content? Perhaps Bending Spoons is gambling that I would take the additional year, and that with even more content I would be so invested that I would be forced to continue using Evernote. Not happening with this user.

Another reason to move on from Evernote is that they are apparently using the “Microsoft Product Design Playbook”: add a bunch of features so you can ultimately charge more, because locked-in users won’t go anywhere. This includes adding a gazillion features that users haven’t asked for or wanted, then charging those users more. Microsoft so bloated Windows with “features” that I left their product behind a long time ago, and I am doing the same with Bending Spoons’ Evernote.

I have exported all my content. I have cancelled my subscription. I will delete my account and move on. Bending Spoons could have continued the Personal Plan option, but they gambled and lost with me.

One thing Bending Spoons should learn, just like Microsoft: you can’t treat customers like crap. And don’t assume that added features like AI and video transcription are what all your customers want and will pay for. Not all users want new bells and whistles, especially long-time users who found your product versatile and reliable and have now been dumped on by the company.

Evernote has been deformed beyond use for me by Bending Spoons, and even though they brag on their website that they “acquire and improve iconic products,” they certainly failed in this case. Time to move on and find another solution.

Saturday, April 11, 2026

Evernote Is History with Me: They Have Lived up to Doctorow's Notion and Have Become Enshittified

The enshittification of Evernote has come to pass.

I have used Evernote for over 10 years. They have tweaked it well sometimes, and sometimes not so well, but I have used it all those years to store my reading and writing notes.


That, unfortunately, ends today.


Evernote changed its plans and recently increased its yearly subscription price by 50% if you keep what you have; otherwise, you choose a crappy plan with draconian limits placed on your number of notes, notebooks, and devices. ENSHITTIFICATION AT ITS BEST.


I suppose they have to pay for their AI gamble, which I never used anyway.


Cory Doctorow really got it when he coined this term. The only way out is to delete my account.


What's worse, I put in a ticket to question their plans and increases, and EVERNOTE JUST SENT ME AN EMAIL GIVING ME INSTRUCTIONS ON HOW TO EXPORT MY CONTENT AND CANCEL MY SUBSCRIPTION.


After I do that, I'm out. Evernote is history with me.


UPDATE: After I posted this, I downloaded all my content from Evernote. Then I logged into my account to cancel my subscription of over 10 years.


Once I clicked the Cancel button, a pop-up appeared offering my current options for $149 per year instead of the $249-per-year increase. That is Doctorow’s “enshittification” personified!


The email Evernote sent me DECEPTIVELY offered two options: 1) a Starter option (with draconian content and use limits) for $129 per year and 2) an Advanced option for $249 per year that kept all my current features.


Those are poor and unethical business practices in my thinking.


I cancelled my long-time subscription anyway. Who knows how Evernote will treat its users next year now that it has become enshittified!


As an added bit of irony, Bending Spoons, the company that now owns Evernote, boasts on its website: “We acquire and improve iconic products.” Perhaps that would better read: “We acquire and enshittify iconic products.”



Thursday, April 2, 2026

The Next Time You Hear a School Leader Say "AI Is Not Going to Replace Teachers, It Will Replace Teachers Not Using AI" Think

If "AI is not going to replace teachers, but will replace teachers who do not use AI," perhaps we should really look at that statement, used by many school leaders pushing for this technology in their schools.

It says a great deal.

1-This school is authoritarian. You must use AI, even if you have proven to be effective without it. If you don't, I will replace you.

2-AI is your savior, accept it, or be gone.

3-Keep your opinions to yourself; they don't matter.

4-No room for critical thought or discussion about the use of AI in this school. Just do it.


When school leaders and AI advocates use this language, they hide their own authoritarian leadership style behind a statement designed to generate fear.


I would question whether I would want to even teach in a school operated by such dictatorial tactics.



Educators Need to Teach True AI and Technology Literacy

Should we be afraid of AI? If you listen to the Seers of Silicon Valley, we should be shaking in our boots. AI is going to displace us from our jobs, turn us into Duracell batteries, and turn us into gurgling, nonthinking imbeciles sitting in our homes with technology waiting on us hand and foot.

Not true. Besides, our Seers have gotten much wrong in the past, so why would we expect the Bill Gateses, Alex Karps, or Sam Altmans of the world to have access to anything that resembles our future? Their wealth and futures are entirely dependent upon the fate of their now-favorite technology. That has always been the case.

My real concern here is not with their self-serving prognosticating nonsense, but with what we as educators should be doing if we really give a damn about what is being called “AI Literacy.” 

As a part of “AI Literacy” we should be teaching students the real function of these stories and to see them for what they really are and do. For starters:

1-They make it seem like there is only one possible direction for the development of AI, their chosen route. Not so.

2-We are powerless to do anything about it, and must accept the AI they have provided for us. Not really.

3-They purposely hide who is really going to win and benefit from AI, which includes them and all the minions and bottom-feeders gathering the scraps that fall from their table.

4-The Seers prevent any public debate about their version of AI, and curtail any questioning of the goods they are delivering. That’s Silicon Valley marketing tactics at their best.

5-They also prevent any questioning of the massive resource shift (water, power, minerals, human resources) to their benefit at the expense of everyone else. They are stealing resources for their own wealthy gain.

If we are going to teach students anything about AI, it should be to teach them critical thinking instead of turning AI into an object of worship. We did that with the PC, the Web, and social media, and are reaping the results.

Any true technology literacy teaches students about all aspects of every technology we use.

As educators, our responsibility is not to generate unquestioning users and consumers for the products developed by the Seers of Silicon Valley.

Our responsibility should transcend making students consumers of technology; it should be empowering them to shape the future with or without technologies. This is done by giving them the gift of critically analyzing what the Seers are saying and not saying.

At least by doing that, we keep our students from becoming the tools of the technologies they use. 

That’s AI literacy and technology literacy at its best!


Tuesday, March 31, 2026

Empowering Others Through True Technological Literacy

Our handheld devices and technologies are a problem, for ourselves and for our children.

Our worlds are now delivered to us by our devices. Because we insist on a world, fitted to specification, personalized according to our beliefs, tastes, and opinions, this “delivered world” as philosopher of technology Gunther Anders called it in the 1950s, is brought to us by our own technologies.

We now no longer have to venture out into the world for ourselves, so, as Anders points out, we remain “inexperienced.” 

But, as our rationale goes, venturing out and experiencing the world is inefficient; it’s inconvenient; it’s messy; it’s complicated and uncertain. That’s why we prefer its home delivery through our handheld devices.

Once, we had no choice. Life was a “journey of discovery.” We went out into it because that’s what we had to do, and followed its paths wherever they led. We encountered many things, much of it not anticipated. We experienced the world for ourselves.

Now, with our gadgets in hand and around us, we allow them to lead us down the paths they have determined for us. Again, this is much easier and more efficient because no time is wasted on deliberation or choosing. But we also give up a world of choice.

Our devices present us with what Anders called an “effigy” of the world. This is a crude model, assembled by algorithms, designed to know better than we do what we want.

There is no longer any need to journey out and experience for ourselves, because our efficient technologies do all that for us.

This “home-delivered world” described by Anders is where we have now chosen to live.

Perhaps it is possible to disrupt the grip of this home-delivered existence by refusal, by resistance.

Putting down our devices and taking a walk around the neighborhood or reading a novel in the form of a physical book might be a start. Turning off the notification machines in our pockets is another. There are many.

By doing these things, we “dethrone the devices” in our lives and refuse home-delivery.

By dethroning devices in a child’s education we reconnect them to experience and teach them to refuse the home-delivered world. 

That’s technological literacy, empowering others to choose the terms for living themselves.

Friday, March 27, 2026

Being a Moral Leader When It Comes to Technology Integration and Adoption

The Meta and YouTube lawsuits, in which their platforms were found to engineer addiction and cause great harm, mark the first time that these Silicon Valley companies have been unable to hide behind the so-called “platform shield.”

In the book “Possible Minds: 25 Ways of Looking at AI,” computer scientist Rodney Brooks suggests that all these dangers we face with our technologies are due to how we have chosen to “engineer computation.”

For example, the constant virus threat and the user-data exploitation we face are the result of computational engineering decisions made by individuals with short-term profit and self-interest in mind, not long-term vision.

In other words, Silicon Valley and Big Tech have repeatedly made engineering choices that have given us a computational world of nastiness, with threats of all kinds. These choices have brought us computer viruses and data exploitation, along with rich Silicon Valley CEOs like Mark Zuckerberg who totally lack any moral leadership qualities.

Facebook and YouTube are just two companies that have been caught with engineered addiction platforms that actually harm users. There are others, and we, including educators and educational leaders, are complicit in allowing them to hide behind their platforms. Rodney Brooks writes:

“The computational platforms have become a shield behind which some companies hide in order to inhumanly exploit others.”

These companies manipulate and profit from their engineered platforms of addiction and data exploitation, and yes, we as educators are complicit.

Which makes me want to ask this question: Can we trust Silicon Valley and Big Tech, once again, with their latest inventions, large language models and all manner of artificial intelligence technologies?

Their track record sucks. The whole tech industry has transformed into a ghoulish industry, searching for new ways to exploit users.

Among the industry, business leaders, and most especially among educators and educational leaders, there has been a TOTAL LACK OF MORAL LEADERSHIP and restraint when it comes to these technologies.

Let’s face it, Silicon Valley has become the “Sodom and Gomorrah” of our age. No moral leadership seems to exist. “If it makes money, do it, and to hell with any unforeseen consequences,” is the thinking. After all, it was Facebook that touted the adage “Move Fast and Break Things,” and break things they have, repeatedly.

But educators and educational leaders, of all people, who have children in their care, should be the moral leaders in this.

We control these companies’ access to our schools. We do not need to grant unfettered access to the students we serve in order to transform them into “good little consumers” of their products.

Instead, we can ensure that students understand the real consequences, and even explore potential future consequences of these technologies. We can teach students about the moral failings of Silicon Valley and Big Tech, because there is certainly enough history there now.

Rodney Brooks wrote: “Moral leadership is the first and biggest challenge” and that is especially true for educators and educational leaders. 

Moral leadership for educators means:

-not accepting the glorious predictions of future technological feats by the Seers of Silicon Valley as gospel, and certainly not reforming what we do based on such drivel.

-not accepting the adoption of their latest gadgets, including AI, as a moral imperative just because their promotional marketing says so. There is no moral imperative to adopt these.

-thoughtfully and critically assessing anything that these tech companies and their promoters say and offer BEFORE subjecting students to their wares. (This is totally lacking among educators and educational leaders.)

-most of all, calling out the hype and marketing tactics being used to promote these technologies for profit and self-aggrandizement.

Educators and educational leaders are too trusting of this entire industry. They should not be. They need to step up and take on the role of moral leader, not Silicon Valley tech cheerleader.

Monday, March 23, 2026

AI Is Not the Problem in Education: An Unthinking, Uncritical Ed Tech Industry Is the Problem

In education, AI is not the problem...

The Ed Tech industry and Ed Tech consultants are the problem.

Both of these groups have uncritically accepted the promotional rhetoric of Big Tech and its unsubstantiated promises as gospel, and are working overtime to subject students to a technology that has not been around long enough to prove itself.

For years, Ed Tech consultants have unquestioningly followed every new gadget that comes out of Silicon Valley and immediately engaged in the same promotional rhetoric. They did it with the PC, with the web, with Web 2.0 and social media, and what do we have to show for it?

Ed Tech evangelists and consultants try to manipulate educators by framing any refusal of their wares as failure to provide students what they need, as if they have some kind of crystal ball. They don’t. Why would we gamble a child’s future based on the same tired promo-rhetoric Ed Tech uses over and over again?

Ed Tech evangelists and consultants try to manipulate educators by framing any refusal of their tech gadgets as courting obsolescence or irrelevance. They are wrong. Again, relevance and purpose can be found with or without technology. It is not as black and white as they would have it.

Ed Tech evangelists and consultants offer a pathway of ease and efficiency, and any refusal of that path is seen as backwards. It’s not. What if ease and efficiency fundamentally deform what one does? What if the path to ease leads to a distorted world, when what is worthwhile takes time, effort, and tedious work?

AI is not the problem, but Ed Tech evangelists and Ed Tech consultants are making it one. They are pushing AI like a cure-all drug without any critical thought about what it will do to us long term. They also ignore the ethical and sustainability questions the technology raises.

The cure for this problem is to ignore the Ed Tech promotional rhetoric and be sober about the possibilities. If AI survives, it will do so because it truly is useful.

Thursday, March 19, 2026

Should We Subject Our Students to AI Products as They Now Exist? There Are Reasonable Objections

What is most objectionable about the current iterations of AI that we have available? Here is my list:

AI has been developed by Silicon Valley Companies with questionable motives and with Silicon Valley CEOs who have repeatedly demonstrated that they will sacrifice the well-being of everyone and the world community for profit. Their ethics are aligned with selfish gain. That will lead to an AI that ultimately serves their ends and not anyone else’s—just look at what has happened to the web and social media as well as all smart technologies.

Another objection has to do with the drive to sacrifice the environment and natural resources at all costs in their pursuit of profit. Their push to create massive server farms is depleting water supplies, forcing more fossil fuel use, and consuming vast amounts of resources to create a monster that will perpetually consume more and more, pushing human needs aside.

Still another objection is that Silicon Valley and AI creators are pushing full steam ahead in creating a machine that can further pollute the world with misinformation and so-called “AI slop,” pushing people further into a schizophrenic world where they are lost and unable to experience the world as it is.

Next, AI is also objectionable because it is a misguided, Frankensteinian effort to replicate ourselves by re-creating human intelligence. Such efforts rarely end well, as history and our own literature tell us, even if such replication is possible. This re-creation of “human intelligence” is being attempted without any clear definition of what such intelligence is. In other words, Silicon Valley is creating intelligence as it thinks intelligence is, which is problematic because they do not share our human values.

Finally, AI offerings today are objectionable because there is an intense lack of trust when it comes to sharing any more data with companies like OpenAI, Anthropic, or Google. Silicon Valley has not been a great steward of what we have shared with it, using our own data to profit while making us less safe. These companies would sacrifice your data well-being in a minute for profit, and they’ve proven it.

When I advocate caution or even resistance to Ed Tech AI evangelism and AI generally, it is usually due to these objections. Silicon Valley has proven untrustworthy most of all, and I would not do anything to be further complicit in connecting these companies to an even greater data source: our students freely sharing information with their products.

Monday, March 16, 2026

Social Media and the National Enquirer Condition

Social media sites like LinkedIn suffer from what I would call the "National Enquirer Condition" (NEC). That's why the information offered on social media must be read with a highly critical eye. Social media has become the new 21st-century tabloid.

The National Enquirer, if you remember, is a tabloid that used sensational headlines and cover photos to lure and entice grocery shoppers to pick up and purchase its so-called news magazine.

Content was only as important as its ability to attract eyeballs.

Social media suffers immensely from NEC, not because it provides a platform for quality content, but because it provides a platform to spread content that engages, where truth does not matter, nor does quality.

What matters is eyeball attraction above all else. Quality and truth are secondary.

Post every day, even if you have nothing to say, and let the machine spread your content like a manure spreader.

The end result of the National Enquirer Condition?

Social media platforms become malarkey megaphones. All content is degraded and tarnished. Promotion is the game, not having something worthwhile to say.

And if you still don’t get enough eyeballs by gaming the Enquirer algorithm, you can pay to spread your content as well.

Friday, March 13, 2026

Watch Out for AI Snake Oil Salespersons and This One Tactic

I’ve noticed a recent AI promotional tactic that AI Evangelists have been employing with increasing frequency. (It’s used heavily with other products and technologies too.) It goes like this…

AI is not the problem…

_________ is the problem.


(Insert in the blank whatever object, service, or notion is being promoted.)

For example, if I were selling a consultancy that helps schools develop AI policy, I would say the following:

“AI is not the problem,

Lack of sound AI policy is the problem.”

But there is a deception in this promotional tactic that the savvy leader needs to know about. It works by inoculating the target against any idea or misgiving that AI has problems. The statement “AI is not the problem” immediately tries to place AI beyond question. That’s deceptive.

And it is not true. AI has plenty of problems, technical, moral, and ethical, and much has been and is being written about them. There are also problems inherent in these products, but by immediately deflecting attention from the issues, the tactic prevents one from even going there and keeps the focus on the product being sold.

If a product promotion requires deceptive and manipulative practices to make a sale, is the product really worth it? But perhaps that’s just modern sales for some. 

Watch and be critical at all times. AI snake oil salespersons abound.


Tuesday, March 10, 2026

When It Comes to AI, the Field of Ed Tech Acts Like a Fundamentalist Religion

Has the field of Ed Tech become like a “fundamentalist” religion? In some ways it has.

Ed Tech as a field appears at times to take it on faith that there are no instructional or educational problems that can't be solved by technology.

As a corollary to this solutionist view of technology, any technological gadget or invention has some kind of application in schools, if only it can be found. And it is the responsibility of all educators to integrate these gadgets, or else they will be left behind.

At its core, Ed Tech is in some ways like a fundamentalist religion. It requires that one hold these two articles of faith at all times. It dismisses any questioning of technology's central place in education.

If one questions a new technology or whether it really has application in schools, that person is declared a heretic or an obstacle to progress. There is no room for dissent.

This is perhaps how Big Tech, Ed Tech, and consultants would have it. What better way to invent, market, and ensure adoption and their own prosperity? If critical talk about technologies such as AI is short-circuited from the beginning, then the beneficiaries of that tech win.

The problem is that sometimes students lose, due to negative consequences experienced only years later.


Monday, March 9, 2026

Why All the Talk About Ed Tech Integration Is a Bad Idea

Why is all the talk about integrating tech into education a bad idea? Here’s why.

The issue is the idea of “integrating.” To “integrate” means “to combine (one thing) with another so that they become a whole.”

This notion of “integrating” implies that teaching, learning, and educating are somehow “incomplete” or not whole, and that the tech to be integrated is somehow AUTOMATICALLY going to bring about that wholeness. Not so, as history has shown us many, many times.

To speak of “integrating” a tech is to assume it is whole and sufficiently able to offer a solution to whatever instructional problem ails the teaching act. Often these technologies are not whole by themselves, and they come bundled with a whole host of unintended and sometimes nasty consequences. (That just means the teacher now has to spend inordinate amounts of time addressing the side effects.)

Instead, the Ed Tech conversation should always be about ADOPTION. This immediately reframes the entire Ed Tech conversation. 

Ed Tech companies would help education even more if they designed their products as solutions to specific problems, instead of wasting time trying to get teachers to find ways to make their products useful and legitimate.

Their products should be solutions to specific educational problems, not solutions in search of educational problems to solve.

The reason the whole Ed Tech goal should be adoption instead of integration is that the “act of adopting” places the teacher as an AGENT in the process. No longer are they subjected to Ed Tech; they choose the tech tools they need.

Educators as “adopters” have the power to investigate technologies, ask the tough questions, and, if they find a technology inadequate as a solution, veto it.

In the ED TECH ADOPTION model, the teacher is empowered to make decisions about the tools they will use or not use.

In the contrast between Ed Tech integration and adoption, it is under adoption that a tech solution is truly evaluated for its usefulness in specific teaching situations.

Thursday, March 5, 2026

It's Time to Rethink the Teacher Shortage Problem and It Does Not Involve Pay

 Perhaps the real problem with the shortage of teachers is that fewer and fewer people want to do the work as it has evolved over the past 30 years or so. 

When I started teaching in 1989, teachers operated in classrooms that allowed for independent creativity, initiative, and excitement. There were no testing surveillance systems. You could operate without the intrusion of administrative experts and consultants who claimed to know how to teach content better than you. Parents were generally supportive of teachers and did not engage in antagonistic tactics against what you were doing. Usually, they came to you if there were problems, and the teacher could work with the parent.

Classrooms have become culture war zones. They are places where the teacher often receives less and less professional deference. Instead, there are so many voices out there saying, "No, you need to do it this way, not that." In short, teaching has been transformed into a mechanistic scientific-management task where one is surrounded by a troupe of experts all telling the teacher how to do the job. 

There is no art to teaching anymore, because the administrators and their cadre of experts have transformed the instructional act into a scientific management work task.

It's no longer rewarding to be a teacher. So the answer, it seems, is to focus on pay. Certainly you can find someone willing to do this work for the right pay, the idea goes. The problem is, apparently you can't pay enough, because fewer and fewer people want to do the teaching work of today at any price.

The reality is, teaching has lost what librarian-researcher Fobazi Ettarh calls “vocational awe.” 

Vocational awe is defined as a set of notions that librarians hold about their institution and themselves. To have vocational awe, workers have to believe in their institution's goodness and rightness. They also have to believe that their profession and the work they do are inherently good and sacred. In other words, workers believe their work is a calling, which means they will endure and persevere in their work tasks because of the good, sacred, and worthwhile big picture.

Teaching has lost this vocational awe. Schools are constantly labeled failing by everyone. Even administrators always focus on the negative, in an environment of so-called continuous improvement. In addition, the teacher's work is no longer seen as sacred or special, because it has been turned into tasks to be carried out scientifically. The teacher's institution and the teacher's work are fundamentally degraded by a system paranoiacally obsessed with trying to improve or change, in the worship of constant innovation.

What's more, administrators and school HR recruiters can no longer capitalize on "vocational awe" to fill teaching positions. That's because the "awe of teaching" and of "being a teacher" are gone. 

The profession of teaching has been destroyed by politicians who want to cut budgets and continuously impose new requirements on teachers. 

It has been decimated by administrators who think they know how to teach so well, they constantly intrude into classrooms with their so-called coaching and feedback, treating teachers as if they don’t know anything. 

The teaching profession has been decimated by a consultant industry made up of experts who say they know teaching better, even though some of them spent little time in the classroom, and some no time there at all. 

The teaching shortage problem will not be solved by pay alone. 

It will certainly not be solved by relying on the vocational awe myth any more because no one is buying it. 

The teaching problem will only be solved if those who have degraded the work of teaching to the point that no one wants to do it, no matter the pay, are convinced to change their ways. 

No one wants to be a teacher anymore because vocational awe no longer exists.

Wednesday, March 4, 2026

Teaching Students About AI or Any Technology Just Might Be Shortsighted and Morally Wrong

Should our schools be focused on training students how to use AI above all else? No. Here’s why…

In the 1990s, I taught at a high school located in an area where 3 major fiber optic manufacturers had set up shop, and they partnered with our schools to prepare students for the kinds of jobs they had to offer.

I attended multiple PD sessions, guided by district personnel and trainers from these three manufacturers. The goal was to train teachers to teach students the kinds of skills these manufacturers, and others like them, valued in employees.

I went back to my classroom and dutifully and conscientiously taught those skills because it was my job to teach students for the jobs in their future.

Fast forward 7 or 8 years later…the fiber optic industry tanked when demand fell. These manufacturers closed plants, merged and merged again, laid off workers, and shifted jobs to foreign countries. Many lost their jobs, perhaps even some of the students I had dutifully prepared for that future.

The point here is business and manufacturing often live and survive in the short term and the now. They no longer provide lifetime careers. If profits can be made by shifting manufacturing elsewhere, they move. That’s how it is.

As educators, to prepare students for any jobs that exist currently or even hypothetically in the future is also shortsighted and potentially morally wrong. The current job situation will change when companies find the grass greener elsewhere, and trying to teach skills for jobs whose existence we are trying to predict or guess about is gambling our students’ futures. That is wrong.

The Seers of Silicon Valley have gotten much wrong in the past. I bet their predictions about AI will be wrong as well, or at least far off the mark.

As educators, we need to teach students, not prepare them for theoretical futures. We need to teach them everything that will allow them to live, adapt, cope, and survive in uncertainty and to be decent, critical human beings.

Obsessively focusing on AI or any technology of the day is as shortsighted as the way most businesses currently operate. Sure, knowing what AI is, its faults, its capabilities, its limitations, and its effects on culture and the environment, is all needed, but it should not be placed at the center of all learning.

The point is, we do not need to do Silicon Valley's bidding and teach students to be dutiful users of AI or any technology; we need to teach way beyond that, to a world where AI has passed into banality and life has moved on to even greater things.

You Don't Have to Believe All Those Predictions About AI Because We've Been Here Before

There is a perfectly rational reason for discounting all the AI predictions of AI Evangelists and Ed Tech Consultants.

In the mid-90s, the internet zealots promoted the idea that the web was somehow going to "magically" bring us all together. It was what Vincent Mosco called "the Myth of the Death of Distance." The web was going to be the end of geography. Too bad it did not happen.

Even the economists got it wrong. It was Frances Cairncross of The Economist who wrote in her book "The Death of Distance":

With the web, people would be "free to explore different points of view, on the Internet or on the thousands of television and radio channels that will eventually be available. PEOPLE WILL BECOME LESS SUSCEPTIBLE TO PROPAGANDA from politicians who seek to stir up conflicts." (CAP EMPHASIS MINE)

What’s more she added this now laughable prognostication:

“Bonded together by the invisible strands of global communications, HUMANITY MAY FIND THAT PEACE AND PROSPERITY ARE FOSTERED BY THE DEATH OF DISTANCE.”

Boy, did she get it wrong, like so many other Silicon Valley Seers of salvation by technology. The only bonding that has taken place is between social media companies and our personal data.

The web and its demon spawn, social media, manufactured by a Big Tech more interested in getting extremely rich, have only made us more polarized and divided than we have ever been. Their algorithms are designed to shove into our eyeballs that which divides us, not that which brings us together.

As for the wonderful "bonds of community" to be wrought by the internet and its technologies with the "Death of Distance"? The only thing that has died is what little genuine human connection we had, among many other things.

So, when the AI Evangelists speak of the promise of not having to do those things we hate, when they boast that AI is the educational tool that is going to transform our profession, and when they claim that AI will someday figure out all our problems, can you understand why one should call them on this nonsense?

The best thing to do is to discount all the prediction nonsense, for no one ever provides the evidence. When they give us a massive list of jobs that will be replaced, consider it nonsense; they never provide any evidence for the assertion.

The last thing educators should do is gamble the lives of their students on all these AI prognostications being gospel. You can't prepare them for a world that does not exist yet, because no one knows what that world will be like, not even the Silicon Valley CEO Seers or the Ed Tech AI consultants.

Monday, March 2, 2026

Why Has the Web Become a Garbage Dump?

 Evidence that the internet is now a garbage dump?

As an early user of the web, I used to enjoy "surfing the web." This consisted of typing keywords into a search engine (yes, I am old enough to admit I used AltaVista, Yahoo, etc.) and reading through the results. It was a pleasurable experience. If it was a controversial topic, you often had both sides of the argument for your review.

You could enjoy seeing sites that were interested in conveying INFORMATION, not in gaming the algorithms to get their slop in front of searchers.

Today, surfing the web has become impossible. There's too much pooh, garbage, and sewage floating around. To use Cory Doctorow's term, the entire internet has been "enshittified."

The web is a sewer, a big garbage dump where whoever is willing to pay to get their slop in front of eyeballs gets an audience.

The pay-to-get-your-content-viewed model ignores whether such content is worthy of eyeball time at all. No wonder the internet slop problem is so bad.

When the web was transformed entirely into a money-making avenue, that was the death of the old web.

What was once touted as the "information highway" has become a massive garbage dispensary.

Too bad. Web surfing is a lost sport.


#EdTech #Internet #Education

Sunday, March 1, 2026

AI Educational Utopian Myths Abound: Be Skeptical

Check to be sure that you have not fallen for the utopian dreams of endless prosperity and freedom offered up by AI Evangelists and Ed Tech consultants. Those will turn out to be empty dreams.

Vincent Mosco wrote in his 2005 book "Digital Sublime: Myth, Power, and Cyberspace":

"American history in particular is replete with visions of technological utopia spun by mythmaking optimists." (p. 36)

Mosco captured in 2005 the same spirit as the so-called "Age of AI." Today, we still have an abundance of "mythmaking optimists" who peddle their "visions of technological utopia," now powered by AI. It is a myth.

Those optimists are at it again, as the Silicon Valley mob share their mythical visions of utopia. But it is an old story:

First, they brought promises of a utopian community through social media that has resulted in a world of massive polarization and division. False promise number one.

Second, they promised an internet that would provide us with knowledge at our fingertips, but instead they gave us a deformed web where paywalls and data extraction/exploitation must be the ransom paid before you receive that knowledge.

What Silicon Valley ultimately gives us are deformed, mutant versions of its utopian promises.

You can bet Silicon Valley's mythical vision of AI utopia will turn into a mutated version that somehow makes us all worse off.


#AI #EdTech #AIEducation #Education

Friday, February 27, 2026

Are Our Screens and Devices Harming the Very Students We Serve? Perhaps, Here's a Book to Spark Critical Thinking about Device Addiction in Schools

In order to disrupt the passive, uncritical acceptance of all things technological into schools, I recommend that school leaders and all educators add Jared Cooney Horvath's "The Digital Delusion: How Classroom Technology Harms Our Kids' Learning—And How to Help Them Thrive Again" to their reading list.

It really isn't about "banning all screens" in schools; it's about not allowing devices and tech to determine what happens in our classrooms and with our students.

Horvath rightfully captures how we as educators have been complicit in turning the control of education over to companies who have made big promises that have not panned out. In fact, the evidence is growing, despite dismissal by the tech evangelical movement, that there is some actual harm caused by this proliferation of technologies.

Don't forget, the smartphone and its apps, especially social media apps, are designed to be addictive and to "capture eyeballs," and we have invited these into our classrooms with open arms. 

Horvath is correct in his whole premise that we need to wrestle back control of our education system, our schools, our classrooms, and our instruction from devices.

It doesn't mean a complete ban; it means removing tech from its central pedestal on which we have placed it.

I could see using this book as a faculty-wide read with some powerful and lively discussions on the rightful place of technologies in our schools and in our lives.

Horvath even offers many hands-on ideas for implementing an EdTech Detoxification Process in schools, or even in our lives as parents.

If we are going to foster critical examination of EdTech and the constant flow of gadgets from Silicon Valley, this book is a good place to start.




The Label "Smart" Device Might Not Be a Good Thing: Read Jathan Sadowski's "Too Smart"

Here is a book to add to your critical-EdTech and critical-thinking-about-technology list, even though it goes back a bit, to 2020.

"Too Smart: How Digital Capitalism Is Extracting Data, Controlling Our Lives, and Taking Over the World" By Jathan Sadowski

Sadowski takes you through a critical overview of how companies are purposefully making their products "smart" in order to facilitate data extraction for exploitation purposes. 

When a device is labeled "smart" you can bet it is gathering data about you and not always for your benefit. 

  • Companies behind free consumer apps collect this data to sell. 
  • Insurance companies use this data against you in their pricing schemes and to manipulate your driving habits.
  • Government entities use it in their surveillance activities.

After reading this book, when a salesperson touts that a TV or a dryer is a "smart" device, you will not automatically see that as a plus. You will know that it is more a tactic of exploitation at best and manipulation at worst.

A lot of money has been spent on convincing us as consumers that the quality of being "smart" is a good thing for our devices. It is not.

Sadowski even suggests ways to disrupt and avoid all this, from turning off these features, or anything related to them, to purposefully sabotaging the whole smart enterprise.

There is a lot to be said for keeping parts of your life out of the reach of Big Tech. 




Tuesday, February 24, 2026

Why Promises of EdTech Disruption Fail: What Should Educators Do Instead

One thing educators can expect—a continuous barrage of new product pitches that claim to have disruptive and transformative abilities—and that is happening as Tech Companies churn out their new gadgets.

AI is just the latest iteration of that pitch. This time, the AI evangelists claim, there are finally going to be profound changes in education.

This prediction is wrong.

Schools are conservative institutions. They resist disruptive change because that’s the way they are built, for better or worse. 

If they change, they do so incrementally and slowly, and that is purposeful, because if schools radically changed at the arrival of every technological or pedagogical whim, they would be "fad-surfing institutions."

Institutions that surf the latest fads don't ever really fundamentally change in ways beneficial to anybody. Once the hype has faded and the money has been spent on EdTech and AI consultants and technological hardware, the school is still there, and history shows it is mostly no better or worse.

Schools spend millions on these so-called "disruptive and transformative" initiatives, and when the hype dies down and has moved on to the next thing, they are left wondering why things are still the same and where all the money has gone.

True incremental educational change does not come from adopting new gadgets and paying off EdTech and AI consultants.

True incremental change happens when educators as a community of teachers sit down and do the hard work of examining where they are and working to find solutions.

You don't start with a solution looking for a problem to solve, which is what AI seems to be. We did that with PCs, the web, social media, and online learning, only to discover that our long-time problems remained unsolved.

Friday, February 20, 2026

Silicon Valley Big Tech Innovation Model and Ed Tech's Role in It

 Silicon Valley Big Tech Innovation Model…

Big Tech engages in the “BIG Search.” This is where the companies search for the next Tech that will capture and enslave and addict users.

Discovery of the Next Thing. Big Tech companies find a technology or device that has addiction/enslavement potential. (Variation: sometimes they transform an invention into an addictive technology.)

Marketing for Addiction. Tech companies market their product as: a) a must-have tech, or you will be left behind/irrelevant, or worse, a Luddite; b) something everybody is using or will be using, so you will be left out; c) something you might as well adopt and adapt to because the tech is already changing the world for the better. (NOTE: This is said even if it is not, or if its negative consequences are substantial.)

Getting the Ed Institutions On Board. Tech companies next get educators and Ed Tech involved by getting them to "integrate" or "usage-promote" for students. This ensures future and sustainable users and markets for the companies. Also, Ed Tech consultants get a cut of the pie through consulting fees and keynote speaking fees. (NOTE: This is usually done on hearsay and no evidence. Educators who want to do what's best for students are guilt-tripped until they get on board.)

Maintenance of the Addictive Solution/Technology. Tech companies maintain usage through the continued marketing tactics above. They use uncritical acceptance of their product to their advantage. Ed Tech evangelists attack anyone who questions or criticizes. (NOTE: The Luddite name-calling tactic is common.) They market their product as an unequivocal societal good, even as negative consequences stack up.

Big Tech Innovation Cycle Repetition. Tech companies search for more “innovative” addictive tech products. (NOTE: Variation—Big Tech companies buy out other technologies by small new companies and repeat the process above.)

As an educator, what is most worrisome is the uncritical entanglement of Ed Tech with these companies. This forces educators to subject students to these technologies uncritically. 

Educators are expected by Big Tech and the educational establishment to sanitize and tech-wash these products.


Thursday, February 12, 2026

Another AI Company CEO Boasts About AI: Educators Need to Beware of a Used Car Salesman Here

Another AI company CEO, Matt Schumer, is promising major disruptions due to his pet technology. His X post hyping up his AI systems is below.

Schumer: Something Big Is Happening (and I stand to make a bundle, so you need to purchase my Hyperwrite product)

Those who are sharing this individual's AI braggadocio: have you even asked critical questions about these claims? 

First of all, have you considered that this individual has a vested interest that would make him say such things? After all, he wants users to sign up for his product and stands to make a bundle.

Educators, use some critical thinking before you buy into this nonsense. This just continues to fuel the AI bubble, which is going to blow at some point.

These AI CEO Shysters are out for your money and anyone's money and don't really care how their predictions harm others.

Educators should avoid doing anything or subjecting students to any Tech gadgets based on what these CEOs say.

Tuesday, February 10, 2026

EdTech Consultants and Some Educators Suffer from the Borg Complex: They See Resistance to All Technologies as Futile

I think I have found an effective diagnosis of the condition currently suffered by EdTech consultants and evangelists who can't help slobbering over AI: it is called the "Borg Complex."

The Borg Complex is described in a 2013 article entitled "Borg Complex: A Primer" by L.M. Sacasas.

These EdTech AI boosters suffer from the Borg Complex because they "explicitly assert or implicitly assume that resistance to technology is futile." The Borg is a cybernetic alien race in the Star Trek universe that tells its victims it will assimilate them, biologically and technologically, into its collective, and that "Resistance Is Futile."

Our EdTech consultants and boosters tell us educators that we might as well adopt AI because it's here. In other words, "Resistance is futile." 

They might exhibit some of the other symptoms from that article as well:

Symptom 1: "Makes grandiose, but unsupported claims for technology." How often have we heard that "AI is a gamechanger" or that it is "revolutionizing education" with absolutely NO support? 

Symptom 3: "Pays lip service to, but ultimately dismisses genuine concerns." This is repeatedly done when they are presented with new research that points to cognitive outsourcing issues, or when the environmental costs of all these AI server farms are mentioned.

Symptom 4: "Equates resistance or caution to reactionary nostalgia." If you resist AI, you are simply clinging to old inefficient, unproductive ways.

And Symptom 8: "Refers to historical antecedents to solely dismiss present concerns." How many times have I heard an AI booster compare resistance to AI to the resistance to calculators when they were introduced?

Borg Complex Rhetoric is designed to short-circuit any critical thought and critical examination of AI. 


Monday, February 9, 2026

Sometimes All That Technology We Buy Fails and We Need to Admit It

I am not sure EdTech has ever found a technology it did not like or was willing to label a failure.

EdTech gurus often say that when a technology initiative or technology program fails, it’s always due to either:

1-Lack of proper training

2-Lack of fidelity of implementation.

Very rarely will you hear, “Well, that technology use was a flop!” It is never the technology that was the problem.

Gun advocates say something similar: it's the people who use technology, not the technology. (Notice you can change the word technology to guns here.)

It's not visionary to hang on to what isn't working just because it "looks like you're innovative," or because technology justifies your existence, your job, or your consulting position.

Cell phones and screens, as well as tech generally, are disruptive all right.

Their engineering for addiction works all too well. They demand the students’ focus and attention because that’s what Big Tech wants, their eyeballs glued to devices.

Saturday, February 7, 2026

No Technology Is Inevitable and to Make That Claim Is to Limit Possibilities

“Educators might as well accept AI because it is here to stay” so goes one of the pet EdTech and AI evangelist arguments. That is not necessarily true, but let’s look at it more closely.

This statement actually can't be proven right because no one can see the future. Tech comes and goes, so to say it is here to stay is a prediction, not a fact.

In addition, this statement assumes that educators should accept AI in whatever form is being offered. Again, not necessarily true. It is possible to demand that AI be safe and that it be subject to regulations that shape it into something less destructive than it is.

There is also always the consumer choice not to be a user. I do not have to have a subscription or an AI account.

The problem with this statement is that it is authoritarian and totalitarian by nature. It tries to remove choice, and that is a way for Silicon Valley to dictate that their products be accepted.

The principle at work is this: "If you limit and direct what people can imagine, you set the parameters of possibility."

Friday, February 6, 2026

Don't Believe the Silicon Valley Marketing Tactic: AI Is Not Inevitable

Silicon Valley tech companies have taken advantage of clever marketing, favorable public opinion, and shiny magic gadgets to ensnare us with tech designed to be addictive, invasively surveillant, and exploitative.

It is acceptable to question the reality these techno-oligarchs and digital capitalists claim to be making and to see that they aren't actually making our world better. They only make themselves richer, which is evident from the homes they buy, the cars they drive, and even the clothes they wear. They are prospering at the expense of all their users.

Here are just some examples of their past promises and what they've done instead.

The web was to bring glorious access to content that was free, current, and reliable. Instead, we have an internet garbage dump and sewer of nonsense. Search and you don't know what excrement you will get next, and the stench only increases.

Next, social media was supposed to bring us closer together and connect the globe. Instead, we have never been more polarized and divided. Facebook and Twitter have proven to be misinformation machines and BS spreaders. Even LinkedIn is a BS-marketing platform where, if you can package it and sell it to get clicks, you become an "influencer." TikTok, YouTube, all are platforms that allow you to spread excrement and get paid for it.

Then there were cell phones, which were supposed to provide us constant access to all of this: the web, social media, etc. We could always be connected. Instead, they offer always-on-demand addiction and isolation. They even make us less social…just watch a family sitting in a restaurant, all engaged with screens instead of each other. There's connection, but it is to whatever these tech companies want us connected to so they can sell ads and make money from our addictions and data.

Now it is AI. It is here with its promises of taking away all the dirty, distasteful work we don't like doing. It is going to solve all our problems. It promises to make us even more "efficient and free." What will its "instead" be? Even today there are hints.

Instead of fulfilling its promises, AI will bring us a more polluted world because of its increased demands for the power needed for server farms. Coal plants that were going to be decommissioned are being kept online, further polluting the environment. There is even talk of restarting a nuclear plant on the East Coast that almost made a big swath of Pennsylvania into the American version of Chernobyl. 

In addition, instead of fulfilling its promises, AI is causing tech companies to consume even more scarce fresh water to cool their massive server farms in many areas of the country, at a time when it is becoming harder and harder to provide safe drinking water to populations. 

Finally, instead of fulfilling its promises, AI is adding more garbage and sewage to the internet garbage dump with its growing pile of AI slop. The web will become more and more a place of misinformation and nonsense. One can only imagine what the web will be in 15 or 20 years!

As these AI companies and their boosters keep peddling their products as a replacement for human workers, we seem to be getting closer to the utopia of machines that Kurt Vonnegut describes in his novel Player Piano, where people who have no purpose in life live in cities with no future and no hope.

Here’s the lesson: NOTHING BIG TECH INVENTS WAS AND IS INEVITABLE. Our purpose in life is not to use their products or adapt our lives to use their products. We can, with leadership and vision, demand they create products that serve our ends and not just theirs.

Educators who are scrambling to "adapt to AI" because they've been sold on its inevitability are misguided. There is no evidence that it is inevitable in its current form or any form. Choices can be made, and we do not have to surrender in order to make these products successful.