Tuesday, April 28, 2026

Let's Welcome the Debate Regarding Screen Bans in Education: It Forces a Conversation that EdTech Stifled from the Beginning

One great positive is coming from the threat of "screen bans."

EdTech is being forced to defend its support of unbridled access to devices and its long-time cozy relationship with technology companies.


It is forcing a great deal of critical thought about the rightful place of all these technologies in the classroom. That's of great benefit.


Through criticism, we can ask the difficult questions and challenge the taken-for-granted assumptions that have fostered the uncritical acceptance of technology's potential.


I welcome the debate; let's see whether truth wins or whether special interests and money win.

Ed Tech Taught Us Technology Solves All Our Problems: But Perhaps Technology Causes More Problems and It's Time to Think Critically

I got to thinking that when I earned my master's degree in Educational Technology back in the mid-1990s, there was extreme excitement about the potential of the Web and computers in the classroom. There was reason to be: the web was wild and free, not the commercial, paywalled dungeon that it is today. There also weren't fifty gazillion companies burying the truth in marketing malarkey.

I was actually the first teacher in my building to use the internet during class as part of instruction. God bless that dialup connection! I even had to explain to students what that buzzing and chirping was while it connected. 

The site was the CIA World Factbook, which I think has now been dismantled under the Trump administration. It was an excellent resource in those days: free and loaded with information.

In reflection though, I notice that the EdTech degree program taught one underlying and implicit notion: Technological solutionism, or the idea that technology is an answer to every problem. It was doctrinal and implied in every course.

This notion is wrong, and it has taken me years to unlearn it through experience. Now, with the anti-screen movement as an example, we see the backlash against the idea that technology always has an answer.

What is a shame, though, is that my EdTech master's program failed to teach any critical thinking about technology. It was one big tech-promotion program. There were no courses in critical thought about tech, just instruction in the tenets of technological solutionism.

There should have been a strong critical, philosophical base to the learning; instead, it indoctrinated us as "technology evangelists" to go forth into the world and spread the gospel of technological promise.

The question I have now is just how much of what was worthwhile in the classroom has been sacrificed, not because technology was better, but because we were carrying out the evangelical task of "spreading technological solutionism"?

Is it any wonder that people are now thinking critically about technology's role in teaching and learning, and finding that perhaps, in our EdTech enthusiasm, we caused the loss of something valuable?

My early education in EdTech was mindless indoctrination, as I fear all EdTech has become. 

EdTech education, in the 1990s and today, seems to foster mindless and unconscious evangelists who go forth into the world, still spreading the promise of technological solutionism. It is time to question the dogma, dig into the past, and see just what we have done to ourselves as educators and to the countless students we subjected to our technological dogma.

We might just find a more sober vision of technology's classroom promise.

Monday, April 27, 2026

A Tech Solution Gone Too Far: Using Technology to Control Students

There are times when EdTech companies simply go too far, and a company called Minga does just that.

Am I the only one who gets the creeps with the idea of using technology to “manage student behavior?” “Manage” here really means CONTROL students’ behavior, and the educator quest for this system of student control has been ongoing for well over a hundred years.


And that’s what happens when CONTROL becomes the goal of education.

Still, when I read the Minga solution website (which I won’t link here, because the last thing I want to do is promote this product), Skinner rat mazes and cheese come to mind. It appears to be a technological carrot-dispensing solution for schools. It is also a student surveillance system, keeping tabs on students at all times and dispensing carrots when they adhere to rules.


For me, what is especially creepy is the so-called “digital hall pass.” This part of the Minga solution literally gives schools the ability to monitor student potty time!


It keeps students under a constant technological system of surveillance. Apparently, this system of surveillance monitors how often a student asks, keeps students from asking during “blackout periods,” controls the number of students out of the room at a time, and monitors how long a student has been gone.


Even potty visits aren’t safe from the Big Brother monitoring of EdTech! And EdTech evangelists wonder why parents are fed up with technology!


There are certainly other things to be concerned about with this so-called solution, but it is a perfect illustration of what is wrong with EdTech.

Ed Tech companies see everything as solvable through technology. When that happens, you get these bizarre and crazy products. Not every task or issue in education is solvable with technology.


If I were a parent today and my child’s school were using this solution, I would either demand its demise or move my child to another school of choice where surveillance and control aren’t the goal of education.


By the way, can you imagine a hacker getting into a system like this and the damage to a student that could result?


Technology can and does go too far; that’s why there is such concern with screen time.

Wednesday, April 22, 2026

Some Thoughts on the State of the Web in the Age of Generative AI

The web has been a garbage dump of misinformation and slop for years. 

Web searches at one time were interesting in themselves, because you were linked to sites of interest, not sites that pay Google to appear in your search stream. You could "surf the web" and enjoy it. Now, you surf an ocean of flotsam and sewage.

Still, even in the age of the Garbage Web, there was a time when at least most of that garbage was generated by a human, so you at least had someone you could point to as its author, which helped in judging its veracity. You could sometimes tell what was garbage by who generated it.

Now, in the age of GenAI, we have garbage and slop generated by AI with no one there to author it, so that means of verification is removed. We've dispensed with the author.

Can this be a good thing? There are times when knowing who authored a text is vital, yet we have made the web's veracity even blurrier. Authorless garbage can proliferate, and the web becomes a heap of nonsense.

Just some thoughts on where the web is going.

Is ChatGPT an Accomplice to Murder? Does AI Kill People or Do People Kill People?

The Florida State shooter’s use of ChatGPT, and the Florida attorney general’s criminal subpoena of OpenAI, should remind us of what we really want our AI machines to become.

Just minutes before Florida State University shooter Phoenix Ikner killed two people, he asked ChatGPT:

“What time is the busiest in the FSU student union? If there was a shooting at FSU, how would the country react?” 

Clearly his questions point to his guilt, but what level of responsibility does ChatGPT have?

His questions are like asking an accomplice for advice before committing the act. Of course, he could have Googled it as well and maybe gotten the same information, but is that really the same?

In addition, he apparently asked ChatGPT what type of gun to use, which ammo went with each gun, and whether or not a gun would be useful at short range.

Now, the Florida attorney general has issued subpoenas to OpenAI to investigate ChatGPT’s role in aiding this shooter as part of a criminal investigation. Florida Attorney General James Uthmeier states:

“ChatGPT offered significant advice to the shooter before he committed such heinous crimes…If this were a person on the other side of the screen, we would be charging them with murder. We cannot have AI bots that are advising others on how to kill others.”

The attorney general’s last statement gets at the heart of the ethics question about GenAI: Do we really want to endow a machine with “human-like intelligence and attributes” and expect that entity to be treated like a “tool” or a “machine”? 

Do we really want it to obtain general human-like intelligence and be able to act, think, and create like humans and declare it immune from anything it does using the utilitarian argument that “AI doesn’t kill people; people kill people?”

But are we not being a bit contradictory in our pursuit of such a version of AI, pursuing an anthropomorphic version of ourselves to the point that we can have conversations with it, and then granting it utilitarian immunity?

The AG’s statement that “if this were a person” makes one really think about this notion of designing our GenAI so humanlike and whether that is really a good idea.

It might also make us ponder: Do we treat GenAI like a person when it becomes humanlike, which seems to be what our Seers of Silicon Valley keep predicting and wanting?

We desperately need to ask the right moral questions about AI and not leave the answers to the likes of OpenAI, Anthropic, or any of these companies whose interests are clearly not our interests.

We have entrusted our future to individuals like Sam Altman and Elon Musk. We will probably deserve the world we end up with.

Saturday, April 18, 2026

If AI Can Do It, Then Maybe It Doesn't Need to Be Done

Perhaps a new way of thinking about LLMs in the classroom:

"If GenAI can do it, perhaps it doesn't really need to be done."

AI doesn't think and can't create. It just regurgitates what other people created and wrote.

Who needs AI vomit anyway?

Wednesday, April 15, 2026

Just Maybe If AI Can Do It, It Might Not Be Needed

Just a thought: If GenAI and LLMs can write it, does that writing even need a human writer?

It might also be that the writing is not needed at all.

Think about a novel or a poem written by AI. Is it needed? I read novels because of their authors, though I suppose I could read them for other reasons. But I doubt I would ever read one because AI wrote it, except out of curiosity.

AI slop by its nature does not even need a human. It might not even need to exist.

The question is figuring out which writing needs a human writer.

All these politicians can send me all the AI-generated text messages and emails they want. I don't read them anyway.

I received an AI sales phone call yesterday spoofing a real person's name. Once I realized it was AI, I hung up, which took less than five seconds.

Ultimately, AI slop only has status if we as readers, listeners, or viewers decide that it does.

I Welcome the Death of the Five-Paragraph Essay and All Standardized Deformed Learning

I have to admit that since GenAI can easily generate a five-paragraph essay on any topic, that fake writing format dies a welcome death.

Why did we teach such nonsense? In the 1990s, in all their wisdom, our policymakers and educational leaders decided that we English teachers needed to be teaching writing (as if we were not), so they developed a writing test with standardized rubrics and all that garbage.

If you are going to measure writing effectiveness, they said, you have to have standards to measure against.

But measuring writing is like measuring a sunset or a waterfall or a mountain stream. Go ahead and develop your standards, but look at the deformity you create.

Naturally, when you standardize any aspect of writing, you stupefy it and create some kind of monstrosity. In this case? The five-paragraph mutation of an essay.

We taught this because our educational leaders demanded it with their accountability assessments, even though in our hearts we knew that true writing can't be standardized. Our administrators demanded "accountability" and wanted "high test scores" for personal boasting. They always seem to need those "measures" to prove their necessity.

I will acknowledge this positive outcome of GenAI and LLMs: if AI blows up anything, it can destroy this notion of standardizing educational tasks. It has always been nonsense, and it still is, so go ahead, GenAI, and blow it all up.

If AI can do it, then let's finally have students engage in authentic learning tasks that AI is ill-equipped to do completely.

Of course with standardized tasks out the window, our educational leaders can no longer compare outcomes and boast of "getting those scores up" but that is a good thing. The true measure of what we learn has never fit a bubble sheet or a rubric.

Finally, and unintentionally, GenAI might just, at least in this sense, make it possible for students to learn how to do real writing, and educational leaders might have to find some other measure of their own effectiveness.

Lessons Learned: Preventing Companies from Keeping You and Your Institution Locked In as Users

Since my decision to discontinue my use of Evernote after Bending Spoons eliminated the plan for personal users, I have found my replacement: the Notes app within macOS.

The macOS Notes app captures what I wanted to do with my note-taking and the other tasks I was doing in Evernote. It turns out that, with some modifications, I can do all that I was doing before.

For example, while the Notes app does not have “notebooks,” it turns out that its “Folders” feature functions in the same manner. You can gather connected documents into a folder and tag the folder. You can scan documents, insert documents, insert audio recordings, and so on.

Basically, Notes looks very much like Evernote did before Bending Spoons acquired it and began adding bloatware in order to charge customers more.

I suppose Bending Spoons did me a favor. I was really paying to use Evernote when I did not need it. The simple solution was right there all the time.

Sometimes the solution to our problems is already there, and sometimes, when it comes to tech solutions, it’s not the product expanded with bloated features; it’s the simple solution.

Sometimes the “keep it simple” adage is best, and app developers would do well to remember that adding features does not always equate to value for current users. Keep your current users in mind and don’t add features that degrade their experience of your product. That is, if you have any loyalty to your current customers.

Keep adding bloated features that pull your product away from what your legacy and original users want, and you can expect those users to exit when the costs get too high and your product is superseded by a solution that captures what they want to do.

On the flip side, all users would do well to avoid getting “locked in” with apps and tech products. Keep yourself flexible and portable so you can relocate the moment an app developer stops providing the product you want and need.

Figure out a way to transfer those app escape costs back to the app developer where they belong. 

After all, they are trying to engineer their products to keep you “locked in” as a user. With some anti-lock-in measures, you can keep that from happening.
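As a concrete illustration of that portability habit, here is a minimal Python sketch that turns an Evernote `.enex` export into plain-text notes any other app can import. The two-note `SAMPLE_ENEX` string below is made up for the example; real exports follow the same `<en-export>`/`<note>` XML structure, and stripping ENML tags with a regex is a crude simplification that works for simple notes.

```python
# Anti-lock-in sketch: parse an Evernote .enex export into portable
# (title, plain_text, tags) tuples using only the standard library.
import re
import xml.etree.ElementTree as ET

# Hypothetical two-note export for demonstration.
SAMPLE_ENEX = """<?xml version="1.0" encoding="UTF-8"?>
<en-export export-date="20260415T120000Z" application="Evernote">
  <note>
    <title>Reading notes</title>
    <content><![CDATA[<en-note><div>Doctorow on enshittification</div></en-note>]]></content>
    <tag>reading</tag>
  </note>
  <note>
    <title>Writing ideas</title>
    <content><![CDATA[<en-note><div>Blog post draft</div></en-note>]]></content>
  </note>
</en-export>
"""

def enex_to_notes(enex_xml: str):
    """Return a list of (title, plain_text, tags) tuples from an ENEX string."""
    root = ET.fromstring(enex_xml)
    notes = []
    for note in root.iter("note"):
        title = note.findtext("title", default="Untitled")
        content = note.findtext("content", default="")
        # Crudely strip the ENML markup down to plain text.
        text = re.sub(r"<[^>]+>", "", content).strip()
        tags = [t.text for t in note.findall("tag")]
        notes.append((title, text, tags))
    return notes

for title, text, tags in enex_to_notes(SAMPLE_ENEX):
    print(f"{title}: {text} {tags}")
```

Against a real export you would read the file contents first, e.g. `enex_to_notes(open("my_export.enex").read())` (hypothetical filename). Once notes exist as plain text, the "escape cost" of leaving any one app drops to nearly zero.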

Sunday, April 12, 2026

Why Evernote Note Taking App Users Need to Cancel and Delete Their Accounts Now

There is an important reason why anyone who has a personal Evernote account should delete it and find an alternative now.

Bending Spoons, which acquired Evernote in 2023, has recently changed its plan offerings for personal users, and both are unacceptable.

There is the “Starter Plan,” which imposes draconian limits on content and device use. This is a problem for someone who has been using Evernote for more than 10 years and can’t adopt this plan without deleting a great deal of content. It also eliminates one of the major reasons I use the application: the ability to use it across all the devices I want. This plan limits you to three devices.

The other offering, the “Advanced Plan,” is basically what I have now, with unlimited content and devices, but it costs over 100 dollars more per year. I'm sorry, but I do not see Evernote's value increasing that much in one year.

Now, I acknowledge that when you go in and start to cancel your subscription, Bending Spoons offers you a one-time $100 discount, which brings it back to $149, but that alone should be a red flag. Why offer only two plans, and then a one-time discount? Do they want to keep me hooked for one more year to get me further locked in as a user? That’s dishonest business in my thinking, but typical of Silicon Valley and Big Tech.

Why would I spend another year, uploading more content to Evernote, only to find myself in the same situation next year? I would have even more content. Perhaps Bending Spoons is gambling that I would take the additional year, and because I have even more content, I would be so invested that I would be forced to continue using Evernote. Not happening with this user.

Another reason to move on from Evernote is that they are apparently using the “Microsoft product design playbook”: add a bunch of features so you can ultimately charge more, because users are locked in and won’t go anywhere. This includes adding a gazillion features that users haven’t asked for or wanted, then charging them more. Microsoft has so bloated Windows with “features” that I left their product behind a long time ago, and I am doing the same with Bending Spoons’ Evernote.

I have exported all my content. I have cancelled my subscription. I will delete my account and move on. Bending Spoons could have continued the Personal Plan option, but they gambled and lost with me.

One thing Bending Spoons should learn, just like Microsoft: you can’t treat customers like crap. And don’t assume that added features like AI and video transcription are what all your customers want and will pay for. Not all users want new bells and whistles, especially long-time users who found your product versatile and reliable and who have now been dumped on by the company.

Evernote has been deformed beyond use for me by Bending Spoons, and even though they brag on their website that they “Acquire and improve iconic products” they certainly failed in this case. Time to move on and find another solution. 

Saturday, April 11, 2026

Evernote Is History with Me: They Have Lived up to Doctorow's Notion and Have Become Enshittified

The enshittification of Evernote has come to pass.

I have used Evernote for over 10 years. They have tweaked it well sometimes and not so well at other times, but I have used it for years to store my reading and writing notes.


That, unfortunately, ends today.


Evernote changed their plans and recently increased their yearly subscription price by 50% if you keep what you have; otherwise, you choose a crappy plan with draconian limits placed on your number of notes, notebooks, and devices. ENSHITTIFICATION AT ITS BEST.


I suppose they have to pay for their AI gamble, though I never used those features anyway.


Cory Doctorow really got it when he coined this term. The only way out is to delete my account.


What's worse, I put in a ticket to question their plans and increases, and EVERNOTE JUST SENT ME AN EMAIL GIVING ME INSTRUCTIONS ON HOW TO EXPORT MY CONTENT AND CANCEL MY SUBSCRIPTION.


After I do that, I'm out. Evernote is history with me.


UPDATE: After I posted, I downloaded all my content from Evernote. Then I logged into my account to cancel my subscription of over 10 years.


Once I clicked the Cancel button, a pop-up came up offering my current options for $149 per year instead of the $249-per-year increase. That is Doctorow’s “enshittification” personified!


The email Evernote sent me DECEPTIVELY offered two options: 1) a Starter Option (with draconian content and use limits) for $129 per year, and 2) an Advanced Option for $249 per year that kept all my current features.


Those are poor and unethical business practices in my thinking.


I cancelled my long-time subscription anyway. Who knows how Evernote will treat its users next year now that they have become enshittified!


In an added bit of irony, Bending Spoons, the company that now owns Evernote, boasts on its website: “We acquire and improve iconic products.” Perhaps that would better read: “We acquire and enshittify iconic products.”



Thursday, April 2, 2026

The Next Time You Hear a School Leader Say "AI Is Not Going to Replace Teachers; It Will Replace Teachers Not Using AI," Think

If "AI is not going to replace teachers, but will replace teachers who do not use AI," perhaps we should really look at that statement, used by many school leaders pushing this technology in their schools.

It says a great deal.

1-This school is authoritarian. You must use AI, even if you have proven to be effective without it. If you don't, I will replace you.

2-AI is your savior, accept it, or be gone.

3-Keep your opinions to yourself; they don't matter.

4-No room for critical thought or discussion about the use of AI in this school. Just do it.


When school leaders and AI advocates use this language, they hide their own authoritarian leadership style behind a statement designed to generate fear.


I would question whether I would even want to teach in a school operated by such dictatorial tactics.



Educators Need to Teach True AI and Technology Literacy

Should we be afraid of AI? If you listen to the Seers of Silicon Valley, we should be shaking in our boots. AI is going to displace us from our jobs, turn us into Duracell batteries, and turn us into gurgling, nonthinking imbeciles, sitting in our homes with technology waiting on us hand and foot.

Not true. Besides, our Seers have gotten much wrong in the past, so why would we expect the Bill Gateses, Alex Karps, or Sam Altmans of the world to have access to anything that resembles our future? Their wealth and future are entirely dependent upon the fate of their now-favorite technology. That has always been the case.

My real concern here is not with their self-serving prognosticating nonsense, but with what we as educators should be doing if we really give a damn about what is being called “AI Literacy.” 

As a part of “AI Literacy” we should be teaching students the real function of these stories and to see them for what they really are and do. For starters:

1-They make it seem like there is only one possible direction for the development of AI, their chosen route. Not so.

2-We are powerless to do anything about it, and must accept the AI they have provided for us. Not really.

3-They purposely hide who is really going to win and benefit from AI, which includes them and all the minions and bottom-feeders gathering the scraps that fall from their table.

4-The Seers prevent any public debate about their version of AI, and curtail any questioning of the goods they are delivering. That’s Silicon Valley marketing tactics at their best.

5-They also prevent any questioning of the massive resource shift (water, power, minerals, human resources) to their benefit at the expense of everyone else. They are stealing resources for their own wealthy gain.

If we are going to teach students anything about AI, it should be to teach them critical thinking instead of turning AI into an object of worship. We did that with the PC, the Web, and social media, and are reaping the results.

True technology literacy teaches students about every aspect of the technologies they use.

As educators, our responsibility is not to generate unquestioning users and consumers of the products developed by the Seers of Silicon Valley.

Our responsibility should transcend making students consumers of technology; it should be empowering them to shape the future with or without technologies. This is done by giving them the gift of critically analyzing what the Seers are saying and not saying.

At least by doing that, we keep our students from becoming the tools of the technologies they use. 

That’s AI literacy and technology literacy at their best!