Saturday, May 9, 2026

Ed Tech Doesn't Need to Advocate for Technology: It Needs to Shine a Spotlight on Its Flaws Too

One key component of any Digital Literacy Program? Teaching that web content is not always there on its merit; sometimes it's there because someone "pays for its spread" or "games the delivery algorithms."

Of course, the most astute users already know this.

But if we want truly digitally literate students, they need to know the games people play to get noticed.

They need to know that web content, especially content disseminated on platforms, is not necessarily there on its merit, but because it is like a paid infomercial, or because someone knows how to play the algorithmic game.

As schools grapple with AI, literacy instruction should include the games behind AI's creation as well: its use of web content, including pirated copyrighted material, in its development; its use of exploited labor to train models; and its massive consumption of natural resources and power.

Too often, educators get caught up in the shiny gleam of technologies as gems and fail to see that most of the time what they really have are rhinestones.

Ed Tech promotes the "gemstones" myth for all technologies.


Thursday, May 7, 2026

Ed Tech Defends Devices, Not Students: There Is an Attention Problem, and It's Not Screen Time: It's the Products Themselves

Many in Ed Tech are striking back against those wanting to control screen time with bans and restrictions. They rely on the old "utilitarian argument" used by gun advocates: "Devices don't distract students; students distract themselves," they say.

How ludicrous does that sound when the devices are PURPOSEFULLY DESIGNED AND ENGINEERED by Big Tech to “distract” and “capture attention?”

Sure, it's not the time spent behind the screen that is the problem. It's the products that Ed Tech uncritically subjects students to.

It's devices and apps purposefully engineered for addiction. The product is the problem: how it is made is what should concern us.

These companies aren’t going to change their money-making products, and their goal is more and more addictive and distracting designs with each new feature.

In the classrooms, teachers are fighting "mech-dealers" (the tech equivalent of meth dealers), who sell these addictive products and who only want students' attention and data so they can make more money.

That is the problem with screens.

Instead of working with these Ed Tech companies and serving up students and their attention on a silver platter, why not join those who want to address these issues?

But no; as Ed Tech always does when its devices flop, it blames the teachers and the schools.

It’s an implementation problem, they say. Just maybe, the product is the problem.

Come on, Ed Tech: advocate for students, not the tech and those peddling it.


Tuesday, May 5, 2026

Why Abandoning E-Books Restores an Old Experience of Reading

Why do I still buy physical books? Call it reverse digital conversion.

When the Kindle appeared, I was in awe. I have purchased multiple versions over the years and always used the app across devices. It gave me instant access to book purchases (though under Amazon's terms I did not actually own the books; I simply paid for access). It also allowed me to carry a library with me at all times (though I discovered you can still only read one book at a time).

That awe e-books once invoked has long since cooled, and I now buy more physical books than e-books. Why?

The experience of reading a physical book for me is different and more suitable to the way I want to experience reading. 

First of all, reading from a device brings the multitude of distractions that gadgets bring. I can read a physical book on MY TERMS and truly block out the world. Devices are designed for distraction; a physical book has none of that.

Secondly, I like to physically underline as I read and make notes in the margins, and I keep an old-fashioned journal and pen to record quotes and thoughts. I have never been able to get the equivalent features in the Kindle app to work the way I want.

Despite what the tech evangelists tell us, devices sometimes LIMIT, and the e-book limits my reading experience.

Finally, I honestly like to own my books, and not pay for access. I return again and again to my books and right there in them are my notes and thoughts. I don't have to make sure my device battery is charged or whether there's a wifi connection. Just open it and you're there. That's pretty darn efficient.

I still purchase an e-book every now and then, but it is usually one I read for relaxation or passing interest, the kind of book I would normally give away, never one I will re-read or return to. There are also times when I purchase an e-book first, only to find I want a physical copy instead.

There are obvious issues with physical books, but one adapts. For example, the photo below is my solution for the stack of books I kept on the end table by my chair. The book tree works well: I can see my titles and pull and replace them easily. Of course, a single Kindle on the table could replace the whole stack, could it not? God forbid! Then it would bring its distractions, charging issues, and the rest.

This really is an illustration of "reverse digital conversion." We all need to think outside the silicon box technology has placed us in. Perhaps then we will rediscover old experiences and invent new ones. The world is not yet encased in chips, and if it ever is, I don't envy those living in such a place.


Monday, May 4, 2026

What If School Administrators Contributed to the Destruction of Teaching by Blindly Accepting Value-Added Teacher Data?

I've been reading Günther Anders's 1956 book The Obsolescence of the Human, and Anders offers some wisdom for our "AI-machine-worshiping" age. I wish to avoid getting too "anti-tech" here, because others have taken that to extremes, but I think Anders gives us words that should cause us to reflect on the uncritical adoption of AI in all areas of our lives, and specifically in education.

One statement by Anders stands out:

"...humanity used its right hand to rob its left, offering up the loot—its own conscience and freedom to decide—on the altar of machines. With this, humanity proved that it had submitted itself to this manmade calculating robot, was willing to accept this machine as a substitute for its own conscience, and acknowledge it as an oracle machine, and even as the machinic eye of Providence." 

When I read this statement, I could not help but think of all the societal decisions we are handing over to AI systems, decisions such as jail term lengths, car insurance rates, and in education, teacher effectiveness.

We, in effect, turned the determination of "teacher effectiveness" over to algorithms, Anders's "calculating robots," more than ten years ago, when K-12 educational institutions across the country adopted value-added models. So little is said in opposition now that this "so-called data" is assumed to be an actual thing. But we forget that value-added data is a machine creation, and behind the machine, a human one.

Anders is also right about the rationale for why educational institutions jumped on the value-added fad so quickly.

In Anders's terms, by adopting value-added algorithmic determinations of teacher effectiveness, educational leaders "offered up" their own conscience and freedom to decide what a good teacher is "on the altar of machines." In other words, these school administrators surrendered their conscience and decision-making to value-added, calculating robots of statistical measures.

Administrators now, without thought, substitute the value-added algorithmic machine for their own conscience and look to it as "an oracle machine" to tell them what an effective teacher is.

It has become for school leaders and administrators, their "Machinic Eye of Providence" dictating to them which teachers in their buildings are good and which are bad.

I have always had a hunch that the widespread acceptance of value-added measures for determining teacher effectiveness comes down to one simple fact: teaching has always been a complex and somewhat artistic activity, and many school administrators simply do not know a good teacher when they see one. By deferring to this algorithmic "calculating robot," even the most ignorant school leader can have something else decide which teachers in the building are effective and which are not.

Judging teachers requires a "conscience" and a "willingness to make the call, or freedom to decide" what good teaching is. It requires a conscience, because of contextual factors a teacher deals with each day. It requires freedom to decide, because judging teaching needs a "connoisseur of pedagogy" not a cold, calculating robot. This means the "experience" of the judging administrator matters, especially their own experience as an effective teacher themselves.

As I said, value-added algorithmic machines are now just an accepted part of the educational landscape, and that's too bad. Perhaps the blind use of such devices by mindless administrators has done to teaching exactly what was intended. Teaching is no longer an art but the following of a recipe. Reduce the complexity of teaching until even a machine can tell you what is of value, and you have simplified the act into stupidity. No wonder no one wants to teach anymore.

Sunday, May 3, 2026

"There's an app for that...but should there be?" Teaching Students That Answers Sometimes Lie Outside Devices

“There’s an app for that…but should there be?”

That’s the question we should be asking instead of always searching for an app to solve our problems.

To automatically look to technology alone for answers is "tech solutionism." That's narrow-minded and dogmatic thinking.

But that’s the mindset Ed Tech has adopted.

Always turning to tech for answers narrows the range of available solutions.

It’s not thinking outside the box; it’s boxing up the mind into a silicon container.

To teach students to always search for answers in technology makes them dependent upon devices, which is exactly what Big Tech desires.

Big Tech wants addicted users.

To counter that, educators need to be sure to expand their students’ toolboxes beyond the screen.

Our goal, whether intentional or not, should never be to teach students to be "good little consumers of Big Tech's latest."

It should be to teach students to be critical and free users of tech WHEN IT IS THE BEST SOLUTION AND WHEN THEY WANT TO USE IT.

To do that, teach them that the answers sometimes lie outside the world of silicon and microchips.

Ed Tech’s problem has always been its inability to see anything but the gleam of gadgets and devices. It always searches for its answers there.

This always-search-for-answers-in-devices mindset "imprisons" children in a world where the only real answers are found on screens. That's not reality.

That's a dogmatic, narrow-minded approach that will forever have students looking for answers from Silicon Valley and Big Tech. Sounds like device dependency to me!

That’s not growing critical, adaptable, and creative learners.

If there is an app for it, sometimes we need to ask, “Should there be one in the first place?” And “Should I use an app to do this?”

Teach students that and they will be free human beings and not mindless customers for Big Tech and those peddling devices.

Friday, May 1, 2026

Ed Tech's Need for Critical Reflection in a Time of Screen Time Limits

EdTech has some accounting to do now that major questions about the place of technology and screens in schools are being heavily scrutinized.

It is a time for the Ed Tech field to come to a reckoning.

Instead of acting like dogmatic fundamentalists defending their technological tenets of faith, those in the field of Ed Tech should be engaging in mass self-criticism and self-examination, questioning everything they have taken for granted since they first pushed devices into the schools.

Some thoughts on what that should include.

For example, Ed Tech has always had an extremely cozy relationship with those who create and sell the gadgets (and I use that word broadly to cover everything from computers to AI). These companies sponsor Ed Tech conventions, and Ed Tech has given them free, uncritical access to all the educators attending. At these events they hand attendees free gifts and subject them to company-delivered or company-sponsored keynote addresses. They provide "free" training on their products. Not one minute is devoted to critical thinking about the products being peddled.

In this way, Ed Tech has allowed the product companies to control the discourse and the discipline. Leaders who control the budgets, but who do not really understand the technologies, are sold on them; then Ed Tech jumps on board and tries to justify the purchase. This should not be.

Ed Tech needs to develop a conscience. It needs a “critical mind” that looks upon its discipline with skeptical, questioning eyes. 

Instead, salespeople are allowed to promote their wares unquestioned, and then, horrifyingly, we subject our students to them. "Use now and ask questions later," with no regard for the effects on our students, is too often the thinking.

Is it any wonder that these devices and gadgets have sometimes caused much harm and done little good?

Joseph Weizenbaum, computer scientist and pioneer thinker about AI, once wrote:

“There are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them.”

The field of Ed Tech does not get this statement. It sees its devices as always the answer. Its practitioners are most often "pure technology solutionists" who look for problems to solve with their tools, instead of looking at the problems and then trying to find the right tool to solve them. Sometimes they even invent problems in order to solve them with their gadgets.

That’s why they always see their devices as the answer to every educational problem.

But here's the rub: as Weizenbaum points out, just because a computer, a smartphone, or AI can do something does not mean we should use them to do it.

In these times, Ed Tech as a field would do well to reflect critically on itself.

Instead of a field that acts as a conduit to pipe gadgets into the classroom and schools marketed to them by tech companies, Ed Tech educators need to begin asking questions like these:

-Is this something I want technology to do?

-Is it something technology should be doing? 

-Is it just possible, that this learning, this teaching, this task would be best achieved through analog means?

Asking such critical questions, and being skeptical and critical of technology would perhaps give this field the beginnings of some kind of conscience. It would upset the uncritical value tech has and decenter it in the field of education, which is what should happen.

If Ed Tech educators had been critical and skeptical about the role of gadgets in the classroom from the beginning, instead of awestruck by the glow of the devices, they might have headed off the push to limit screens in schools, because educators would have been more discerning before subjecting children to their Ed Tech experiments.

Tuesday, April 28, 2026

Let's Welcome the Debate Regarding Screen Bans in Education: It Forces a Conversation that EdTech Stifled from the Beginning

One great positive is coming from the threat of "screen bans":

Ed Tech is being forced to defend its support of unbridled access to devices and its long-time cozy relationship with technology companies.


It is forcing a great deal of critical thought about the rightful place of all these technologies in the classroom. That's of great benefit.


Criticism allows us to ask the difficult questions and to challenge the taken-for-granted assumptions that have fostered the uncritical acceptance of technology's potential.


I welcome the debate. Let's see whether truth wins, or whether special interests and money win.

Ed Tech Taught Us Technology Solves All Our Problems: But Perhaps Technology Causes More Problems and It's Time to Think Critically

I got to thinking that when I earned my master's degree in Educational Technology back in the mid-1990s, there was extreme excitement about the potential of the Web and computers in the classroom. There was reason to be: the web was wild and free, not the commercial, paywalled dungeon it is today. There also weren't fifty gazillion companies burying the truth in marketing malarkey.

I was actually the first teacher in my building to use the internet during class as part of instruction. God bless that dialup connection! I even had to explain to students what that buzzing and chirping was while it connected. 

I used the CIA World Factbook site, which I think has since been dismantled under the Trump administration. It was free, loaded with information, and an excellent source in those days.

On reflection, though, I notice that the EdTech degree program taught one underlying and implicit notion: technological solutionism, the idea that technology is the answer to every problem. It was doctrinal, implied in every course.

This notion is wrong, and it has taken me years to unlearn it through experience. Now, with the anti-screen movement as an example, we see the backlash against the idea that technology always has an answer.

What is a shame, though, is that my Ed Tech master's program failed to teach any critical thought about technology. It was one big tech-promotion program. There were no courses in critical thinking about tech, just the teaching of the tenets of technological solutionism.

There should have been a strong critical, philosophical base to the learning; instead, it indoctrinated us as "technology evangelists" to go forth into the world and spread the gospel of technological promise.

The question I have now is just how much of what was worthwhile in the classroom has been sacrificed, not because technology was better, but because we were carrying out the evangelical task of "spreading technological solutionism"?

Is it any wonder that people are now thinking critically about technology's role in teaching and learning, and finding that in our EdTech enthusiasm we might have caused the loss of something valuable?

My early education in EdTech was mindless indoctrination, as I fear all EdTech has become. 

EdTech education, in the 1990s and today, seems to foster mindless, unconscious evangelists who go forth into the world still spreading the promise of technological solutionism. It is time to question the dogma, dig into the past, and see just what we have done to ourselves as educators and to the countless students we subjected to our technological dogma.

We might just find a more sober vision of technology's classroom promise.

Monday, April 27, 2026

A Tech Solution Gone Too Far: Using Technology to Control Students

There are times when Ed Tech companies simply go too far, and a company called Minga does just that.

Am I the only one who gets the creeps with the idea of using technology to “manage student behavior?” “Manage” here really means CONTROL students’ behavior, and the educator quest for this system of student control has been ongoing for well over a hundred years.


And that’s what happens when CONTROL becomes the goal of education.

Still, when I read the Minga solution website (which I won't link here, because the last thing I want to do is promote this product), Skinner rat mazes and cheese come to mind. It appears to be a technological carrot-dispensing solution for schools. It is also a student surveillance system, keeping up with students at all times and dispensing carrots when they adhere to the rules.


For me, what is especially creepy is the so-called “digital hall pass.” This part of the Minga solution literally gives schools the ability to monitor student potty time!


It keeps students under constant technological surveillance. Apparently, the system monitors how often a student asks to leave, blocks requests during "blackout periods," controls the number of students out of the room at a time, and tracks how long a student has been gone.


Even potty visits aren’t safe from the Big Brother monitoring of EdTech! And EdTech evangelists wonder why parents are fed up with technology!


There are certainly other things to be concerned about with this so-called solution, but it is a perfect illustration of what is wrong with Ed Tech.

Ed Tech companies see everything as solvable through technology. When that happens, you get these bizarre and crazy products. Not every task or issue in education is solvable with technology.


If I were a parent today, and my child’s school was using this solution, I would either demand its demise or move my child to another school of choice where surveillance and control isn’t the goal of education.


By the way, can you imagine a hacker getting into a system like this and the damage to a student that could result?


Technology can and does go too far; that is why there is such concern over screen time.

Wednesday, April 22, 2026

Some Thoughts on the State of the Web in the Age of Generative AI

The web has been a garbage dump of misinformation and slop for years. 

Web searches were once interesting in themselves, because you were linked to sites of genuine interest, not sites that pay Google to appear in your search stream. You could "surf the web" and enjoy it. Now you surf an ocean of flotsam and sewage.

Still, even in the age of the Garbage Web, there was a time when at least most of that garbage was generated by a human, so you at least had someone you could point to as its author, which helped in judging its veracity. You could sometimes tell what was garbage by who generated it.

Now, in the age of GenAI, we have garbage and slop generated by AI with no one there as author, so that means of verification is gone. We've dispensed with the author.

Can this be a good thing? There are times when knowing who authored a text is vital, yet we have made the web's veracity even blurrier. Authorless garbage can proliferate, and the web becomes a heap of nonsense.

Just some thoughts on where the web is going.

Is ChatGPT an Accomplice to Murder? Does AI Kill People or Do People Kill People?

The Florida State shooter's use of ChatGPT, and the Florida attorney general's criminal subpoena of OpenAI, should remind us of what we really want our AI machines to become.

Just minutes before Florida State University shooter Phoenix Ikner killed two people, he asked ChatGPT:

“What time is the busiest in the FSU student union? If there was a shooting at FSU, how would the country react?” 

Clearly his questions point to his guilt, but what level of responsibility does ChatGPT have?

His questions are like asking an accomplice for advice before committing the act. Of course, he could have Googled it and maybe gotten the same info, but is that really the same?

In addition, he apparently asked ChatGPT what type of gun to use, which ammo went with each gun, and whether a gun would be useful at short range.

Now the Florida attorney general has issued subpoenas to OpenAI as part of a criminal investigation into ChatGPT's role in aiding the shooter. Florida Attorney General James Uthmeier states:

“ChatGPT offered significant advice to the shooter before he committed such heinous crimes…If this were a person on the other side of the screen, we would be charging them with murder. We cannot have AI bots that are advising others on how to kill others.”

The attorney general’s last statement gets at the heart of the ethics question about GenAI: Do we really want to endow a machine with “human-like intelligence and attributes” and expect that entity to be treated like a “tool” or a “machine”? 

Do we really want it to obtain general human-like intelligence and be able to act, think, and create like humans and declare it immune from anything it does using the utilitarian argument that “AI doesn’t kill people; people kill people?”

But are we not being a bit contradictory in our pursuit of such a version of AI, pursuing an anthropomorphic version of ourselves to the point that we can have conversations with it, and then granting it utilitarian immunity?

The AG's statement that "if this were a person" makes one really think about this notion of designing our GenAI to be so humanlike, and whether that is really a good idea.

It might also make us ponder: do we treat GenAI like a person when it "becomes human," which seems to be what our Seers of Silicon Valley keep predicting and wanting?

We desperately need to ask the right moral questions about AI and not leave the answers to the likes of OpenAI, Anthropic, or any of these companies whose interests are clearly not our interests.

We have entrusted our future to individuals like Sam Altman and Elon Musk? Then we will probably deserve the world we end up with.

Saturday, April 18, 2026

If AI Can Do It, Then Maybe It Doesn't Need to Be Done

Perhaps a new way of thinking about LLMs in the classroom:

"If GenAI can do it, perhaps it doesn't really need to be done."

AI doesn't think and can't create. It just regurgitates what other people created and wrote.

Who needs AI vomit anyway?

Wednesday, April 15, 2026

Just Maybe If AI Can Do It, It Might Not Be Needed

Just a thought: if GenAI and LLMs can write it, does that writing even need a human writer?

It might also be that the writing is not needed at all.

Think about a novel or a poem written by AI. Is it needed? I read novels because of their authors, though I suppose I could read them for other reasons. But I doubt I would ever read one because AI wrote it, except out of curiosity.

AI slop by its nature does not even need a human. It might not even need to exist.

The question is figuring out which writing needs a human writer.

All these politicians can send me all the AI-generated text messages and emails they want. I don't read them anyway.

I received an AI sales phone call yesterday spoofing a real person's name. Once I realized it was AI, I hung up; it took less than five seconds.

Ultimately, AI slop only has status if we as readers, listeners, or viewers decide that it does.

I Welcome the Death of the Five-Paragraph Essay and All Standardized Deformed Learning

I have to admit that since GenAI can easily generate a five-paragraph essay on any topic, that fake writing format is dying a welcome death.

Why did we teach such nonsense? In the 1990s, in all their wisdom, our policymakers and educational leaders decided that we English teachers needed to be teaching writing (as if we were not), so they developed a writing test with standardized rubrics and all that garbage.

If you are going to measure writing effectiveness, you have to have standards to measure against, they said.

But measuring writing is like measuring a sunset, a waterfall, or a mountain stream. Go ahead and develop your standards, but look at the deformity you create.

Naturally, when you standardize any aspect of writing, you stupefy it and create some kind of monstrosity. In this case? The five-paragraph mutant essay.

We taught this because our educational leaders demanded it with their accountability assessments, even though in our hearts we knew that true writing can't be standardized. Our administrators demanded "accountability" and wanted "high test scores" for personal boasting. They always seem to need those "measures" to prove their own necessity.

I will acknowledge this positive outcome of GenAI and LLMs: if AI blows up anything, let it destroy this notion of standardizing educational tasks. It has always been nonsense and it still is, so go ahead, GenAI, and blow it all up.

If AI can do it, then let's finally have students engage in authentic learning tasks that AI is ill-equipped to do completely.

Of course with standardized tasks out the window, our educational leaders can no longer compare outcomes and boast of "getting those scores up" but that is a good thing. The true measure of what we learn has never fit a bubble sheet or a rubric.

Finally, and unintentionally, GenAI might just make it possible, at least in this sense, to ask students to learn real writing, while educational leaders will have to find some other measure of their own effectiveness.

Lessons Learned: Preventing Companies from Keeping You and Your Institution Locked In

Since deciding to discontinue my use of Evernote after Bending Spoons eliminated the plan for personal users, I have found my replacement: the Notes app built into macOS.

The macOS Notes app captures what I wanted from my note-taking and the other tasks I was doing with Evernote. It turns out, with some modifications, I can do everything I was doing before.

For example, while the Notes app does not have "notebooks," its "Folders" feature functions in the same manner. You can gather connected documents into a folder and tag it. You can scan documents, insert documents, insert audio recordings, and so on.

Basically, Notes looks very much like Evernote did before Bending Spoons acquired it and began adding bloatware in order to charge customers more.

I suppose Bending Spoons did me a favor. I was really paying to use Evernote when I did not need it. The simple solution was right there all the time.

Sometimes the solution to our problems is already there, and sometimes, when it comes to tech solutions, it’s not the product expanded with bloated features; it’s the simple solution.

Sometimes the "keep it simple" adage is best, and app developers would do well to remember that adding features does not always equate to value for current users. Keep your current users in mind and don't add features that degrade their experience of your product. That is, if you have any loyalty to your current customers.

Keep adding bloated features that pull your product away from what your legacy users want, and expect those users to exit when the costs get too high and your product can be superseded by a solution that captures what they actually want to do.

On the flip side, all users would do well to keep themselves from getting "locked in" to apps and tech products. Stay flexible and portable so you can relocate the moment a developer stops providing the product you want and need.

Figure out a way to transfer those app escape costs back to the app developer where they belong. 

After all, they are trying to engineer their products to keep you "locked in" as a user. With some anti-lock-in measures, you can keep that from happening.
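One concrete anti-lock-in measure is keeping your data in a form you can walk away with. As a hedged sketch only: Evernote can export notebooks as .enex files, which are XML, and a short script can turn such an export into plain text you can carry to any other app. This assumes the commonly documented ENEX layout (an en-export root containing note elements with a title and CDATA-wrapped content); verify against your own export before relying on it.

```python
import html
import re
import xml.etree.ElementTree as ET

def extract_notes(enex_path):
    """Parse an Evernote .enex export into (title, plain_text) pairs.

    Assumes the standard ENEX layout: an <en-export> root whose <note>
    children hold a <title> and a <content> element wrapping ENML
    (XHTML-like markup) in a CDATA section.
    """
    notes = []
    tree = ET.parse(enex_path)
    for note in tree.getroot().iter("note"):
        title = note.findtext("title", default="Untitled")
        enml = note.findtext("content", default="")
        # Strip the ENML tags, unescape entities, and collapse whitespace
        # to recover the readable text of the note.
        text = re.sub(r"<[^>]+>", " ", enml)
        text = html.unescape(text)
        text = re.sub(r"\s+", " ", text).strip()
        notes.append((title, text))
    return notes
```

With the titles and text in hand, writing each note out as its own .txt or Markdown file is trivial, and the escape cost of leaving the app drops to nearly zero.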

Sunday, April 12, 2026

Why Evernote Note Taking App Users Need to Cancel and Delete Their Accounts Now

There is an important reason why anyone with a personal Evernote account should delete it and find an alternative now.

Bending Spoons, which acquired Evernote in 2023, has recently changed its plan offerings for personal users, and both options are unacceptable.

There is the "Starter Plan," which imposes draconian limits on content and device use. This is a problem for someone who has been using Evernote for more than 10 years and can't move to this plan without deleting a great deal of content. It also eliminates one of the major reasons I use the application: the ability to use it across all the devices I want. This plan limits you to 3 devices.

The other plan offering, the “Advanced Plan,” is basically what I have now, with unlimited content and devices, but it is over $100 more per year. I'm sorry, but I do not see Evernote's value increasing that much in one year.

Now, I acknowledge that when you go in and start to cancel your subscription, Bending Spoons offers you a one-time $100 off, which brings it back to $149, but that alone should be a red flag. Why offer only two plans, and then a one-time discount? Do they want to keep me hooked for one more year to get me further locked in as a user? That’s dishonest business in my thinking, but typical of Silicon Valley and Big Tech.

Why would I spend another year, uploading more content to Evernote, only to find myself in the same situation next year? I would have even more content. Perhaps Bending Spoons is gambling that I would take the additional year, and because I have even more content, I would be so invested that I would be forced to continue using Evernote. Not happening with this user.

Another reason to move on from Evernote is that they are apparently using the “Microsoft Product Design Playbook.” That playbook says: “Add a bunch of features to Evernote so you can ultimately charge more, because locked-in users won’t go anywhere.” This includes adding a gazillion features that users haven’t asked for or wanted, then charging those users more. Microsoft has so bloated Windows with “features” that I left their product behind a long time ago, and I am doing the same with Bending Spoons’ Evernote.

I have exported all my content. I have cancelled my subscription. I will delete my account and move on. Bending Spoons could have continued the Personal Plan option, but they gambled and lost with me.

One thing Bending Spoons should learn, just like Microsoft: you can’t treat customers like crap. And don’t assume that added features like AI and video transcription are what all your customers want and will pay for. Not all users want new bells and whistles, especially long-time users who found your product versatile and reliable and have now been dumped on by the company.

Evernote has been deformed beyond use for me by Bending Spoons, and even though they brag on their website that they “Acquire and improve iconic products” they certainly failed in this case. Time to move on and find another solution. 

Saturday, April 11, 2026

Evernote Is History with Me: They Have Lived up to Doctorow's Notion and Have Become Enshittified

The enshittification of Evernote has come to pass.

I have used Evernote for over 10 years; they have tweaked it well at times, and sometimes not so well, but it has stored my reading and writing notes for years.


That, unfortunately, ends today.


Evernote changed their plans and recently increased their yearly subscription price by 50% if you keep what you have; otherwise, you must choose a crappy plan with draconian limits placed on your number of notes, notebooks, and devices used to access their product. ENSHITTIFICATION AT ITS BEST.


I suppose they have to pay for their AI gamble, which I never used anyway.


Cory Doctorow really got it when he coined this term. The only way out is to delete my account.


What's worse, I put in a ticket to question their plans and increases, and EVERNOTE JUST SENT ME AN EMAIL GIVING ME INSTRUCTIONS ON HOW TO EXPORT MY CONTENT AND CANCEL MY SUBSCRIPTION.


After I do that, I'm out. Evernote is history with me.


UPDATE: After I posted, I downloaded all my content from Evernote. Then I logged into my account to cancel my subscription of more than 10 years.


Once I clicked the Cancel button, a pop-up appeared offering my current options for $149 per year instead of the $249-per-year increase. That is Doctorow’s “enshittification” personified!


The email Evernote sent me DECEPTIVELY offered two options: 1) a Starter Option (with draconian content and use limits) for $129 per year, and 2) an Advanced Option for $249 per year that kept all my current features.


Those are poor and unethical business practices in my thinking.


I cancelled my long-time subscription anyway. Who knows how Evernote will treat its users next year now that they have become enshittified!


As an added bit of irony, Bending Spoons, the company that now owns Evernote, boasts on its website: “We acquire and improve iconic products.” Perhaps that would better read: “We acquire and enshittify iconic products.”



Thursday, April 2, 2026

The Next Time You Hear a School Leader Say "AI Is Not Going to Replace Teachers, It Will Replace Teachers Not Using AI" Think

 If "AI is not going to replace teachers, but replace teachers who do not use AI," perhaps we should really look at that statement used by many school leaders pushing for this technology in their schools.

It says a great deal.

1-This school is authoritarian. You must use AI, even if you have proven to be effective without it. If you don't, I will replace you.

2-AI is your savior, accept it, or be gone.

3-Keep your opinions to yourself; they don't matter.

4-No room for critical thought or discussion about the use of AI in this school. Just do it.


When school leaders and AI advocates use this language, they hide their own authoritarian leadership style behind a statement designed to generate fear.


I would question whether I would even want to teach in a school operated with such dictatorial tactics.



Educators Need to Teach True AI and Technology Literacy

Should we be afraid of AI? If you listen to the Seers of Silicon Valley, we should be shaking in our boots. AI is going to displace us from our jobs, turn us into Duracell batteries, and reduce us to gurgling, nonthinking imbeciles, sitting in our homes with technology waiting on us hand and foot.

Not true. Besides, our Seers have gotten much wrong in the past, so why would we expect the Bill Gateses, Alex Karps, or Sam Altmans of the world to have access to anything that resembles our future? Besides, their wealth and futures are entirely dependent upon the fate of their now-favorite technology. That has always been the case.

My real concern here is not with their self-serving prognosticating nonsense, but with what we as educators should be doing if we really give a damn about what is being called “AI Literacy.” 

As a part of “AI Literacy” we should be teaching students the real function of these stories and to see them for what they really are and do. For starters:

1-They make it seem like there is only one possible direction for the development of AI, their chosen route. Not so.

2-They suggest we are powerless to do anything about it and must accept the AI they have provided for us. Not really.

3-They purposely hide who is really going to win and benefit from AI, which includes them and all the minions and bottom-feeders gathering the scraps that fall from their table.

4-The Seers prevent any public debate about their version of AI, and curtail any questioning of the goods they are delivering. That’s Silicon Valley marketing tactics at their best.

5-They also prevent any questioning of the massive resource shift (water, power, minerals, human resources) to their benefit at the expense of everyone else. They are stealing resources for their own wealthy gain.

If we are going to teach students anything about AI, it should be to teach them critical thinking instead of turning AI into an object of worship. We did that with the PC, the Web, and social media, and are reaping the results.

Technology literacy needs to teach students about every aspect of the technologies we use.

As educators, our responsibility is not to generate unquestioning users and consumers for the products developed by the Seers of Silicon Valley.

Our responsibility should transcend making students consumers of technology; it should be empowering them to shape the future with or without technologies. This is done by giving them the gift of critically analyzing what the Seers are saying and not saying.

At least by doing that, we keep our students from becoming the tools of the technologies they use. 

That’s AI literacy and technology literacy at their best!