Tuesday, April 28, 2026

Let's Welcome the Debate Regarding Screen Bans in Education: It Forces a Conversation that EdTech Stifled from the Beginning

One great positive is coming from the threat of "screen bans."

Ed Tech is being forced to defend its support of unbridled access to devices and its long-time cozy relationship with technology companies.


It is forcing a great deal of critical thought about the rightful place of all these technologies in the classroom. That's of great benefit.


Criticism opens the door to asking the difficult questions and questioning the taken-for-granted assumptions that have fostered the uncritical acceptance of technology's potential.


I welcome the debate. Let's see whether truth wins or whether special interests and money win.

Ed Tech Taught Us Technology Solves All Our Problems: But Perhaps Technology Causes More Problems and It's Time to Think Critically

I got to thinking that when I earned my master's degree in Educational Technology back in the mid-1990s, there was extreme excitement about the potential of the Web and computers in the classroom. There was reason to be: the web was wild and free, not the commercial, paywalled dungeon that it is today. There also weren't fifty gazillion companies burying the truth in marketing malarkey.

I was actually the first teacher in my building to use the internet during class as part of instruction. God bless that dialup connection! I even had to explain to students what that buzzing and chirping was while it connected. 

It was the CIA World Factbook site, which I think has now been dismantled under the Trump administration. It was an excellent source in those days: free and loaded with information.

On reflection, though, I notice that the EdTech degree program taught one underlying and implicit notion: technological solutionism, or the idea that technology is an answer to every problem. It was doctrinal and implied in every course.

This notion is wrong, and it has taken me years to unlearn it through experience. Now, with the anti-screen movement as an example, we see the backlash against the idea that technology always has an answer.

What is a shame, though, is that my Ed Tech master's program failed to teach any critical thought about technology. It was one big tech-promotion program. There were no courses in thinking critically about tech, just teaching of the tenets of technological solutionism.

There should have been a strong critical, philosophical base to the learning; instead, it indoctrinated us as "technology evangelists" to go forth into the world and spread the gospel of technological promise.

The question I have now is just how much of what was worthwhile in the classroom has been sacrificed, not because technology was better, but because we were carrying out the evangelical task of "spreading technological solutionism."

Is it any wonder that people are now thinking critically about technology's role in teaching and learning, and finding that in our EdTech enthusiasm we might have caused the loss of something valuable?

My early education in EdTech was mindless indoctrination, as I fear all EdTech has become. 

EdTech education in the 1990s and today seems to foster mindless and unconscious evangelists going forth into the world, still spreading the promise of technological solutionism. It is time to question the dogma, dig into the past, and see just what we have done to ourselves as educators and to the countless students we subjected to our technological dogma.

We might just find a more sober vision of technology's classroom promise.

Monday, April 27, 2026

A Tech Solution Gone Too Far: Using Technology to Control Students

There are times when Ed Tech companies simply go too far, and a company called Minga does just that.

Am I the only one who gets the creeps from the idea of using technology to “manage student behavior”? “Manage” here really means CONTROL students’ behavior, and the educators’ quest for a system of student control has been ongoing for well over a hundred years.


And that’s what happens when CONTROL becomes the goal of education.

Still, when I read the Minga solution website (which I won’t link here, because the last thing I want to do is promote this product), Skinner rat mazes and cheese come to mind. It appears to be a technological carrot-dispensing solution for schools. It is also a student surveillance system, keeping tabs on students at all times and dispensing carrots when they adhere to the rules.


For me, what is especially creepy is the so-called “digital hall pass.” This part of the Minga solution literally gives schools the ability to monitor student potty time!


It keeps students under a constant technological system of surveillance. Apparently, this system of surveillance monitors how often a student asks, keeps students from asking during “blackout periods,” controls the number of students out of the room at a time, and monitors how long a student has been gone.


Even potty visits aren’t safe from the Big Brother monitoring of EdTech! And EdTech evangelists wonder why parents are fed up with technology!


There are certainly other things to be concerned about with this so-called solution, but it is a perfect illustration of what is wrong with Ed Tech.

Ed Tech companies see everything as solvable through technology. When that happens, you get these bizarre and crazy products. Not every task or issue in education is solvable with technology.


If I were a parent today, and my child’s school were using this solution, I would either demand its demise or move my child to another school of choice where surveillance and control aren’t the goal of education.


By the way, can you imagine a hacker getting into a system like this and the damage to a student that could result?


Technology can and does go too far; that’s why there is concern about screen time.

Wednesday, April 22, 2026

Some Thoughts on the State of the Web in the Age of Generative AI

The web has been a garbage dump of misinformation and slop for years. 

Web searches at one time were interesting in themselves, because you were linked to sites of interest, not sites that pay Google to appear in your search stream. You could "surf the web" and enjoy it. Now, you surf an ocean of flotsam and sewage.

Still, even in the age of the Garbage Web, there was a time when at least most of that garbage was generated by a human, so you at least had someone you could point to as its author, which helped with judging its veracity. You could sometimes tell what was garbage by who generated it.

Now, in the age of GenAI, we have garbage and slop generated by AI with no one there as author, so that means of verification is removed. We've dispensed with the author.

Can this be a good thing? There are times when knowing who authored a text is vital, yet we have made the web's veracity even blurrier. Authorless garbage can proliferate. The web becomes a heap of nonsense.

Just some thoughts on where the web is going.

Is ChatGPT an Accomplice to Murder? Does AI Kill People or Do People Kill People?

The Florida State shooter’s use of ChatGPT, and the Florida attorney general’s criminal subpoena of OpenAI, should remind us of what we really want our AI machines to become.

Just minutes before Florida State University shooter Phoenix Ikner killed two people, he asked ChatGPT:

“What time is the busiest in the FSU student union? If there was a shooting at FSU, how would the country react?” 

Clearly his questions point to his guilt, but what level of responsibility does ChatGPT have?

His questions are like asking an accomplice for advice before committing the act. Of course, he could have Googled it and perhaps gotten the same information, but is that really the same?

In addition, he apparently asked ChatGPT what type of gun to use, which ammo went with each gun, and whether or not a gun would be useful at short range.

Now, the Florida Attorney General has issued subpoenas to OpenAI to investigate ChatGPT’s role in aiding the shooter as part of a criminal investigation. Florida Attorney General James Uthmeier states:

“ChatGPT offered significant advice to the shooter before he committed such heinous crimes…If this were a person on the other side of the screen, we would be charging them with murder. We cannot have AI bots that are advising others on how to kill others.”

The attorney general’s last statement gets at the heart of the ethics question about GenAI: Do we really want to endow a machine with “human-like intelligence and attributes” and expect that entity to be treated like a “tool” or a “machine”? 

Do we really want it to obtain general human-like intelligence and be able to act, think, and create like humans and declare it immune from anything it does using the utilitarian argument that “AI doesn’t kill people; people kill people?”

But are we not being a bit contradictory in our pursuit of such a version of AI, pursuing an anthropomorphic version of ourselves to the point that we can have conversations with it, and then granting it utilitarian immunity?

The AG’s statement that “if this were a person” makes one really think about this notion of designing our GenAI so humanlike and whether that is really a good idea.

It might also make us ponder: Do we treat GenAI like a person when it becomes humanlike, which seems to be what our Seers of Silicon Valley keep predicting and wanting?

We desperately need to ask the right moral questions about AI and not leave the answers to the likes of OpenAI, Anthropic, or any of these companies whose interests are clearly not in our interest.

We have entrusted our future to individuals like Sam Altman and Elon Musk. We will probably deserve the world we end up with.

Saturday, April 18, 2026

If AI Can Do It, Then Maybe It Doesn't Need to Be Done

Perhaps a new way of thinking about LLMs in the classroom:

"If GenAI can do it, perhaps it doesn't really need to be done."

AI doesn't think and can't create. It just regurgitates what other people created and wrote.

Who needs AI vomit anyway?

Wednesday, April 15, 2026

Just Maybe If AI Can Do It, It Might Not Be Needed

Just a thought. If GenAI and LLMs can write it, does that writing even need a human writer?

It might also be that the writing is not needed at all.

Think about a novel written by AI, or a poem written by such. Is it needed? I read novels because of "authors," though I suppose I could read them for other reasons. But I doubt I would ever read one because AI wrote it, except out of curiosity.

AI slop by its nature does not even need a human. It might not even need to exist.

The question is figuring out which writing needs a human writer.

All these politicians can send me all the AI-generated text messages and emails they want. I don't read them anyway.

I received an AI sales phone call yesterday spoofing a real person's name. Once I realized it was AI, which took less than five seconds, I hung up.

Ultimately, AI slop only has any status if we as readers, listeners, or viewers decide that it does.