Friday, May 1, 2026

Ed Tech Critical Reflection Needs in a Time of Screen Time Limits

EdTech has some accounting to do now that the place of technology and screens in schools is being heavily scrutinized.

It is a time for the Ed Tech field to come to a reckoning.

Instead of acting like dogmatic fundamentalists defending their technological tenets of faith, those in the field of Ed Tech should be engaging in mass self-criticism and self-examination, focusing on everything they have taken for granted since they first pushed devices into the schools.

Some thoughts on what that self-examination should cover:

For example, Ed Tech has always had an extremely cozy relationship with those who create and sell the gadgets (and I use that word broadly to cover everything from computers to AI). These companies sponsor Ed Tech conventions, and Ed Tech has allowed them free, uncritical access to all the educators attending. At these events they give attendees free gifts and subject them to company-delivered or sponsored keynote addresses. They provide “free” training on their products. Not one minute is devoted to critical thinking about the products peddled.

In this way, Ed Tech has allowed the product companies to control the discourse and the discipline. Leaders who control the budgets but do not really understand the technologies are sold on them, and then Ed Tech jumps on board and tries to justify the purchase. This should not be.

Ed Tech needs to develop a conscience. It needs a “critical mind” that looks upon its discipline with skeptical, questioning eyes. 

Instead, salespeople are allowed to promote their wares unquestioningly, and then, horrifyingly, we subject our students to them. "Use now and ask questions later," with no regard for the effects on our students, is sometimes the thinking.

Is it any wonder that these devices and gadgets have sometimes caused much harm and little good?

Joseph Weizenbaum, computer scientist and pioneer thinker about AI, once wrote:

“There are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them.”

The field of Ed Tech does not get this statement. It sees its devices as always the answer. Its practitioners are most often “pure technology solutionists” who look for problems to solve with their tools, instead of looking at the problems and then trying to find the tool to solve them. Maybe sometimes they even invent problems in order to use their gadgets to solve them.

That’s why they always see their devices as the answer to every educational problem.

But here’s the rub: as Weizenbaum points out, just because a computer, a smartphone, or AI can do something does not mean we should use it to do it.

In these times, Ed Tech as a field would do well to reflect critically on itself.

Instead of a field that acts as a conduit to pipe gadgets into the classroom and schools marketed to them by tech companies, Ed Tech educators need to begin asking questions like these:

-Is this something I want technology to do?

-Is it something technology should be doing? 

-Is it just possible that this learning, this teaching, this task would be best achieved through analog means?

Asking such critical questions, and being skeptical and critical of technology, would perhaps give this field the beginnings of some kind of conscience. It would upset the uncritical value placed on tech and decenter it in the field of education, which is what should happen.

If Ed Tech educators had been critical and skeptical about the role of gadgets in the classroom from the beginning, instead of being awestruck by the glow of the devices, they might have headed off the push to limit screens in schools, because educators would have been more discerning before subjecting children to devices in their Ed Tech experiments.

Tuesday, April 28, 2026

Let's Welcome the Debate Regarding Screen Bans in Education: It Forces a Conversation that EdTech Stifled from the Beginning

One great positive is coming from the threat of "screen bans."

Ed Tech is being forced to defend its support of unbridled access to devices and its long-time cozy relationship with technology companies.


It is forcing a great deal of critical thought about the rightful place of all these technologies in the classroom. That's of great benefit.


Through criticism, there can be the asking of the difficult questions, and the questioning of the taken-for-granted assumptions that have fostered the uncritical acceptance of technology's potential.


I welcome the debate. Let's see whether truth wins or whether special interests and money win.

Ed Tech Taught Us Technology Solves All Our Problems: But Perhaps Technology Causes More Problems and It's Time to Think Critically

I got to thinking that when I earned my master's degree in Educational Technology back in the mid-1990s, there was extreme excitement about the potential of the Web and computers in the classroom. There was reason to be: the web was wild and free, not the commercial, paywalled dungeon that it is today. There also weren't fifty gazillion companies burying the truth in marketing malarkey.

I was actually the first teacher in my building to use the internet during class as part of instruction. God bless that dialup connection! I even had to explain to students what that buzzing and chirping was while it connected. 

It was the CIA World Factbook site, which I think has now been dismantled under the Trump administration. It was an excellent source in those days: free and loaded with information.

In reflection, though, I notice that the EdTech degree program taught one underlying and implicit notion: technological solutionism, the idea that technology is an answer to every problem. It was doctrinal and implied in every course.

This notion is wrong, and it has taken me years to unlearn it through experience. Now, with the anti-screen movement as an example, we see the backlash against the idea that technology always has an answer.

What is a shame, though, is that my Ed Tech master's program failed to teach any critical thought about technology. It was one big tech-promotion program. There were no courses in critical thought about tech, just teaching of the tenets of technological solutionism.

There should have been a strong critical, philosophical base to the learning; instead, it indoctrinated us as "technology evangelists" to go forth into the world and spread the gospel of technological promise.

The question I have now is just how much of what was worthwhile in the classroom has been sacrificed, not because technology was better, but because we were carrying out the evangelical task of "spreading technological solutionism"?

Is it any wonder that we now have people thinking critically about technology's role in teaching and learning, and finding that perhaps, in our EdTech enthusiasm, we might have caused the loss of something valuable?

My early education in EdTech was mindless indoctrination, as I fear all EdTech education has become.

EdTech education in the 1990s and today seems to foster mindless and unconscious evangelists who go forth into the world, still spreading the promise of technological solutionism. It is time to question the dogma, dig into the past, and see just what we have done to ourselves as educators and to all the countless students whom we subjected to our technological dogma.

We might just find a more sober vision of technology's classroom promise.

Monday, April 27, 2026

A Tech Solution Gone Too Far: Using Technology to Control Students

There are times when Ed Tech companies simply go too far, and a company called Minga does just that.

Am I the only one who gets the creeps at the idea of using technology to “manage student behavior”? “Manage” here really means CONTROL students’ behavior, and the educator quest for such a system of student control has been ongoing for well over a hundred years.


And that’s what happens when CONTROL becomes the goal of education.

Still, when I read the Minga solution website (which I won’t link here, because the last thing I want to do is promote this product), Skinner rat mazes and cheese come to mind. It appears to be a technological carrot-dispensing solution for schools. It is a student surveillance system as well, keeping up with students at all times and dispensing carrots when they adhere to rules.


For me, what is especially creepy is the so-called “digital hall pass.” This part of the Minga solution literally gives schools the ability to monitor student potty time!


It keeps students under a constant technological system of surveillance. Apparently, this system of surveillance monitors how often a student asks, keeps students from asking during “blackout periods,” controls the number of students out of the room at a time, and monitors how long a student has been gone.


Even potty visits aren’t safe from the Big Brother monitoring of EdTech! And EdTech evangelists wonder why parents are fed up with technology!


There are certainly other things to be concerned about with this so-called solution, but it is a perfect illustration of what is wrong with Ed Tech.

Ed Tech companies see everything as solvable through technology. When that happens, you get these bizarre and crazy products. Not every task or issue in education is solvable with technology.


If I were a parent today, and my child’s school was using this solution, I would either demand its demise or move my child to another school of choice where surveillance and control isn’t the goal of education.


By the way, can you imagine a hacker getting into a system like this and the damage to a student that could result?


Technology can and does go too far; that’s why there is the concern with screen time.

Wednesday, April 22, 2026

Some Thoughts on the State of the Web in the Age of Generative AI

The web has been a garbage dump of misinformation and slop for years. 

Web searches were at one time interesting in themselves, because you were linked to sites of interest, not sites that pay Google to appear in your search stream. You could "surf the web" and enjoy it. Now, you surf an ocean of flotsam and sewage.

Still, even in the age of the Garbage Web, there was a time when at least most of that garbage was generated by a human, so you at least had someone you could point to who authored it, which helped with checking its veracity. You could sometimes tell what was garbage by who generated it.

Now, in the age of GenAI, we have garbage and slop generated by AI with no one there as author, so that means of verification is removed. We've dispensed with the author.

Can this be a good thing? There are times when knowing who authored a text is vital, yet we have made the web's veracity even blurrier. Authorless garbage can proliferate, and the web becomes a heap of nonsense.

Just some thoughts on where the web is going.

Is ChatGPT an Accomplice to Murder? Does AI Kill People or Do People Kill People?

The Florida State shooter’s use of ChatGPT and the Florida attorney general’s criminal subpoena of OpenAI should remind us of what we really want our AI machines to become.

Just minutes before Florida State University shooter Phoenix Ikner killed two people, he asked ChatGPT:

“What time is the busiest in the FSU student union? If there was a shooting at FSU, how would the country react?” 

Clearly his questions point to his guilt, but what level of responsibility does ChatGPT have?

His questions are like asking an accomplice for advice before committing the act. Of course, he could have Googled it as well and maybe gotten the same info, but is that really the same?

In addition, he apparently asked ChatGPT what type of gun to use, which ammo went with each gun, and whether or not a gun would be useful at short range.

Now, the Florida Attorney General has issued subpoenas to OpenAI as part of a criminal investigation into ChatGPT’s role in aiding the shooter in this situation. Florida Attorney General James Uthmeier states:

“ChatGPT offered significant advice to the shooter before he committed such heinous crimes…If this were a person on the other side of the screen, we would be charging them with murder. We cannot have AI bots that are advising others on how to kill others.”

The attorney general’s last statement gets at the heart of the ethics question about GenAI: Do we really want to endow a machine with “human-like intelligence and attributes” and expect that entity to be treated like a “tool” or a “machine”? 

Do we really want it to obtain general human-like intelligence and be able to act, think, and create like humans and declare it immune from anything it does using the utilitarian argument that “AI doesn’t kill people; people kill people?”

But are we not being a bit contradictory in our pursuit of such a version of AI, pursuing an anthropomorphic version of ourselves to the point that we can have conversations with it, and then granting it utilitarian immunity?

The AG’s statement that “if this were a person” makes one really think about this notion of designing our GenAI so humanlike and whether that is really a good idea.

It might also make us ponder: do we treat GenAI like a person when it becomes human, which seems to be what our Seers of Silicon Valley keep predicting and wanting?

We desperately need to ask the right moral questions about AI and not leave the answers to the likes of OpenAI, Anthropic, or any of these companies whose interests are clearly not in our interest.

We have entrusted our future to individuals like Sam Altman and Elon Musk. We will probably deserve the world we end up with.

Saturday, April 18, 2026

If AI Can Do It, Then Maybe It Doesn't Need to Be Done

Perhaps a new way of thinking about LLMs in the classroom:

"If GenAI can do it, perhaps it doesn't really need to be done."

AI doesn't think and can't create. It just regurgitates what other people created and wrote.

Who needs AI vomit anyway?