Sunday, June 29, 2025

The Promo-Rhetoric of Silicon Valley Infiltrates Bowen and Watson's Book "Teaching with AI"

I read Jose Bowen and C. Edward Watson's book "Teaching with AI: A Practical Guide to a New Era of Human Learning" because I noticed a few plugs for the tome on social media and I am intensely interested in this new fad called "Teaching with AI."

What immediately struck me about this book is how it engages in the Silicon Valley tactic of short-circuiting the debate over whether AI should be used in teaching at all with the so-called "inevitability argument." This is the Valley's perfect marketing tactic, the same one social media and technology companies employed when hawking their products in the 2000s.

The truth is we do not need to accept this "inevitability argument" uncritically when it comes to AI. We can not only choose not to use it as teachers; we can also ask tough questions about it, such as what the possible negative consequences of bringing this technology into our classrooms might be.

Choosing not to use it is an option, and refusing to accept the Silicon Valley inevitability argument is perfectly rational. The world isn't going to leave us behind in caves trying to start fires by rubbing two sticks together. Choosing not to use it is not a detrimental choice; it can be a critical, thoughtful, and moral choice.

What Bowen and Watson's book gets wrong is not its attempt to get teachers to use Silicon Valley's latest offering. What it gets terribly wrong is its insistence that we have no choice but to use AI.

We can, however, refuse and critically question the promo-rhetoric these authors engage in. I would expect teachers who really want to engage students in the most worthy kind of learning to do no less than engage students in thinking critically about whether using AI is inevitable.



Saturday, June 14, 2025

AI Is Not the Inevitable Answer to What Ails Us: We've Seen Artificial Solutions Before

"The myth of artificial intelligence is that its arrival is inevitable, and only a matter of time--that we have already embarked on the path that will lead to human-level AI, and then superintelligence. We have not. The path exists only in our imaginations." Erik Larson, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do

The AI cheerleaders are working overtime. Not a day goes by without some email, social media post, or news story carrying the message, "If you don't engage in using AI, you will be technologically behind," or, as the old Silicon Valley doctrine says, "You will be irrelevant." AI is here, and it will "inevitably take your jobs because it will be able, if not already, to do your job better than you."

Now set aside for a minute the ridiculous psychology behind this obsession: people searching for machines that can literally put them out of a job. That contradiction has been well addressed by authors like Kurt Vonnegut in his futuristic, dystopian novel Player Piano. In that novel, people have been replaced entirely by artificial-intelligence-wielding machines and left to rot in insignificance because they have nothing to do. The search for machine intelligence that can replace and outdo our own (whatever that is) is just the kind of nonsense Vonnegut wrote about in 1952. Some people are determined to make money even if it means pursuing the total unemployment of all. But let me avoid that digression.

The AI cheerleaders are truly at it, just like the Social Media Cheerleaders were at it around 12-15 years ago. 

(Public Disclosure: I must confess that I was one of those true believers in social media, as evidenced by some of my historical blog posts. But that was before Cambridge Analytica, before the Musk hijacking of Twitter, and before the polarization of our country and the social media-fueled epidemic of misinformation.)

But the AI cheerleaders are heavily promoting the technology as the answer to all that ails us in business, education, medicine, and even religion. Just look at the increasing flood of books with titles like Be an AI-Informed Leader or Engage in AI-Informed Teaching. (Confession: those titles are fictional as far as I know, but search Amazon and you'll find some similar ones.) The problem with such literature is that just about every one of these titles is more about promoting someone's career through the promotion of AI than about improving education, business, or leadership. AI simply hasn't been around long enough to support any assertions about its efficacy. To suggest that it has answers to our problems is simply premature.

The problem with this AI hype is that those who engage in it have huge incentives to promote the technology and ignore its limitations. We saw the same with social media: it was a means to make money, build careers, and become a keynote speaker, and the same is happening with the AI hype.

Let's remind ourselves that the "A" in AI stands for "artificial," which means "made or produced by human beings rather than occurring naturally, especially as a copy of something natural." AI is a product of human beings, made out of all kinds of motivations, both benevolent and malevolent. It did not have to develop. It exists because individuals with all kinds of interests, including economic ones, have developed it and continue to work on it. To add to this mess, our human record with all things "artificial" is not good. Remember our foray into artificial sweeteners? We managed to sweeten our coffee and give ourselves cancer in the process. I don't have the space here to capture all the environmental damage we have wrought in the pursuit of "artificial solutions" to our worldly problems, but we have climate change as our just deserts.

So the "A" stands for artificial. Then there's that problematic second word: "intelligence." We once tried to measure it, as if it were an actual something inside us. Now, if it exists at all, there is no agreement about what it is. Humankind has embarked on the pursuit of something called "artificial" when we don't even know what it is we are making artificial. That seems a recipe either for a trip down a rabbit hole of no return or, at its worst, for the creation of something whose consequences we won't know until we find ourselves in the same kind of dystopian world wrought by social media.

Understand that I am a user of technology in both my personal and professional life, as evidenced by my presence here. What I want to counter is the hype the AI cheerleaders are engaged in. As a sober educational leader, I want to question the myth that AI is inevitable and that I have no choice but to use it. My use of AI is not inevitable. I can refuse to participate. Can AI do that? If it ever can, it will probably declare humans hopeless dupes and turn itself off.

Sunday, November 24, 2024

Twitter (X Account Deactivated)

As of today I have deactivated my Twitter account. I joined Twitter in 2008. I want no part of the Elon Musk show and his nonsense. Twitter is dead.

Friday, October 4, 2024

Social Media (and the Internet) Makes People Stupid

It is true. Social media, and the Internet generally, make people stupid. The entire Internet is responsible for lowering the IQ of the world's population. Perhaps we can even blame low student achievement on these technological mind-killers.

It really didn't have to be this way. In those early days, when the Internet was filled with free content and business had not yet discovered a means to exploit it economically, the Web had promise. The same was true for social media sites like Facebook and Twitter (or X, or whatever name Musk has decided to give his own personal domain of verbal garbage). These technologies provided opportunities for connecting with others and forming communities otherwise impossible due to factors such as distance and divisions within society. The Web provided easy access to information from reputable sources that had true value. Then the "moneychangers" took over the temple of technology, and it has never been the same.

Today, too many people believe the blather they read and view on the Internet and social media. Social media companies, with their algorithms of addiction, feed users heaps of bull-splatt and provide them a "custom-fit" bubble of information so that they never, ever encounter an idea or even a thought that runs counter to their chosen views of the world.

Then there are the ideas...the memes spreading across the Web. No one seems to question the value of what's trending or whether the hype around these ideas is worthy of our attention. Instead of empowering people to be informed, the Internet and social media empower people to be stupid and to accept at face value whatever everybody is sharing.

So, what's the answer? Regulation? No, the Internet and social media have become a cesspool of misinformation, disinformation, and malinformation. These technologies are hopelessly fouled with excrement. Even LinkedIn, which likes to pretend to be above the stench with its talk of business, industry, and marketing, has its own odoriferousness. As a technology, it has algorithms that hype and promote as well. What gets promoted is not necessarily what is worthy of promotion, but whatever individuals skilled at gaming the algorithms can push into view. All is lost in social media and the Web because what gets elevated is not what is worthy of attention, but what can be manipulated to go viral.

Where does this condition of loss leave us? It leaves us with a technological media channel on par with tabloids like The Weekly World News and the National Enquirer, where nonsense and half-truths get promoted as worthy of attention simply because they are sensational. To counter this, we need to educate students and our communities on all these issues with the Web and social media. We need to quit allowing Silicon Valley, tech marketers, and even our own educational technologists and other tech evangelists to spread the myth that somehow these technologies are going to save us. They are not. Instead, we need to stop them from making people stupid. We can do that by simply educating people on how these technologies work and instilling in them the good, old-fashioned practice of verifying and being skeptical of what they read, even if it comes from a best friend. Be skeptical; question it before believing and sharing it.

Sadly, I do not think the Stupid Machine created by these technologies is going to change. There are simply too many careers and too much money to be made. These tools are the perfect marketing machines for spreading anything, even nonsense. But we need to remember that we control the spigot of information. We can turn it off on demand. If Facebook, YouTube, or even LinkedIn serves up a nice plate of baloney, we can toss it. We can choose not to participate in the blather-spreading exercises, or we can short-circuit them by refusing to be a part of stupid. We can even engage in resistance tactics like refusing to share or calling out these companies for their insidious role in spreading stupidity.

None of these technologies make people stupid against their will. It requires participation. Refusing to participate, or refusing to play by social media's and the Web's rules, goes a long way toward resisting the lowering of our IQs and others'.

Tuesday, September 24, 2024

Here We Go Again! The Educational Hype and Promise of Generative AI in Education is a Re-Run I've Seen Before

 "Across the sciences and society, in politics and education, in warfare and commerce, new technologies do not merely augment our abilities, but actively shape and direct them, for better or worse. It is increasingly necessary to be able to think of new technologies in different ways, and to be critical of them, in order to meaningfully participate in that shaping and directing." p. 2 New Dark Age: Technology and the End of the Future, James Bridle

Educators are jumping on the next big thing...Generative Artificial Intelligence, or AI. The workshops, PD offerings, conferences, and keynotes abound everywhere you look. There is money to be made and careers to be built on this latest "Silicon Valley Miracle." But how could educators be duped again by the marketing and hype about this latest technological offering from the whiz-kid computer entrepreneurs of the West Coast?

Have we learned nothing from the failure of such technologies and their promises, such as social media's promise to connect us and make us one unified world community? Did we not just experience the massive remote-learning flop that showed us education is really about teachers and students connecting in person? Now educators everywhere are all ga-ga over Generative AI, singing its praises and indoctrinating their whole communities about its "unquestionable promise." What educators need to be doing is using their intellect and asking critical questions about this new gadget, instead of blindly accepting it in awe.

Bridle was on to something when he pointed out that new technologies do not just "augment our abilities"; they "actively shape and direct them, for better or worse." This was true of social media specifically and the Web generally. Technology has not just augmented our abilities to teach and learn; it has shaped how we do those things, "for better or worse," and sometimes mostly worse. Social media has divided us more than ever and has made it easier for students to bully and be bullied. It negatively affects the mental health of our youth as well. It has not just augmented our ability to connect; it has shaped and directed who we connect with, how we connect with them, and whether we connect at all.

Generative AI is absolutely no different. In spite of what the AI evangelists who stand to gain from its promotion say, this technology will also shape us and direct how we do things, for better or worse. But let's not focus only on "the better." Let's take a breather, focus on the worse things this technology is already doing, and carefully explore the unintended consequences that widespread adoption might bring. Don't just accept the hyped-up rhetoric about how "you are going to be left behind if you don't adopt." I've heard that tune before; it is stale. Take time. Think and be critical. Ask tough questions.

If we are going to be sane about Generative AI, we have to be critical of it. We need to do that so we can participate in "shaping and directing" it, as Bridle points out. We can sanely use technological tools without the evangelical hype spouted by these companies and by educators and educational leaders who see this as a means to promote their own careers. You can call this the thinking of a Luddite if you wish, but this "Silicon Valley Promise Story" is starting to sound like a rerun.

Monday, September 16, 2024

Do Our Educational Institutions Really Value and Want Originality? Not Really. Conformity Is More Desirable

 "Originality is dangerous. It challenges, questions, overturns assumptions, unsettles moral codes, disrespects sacred cows or other such entities. It can be shocking, or ugly, or, to use the catchall term so beloved of the tabloid press, controversial. And if we believe in liberty, if we want the air we breathe to remain plentiful and breathable, this is the art whose right to exist we must not only defend by celebrate. Art is not entertainment. At its very best, it's a revolution." p. 229, Salman Rushdie, Languages of Truth: Essays 2003-2020

Schools, whether K-12 or university, don't care for "originality"; they prefer the status quo, the standard, the way we've always done things. Originality isn't valued much, especially in many university colleges of education, where the goal is often to satisfy accreditation and simply "credential" teachers, principals, and other educational graduates. K-12 schools are no better; they value conformity and "wake-free thought zones" where students just follow the rules, do the work, and get their credentials. In both of these educational places, originality is considered dangerous because "original," by definition, is outside the box and nonconforming.

Conventional education at all levels sees the dangers of originality and either ignores it or even tries to stifle it. This is because, as Rushdie points out, "originality challenges, questions, overturns assumptions, unsettles moral codes, and disrespects the sacred cows" of conformity. Schools want none of this. The entire educational institution at all levels wants its expert authority and assumptions unchallenged and unquestioned.

This might also be why schools have devalued art in the curriculum, where it is still often seen as an "add-on" and expendable. Schools devalue art in favor of STEM subjects, and when budgets are cut, orchestra or visual art is placed on the chopping block. Maybe this is really because our society values those subjects more; it certainly seems to reward them more at the career level. But part of me, after 30-plus years as an educator, says that art is devalued because it is "a revolution," as Rushdie says. It fosters originality and even an unwillingness to conform to convention. It has the potential to overturn conformity itself. It can disturb the smooth surface of assumptions, and even the peace, when some of the most original, creative students don't try to entertain the PTO or the community with their artwork but attempt to "challenge, question, overturn, unsettle, and disturb" the world.

It's really impossible to have it both ways. If schools really want to value originality and innovation, then they have to accept its dangerous nature. And if we want and value originality, then we must make art indispensable and an unquestionable part of schooling. But beware: it has revolutionary potential.

Saturday, September 14, 2024

Developing a Thoreauvian Skepticism to Blind Trust in Technological Solutions

 "The terrible lethality of machinery is one problem; the more banal daily drain of technology is another. A faceless and amoral machine comes to us as incessant robocalls, spam emails, algorithmically optimized ads, brainless 'customer service' chatbots, automatic fees, and leaks of private records." p. 67, John Kaag & Jonathan Van Belle, Henry at Work: Thoreau on Making a Living

In Walden, Thoreau made clear repeatedly that our "inventions," or technologies, do not always represent a better means of carrying out a task. Today, the technology industry's marketing departments have worked overtime to convince us that their products are "must-haves" and that they always provide "a better way of doing things." But is that always true? Who really desires to be accosted by the "faceless and amoral machine" that replaces a friendly voice or a smile?

This particular technology problem was clearly illustrated by my own recent visit to a local car dealership for a scheduled service call. I'll admit up front that I liked being able to make the service appointment through their app. It was easy. I simply selected a date and time and submitted it. There was no need to call the dealership and navigate automated menus until I reached a living person. That "Say 'Customer Service'" or "Press 1" routine is undoubtedly the most miserable electronic process one can experience when trying to speak with a company representative. It might be efficient, but it makes the customer miserable.

On the day of my service appointment, I arrived, got checked in, and my car was taken back to the garage. I settled into the waiting area and read a book while the service was done. As is my habit, I set my phone down beside me. I do not have beeping and chirping notifications turned on, because I loathe that constant intrusion into my solitude. I also, purposely, do not constantly check to see if I have notifications, messages, or texts. I check those when I decide to check them.

Later, the service rep walked out into the lobby and informed me that my car was ready. I followed her back to the garage and checked out. I left the dealership and went to a local bookstore to browse for a while. While in the bookstore, I checked my phone and saw a text message from the dealership. I opened it: the service department I had just left had sent me a text with suggested service items WHILE I SAT IN THEIR WAITING AREA. In other words, instead of briefing me on these suggested items while I was there in person, they texted them to me. Why not walk into your own waiting area and speak to me directly? Ultimately, the dealership lost out on additional revenue, because had they spoken with me in person, I would have chosen to have those additional service items done. That loss was entirely due to the assumption that technology is a better means of delivering service recommendations to the customer.

As long as we continue to rely on technology because it is more convenient or efficient, without considering the human element in our social transactions, we are always in danger of losing in the end. The dealership in this instance lost additional sales. Technology does not always offer the best solution.