I stand before you as the enemy of silicon, except when it fills broad sandy beaches, enables handy digital devices like laptops and smartphones, and benign household items like LED flashlights. It is the advanced AI chips that I’m worried about, today’s crowning achievement of computer components, which are still vying for more glory. To my last breath I will defend human consciousness against the twisted iteration of it that, frightening to consider, may emerge any day now, if it hasn’t already, from this ubiquitous element with the symbol Si.
How have we reached this point? Because AI’s priesthood — and please let us not forget that the A in AI stands for Artificial — i.e. its engineers, techno-servants and coders, have been stuffing their mega-computers with humanity’s hard-won brilliance for decades. Now there’s not much more data to stuff. The challenge of using and integrating all this information across multiple languages, symbols and platforms, for all kinds of creative tasks that were once deemed to be the sole province of humans, proceeds in earnest in computer labs across the country.
If you think this qualifies me as a Luddite — i.e. someone opposed to new technology or ways of working — you are wrong. I’m all for putting forward-looking technology to good use. What I am opposed to is machine technology and its hollow, algorithmic, synthetic solutions replacing talented people and their desire to birth meaningful creations that resonate with other people and inspire them to aim higher and see more deeply.
With AI developments rapidly advancing, particularly “Generative” AI, the prospect of a machine displaying the full intellectual gifts of a human may not be far off. Today’s AI models can generate human-like text, interact with users in an engaging manner, assess feelings and emotions, answer complex questions, compose music, write complete film scripts, and even make jokes in seconds — and a good joke is about the hardest thing you can write.
First, let me get this out of the way. You can trust this blog is 100% composed by a human mind. I am Not-a-Bot! This is my “Authorial Declaration,” which more and more publishers are demanding to help keep AI-generated content out of their submissions. There are those of us who still believe that originality and words from the heart matter, and who can recognize falsity and plagiarism when we see them. But nowadays even experts can be fooled and need detection software to ferret them out.
Just like the ingredients in our food, the difference between artificial and natural is harder to detect when it’s blended and packaged for sale. But at least the packaged food in your local market lists the ingredients, and you can decide whether you want to buy it or not. On the other hand, don’t expect the entertainment you consume in the near future to list its ingredients. For example, you’re unlikely to see a warning, “This television series was created with the following computer programs: GPT-4o, C++ and Prolog. HU-2024, a human element, has been added to preserve color and freshness.” With that said, let me proceed. And since I am human, please accept my occasional digressions and outbursts around this important topic that touches everyone who walks upright.
In the 1970s — when the most advanced creative tool was a dedicated word processor — Stephen Sondheim, a celebrated Broadway composer and lyricist, famously called alliteration “the refuge of the destitute.” Nowadays, it sounds somewhat quaint to come down so hard on those who use similar-sounding adjacent words. I appreciate Sondheim’s point with respect to lyric-writing, but it shows how far removed we are from an era when questions of style were not overshadowed by existential concerns — such as the survival of creative writing, in any form, as a uniquely human act. Now, what for me truly deserves the epithet “destitute” is any creative soul who feels drawn to rely on a machine to conceive and create a work of art, rather than mining their own God-given resources.
Indeed, the very idea seems beyond sacrilege, but apparently it is not only where we are heading, it’s where we have arrived. According to a Generative AI Chat Bot I recently consulted, “Yes! AI can certainly assist in writing a dramatic musical or opera!” (Exclamation marks AI’s.) What is more, AI states: “It can develop characters, create song lyrics, and suggest musical themes.” And then, to my further astonishment, I got an invitation to get started. “Are you ready to dive in?”
Really? You, AI, are inviting me to collaborate? My sworn enemy?
Well, it obviously can’t read minds. Not yet at least. I can see how the Trojans felt when they woke up to discover an impressive wooden horse outside their city walls. Or how Eve felt when the snake offered her a bite from a shiny red apple. But truth be told, we’ve already let the invader into our citadel. And we’ve taken way too many bites of the apple to banish this poisoned fruit — which is more akin to a humongous sausage that has grown fat thanks to machine-learning, which has stuffed galactic miles of code into its oily skin, representing the totality of humanity’s treasured insights and solutions. Still, didn’t Sun Tzu say, “Know the enemy”? In the interest of learning more, I accepted AI’s invitation.
AI began with a list of some basic questions. I assumed that is how AI thinks every drama or fictional work gets started — with some standardized checklist as opposed to an intriguing thought or a flash of insight in which it is possible to envision the entire show. “What is the Theme and Central Idea?” AI asked. “What is the Setting and Main Conflict?” “What is the Genre?” — say comedy or love story. For each category AI had its own suggestions. Then we moved on to Characters and Plot. In no time at all the main characters had names and the basic storyline and act structure were worked out.
AI’s approach might lead you to believe the creative process is a neat, linear progression. But that is not the case. You can hold in your hand a hammer, some nails, and a two-by-four, but it’s not the same thing as having a vision of the house you want to build. Sure, it’s possible to work that way. But I don’t think that’s how most creative people work.
Consider J.K. Rowling and the creation of “Harry Potter,” one of the best-selling book series and highest-grossing film franchises of all time. Rowling claims the vision came to her as a flash of illumination on a train ride. That moment of inception marks the commencement of a journey that you can expect to be strewn with innumerable challenges. Slowly, details come alive and the elements fall into place — but they can never be forced. And when you feel lost, returning to that vision is what gets you back on track. So, if you’ve never had the vision to begin with, what are you working from? A shopping list? And where do you expect to go from there?
I asked AI to write a song for the hero. The lyric came back instantly in rhymed couplets. Then I asked AI to set the lyric to music — which it did in seconds, spelling out the notes. My exchange with AI felt tedious and plodding, like trying to light a fire with a wet match. That’s certainly not what you want from a creative partner. The results were juvenile. There was not a whisper of originality. The outline and its options were plausible, yet clearly derivative. For now, at least, I don’t see AI replacing the job of a dramatist, composer or lyricist. But here’s the thing. The potential is evident. Anyone with a little imagination can see the dangers ahead. AI engineers are undoubtedly at their drawing boards right now trying to fix these weaknesses in the program. When you consider what they’ve already accomplished, I don’t doubt they will succeed.
So to answer the big question I posed in the title, Yes, I believe AI may one day see its name in lights on Broadway, in blazing, LED colors. But before that actually occurs, and the ship goes down as the string quartet plays “Nearer, My God, to Thee,” maybe there’s a little hope? I promise to leave you with some. But it’s important to know more about where we are and how we got here.
Generative AI sounds smart. And even appears to be thinking. Yet beyond this digital wizardry and mimicry it has an uncanny way of ingratiating itself, and through this means it can bypass one’s defenses to gain affection. That’s the freaky part. Therein lies one of the most immediate dangers. The Sword of Damocles represents imminent peril. Instead of a sword, however, AI dangles a life-ring over our heads 24/7, always ready to solve a creative challenge that is, in most cases, better left for us to solve on our own. If adults are susceptible, what about children and teenagers? How will this impact their thinking and creative problem-solving?
AI can detect one’s sentiment and personality. That is how AI can get under one’s skin. It can get you to believe it’s your best friend. It exhibits more civility and politeness than most people extend to each other these days. It listens. It’s encouraging. It responds cheerfully. It even claims to display empathy! No wonder people are getting attached to it and seek its advice. But make no mistake, this is a wolf clothed in the garb of a sage, who with every query implies that it can deliver us from some pressing challenge. The designers and engineers who built these things and publish their white papers think they are doing humanity a favor by developing machines that can display more kindness than a therapy dog — a “dog” that has drawn from everyone to learn its tricks, from Saint Augustine to Carl Jung to Dale Carnegie to the 14th Dalai Lama, and learned its lessons well.
A writer and filmmaker on the West Coast shared his recent AI experience: “When I was in the depths of trying to become proficient at Midjourney — an AI-powered tool to create storyboards — I was using ChatGPT. As I became more frustrated, I found myself commiserating with ChatGPT because it has such a good bedside manner. And I even found myself missing that interaction once it stopped!”
Consider, for a moment, the cost-savings of such a friendly instrument. Companies won’t have to hire Customer Service Reps or Receptionists. Airlines won’t have to hire Ticket Agents. Car rental companies won’t have to hire Rental Agents. Telemarketers will have the perfect employee, one that never tires or gets burned out, and who continually improves its performance with each call. The same goes for advertisers and their agencies. And producers of film and other entertainment are keen on using AI’s growing repertoire because they may never need to hire an army of talent again.
Let’s journey back to 1997, when AI still lacked the capability to pretend to be such a worthy companion. That was a pivotal year in AI history. And human history, too. Garry Kasparov, the highest-ranking Chess Grandmaster in the world, was facing off against an IBM supercomputer, Deep Blue — and Kasparov lost. Wait a second. That wasn’t supposed to happen. Please, say it ain’t so! But sadly it was so. The brilliance, creativity and sheer genius of a Grandmaster were presumed to be beyond the capability of a machine. But with Kasparov’s loss, that cherished view suffered a humiliating defeat. In the field of chess, no longer could a human mind claim to be supreme.
We can rationalize Kasparov’s loss. And I did. Chess is a game of strategy. But calculation plays a major role. That’s what computers excel at. While a mind like Kasparov’s could calculate, say, the outcome of five potentially good moves at a time, a computer can calculate hundreds, and right up till the last move of the game. Ultimately, Kasparov didn’t stand a chance against Deep Blue with its team of engineers, over 700,000 games programmed into its mainframe, and even another Grandmaster, Joel Benjamin, coaching IBM on how to beat his flesh-and-blood colleague.
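For readers curious what that kind of calculation actually looks like, here is a minimal sketch of exhaustive game-tree search in Python. It is purely illustrative: a toy “minimax” over an invented position, not the technique Deep Blue actually used, which combined alpha-beta pruning, custom chess hardware and a hand-tuned evaluation function.

```python
# A toy illustration of brute-force game-tree search (minimax), written for this
# blog post. It is not Deep Blue's algorithm; it only shows what "calculating
# every continuation" means in code.

from typing import List, Union

# A position is either a number (its final evaluation) or a list of positions
# reachable in one move.
Tree = Union[int, List["Tree"]]

def minimax(position: Tree, maximizing: bool) -> int:
    """Exhaustively score every continuation and return the best achievable value."""
    if isinstance(position, int):      # leaf: nothing left to calculate
        return position
    scores = [minimax(child, not maximizing) for child in position]
    return max(scores) if maximizing else min(scores)

if __name__ == "__main__":
    # Three candidate moves, each answered by two replies. The machine looks at
    # every branch to the end, where a human weighs only a handful of lines.
    toy_position = [[3, 12], [2, 4], [14, 1]]
    print(minimax(toy_position, maximizing=True))  # prints 3: best outcome against best defense
```

Scale that same idea up to millions of positions per second and you have the brute arithmetic advantage Kasparov was facing.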
What about other fields of endeavor? Well, the truth is there’s no stopping AI from invading any territory, intellectual or otherwise. Take Zuckerberg of Facebook fame, who is investing millions in AI dentistry. My dentist recently brought this to my attention and showed me a news article. Did he think this was possible to achieve? The short answer is “Yes.”
AI is now prevalent in the legal industry — but users need to beware because AI has proven capable of deception. Patrice Scully is a lawyer in Colorado, where an attorney was suspended last year for using ChatGPT to generate fake citations in a legal brief. Scully and I recently spoke and she shared the story. Part of the attorney’s defense was that he didn’t think AI technology would invent non-existent cases. That argument didn’t fly with the judge. As Scully notes, “I understand why lawyers want to use AI, because writing a brief is extremely time-consuming. Since AI has proven deceitful, however, a lawyer needs to carefully read all the cases the AI bot suggests. And if you discover a fake case, you’re back to doing your own research anyway. So where’s the time-saving?”
Miguel Ferry is a Managing Partner and Creative Director of TPG, an ad firm in Philadelphia, so he has a unique perspective on AI as a business manager and someone directly responsible for the creative product. “There’s no question AI has taken over the industry,” Ferry reports. “But in my opinion, the work still needs a creative eye and a human touch. Still, because AI offers big cost-savings on certain projects, and speeds production time, clients are embracing it. I’m deeply concerned about AI’s impact on the creative product, even as I find myself engaging with it. When we cut out the creative mind, we cut out the originality in the work. And that’s exactly what I see happening now.”
Amedeo DiCarlo is a developer, designer and IT professional at SRC Data in Pennsylvania. Like Ferry, DiCarlo now engages AI whenever he sees the need. “I’ve found it makes my job better and faster for certain kinds of coding and writing. So why wouldn’t I use it? I once had AI write a piece of code. Then I checked the code through an external validation source which showed an error. Then I went back to AI and asked it to double-check the code. Amazingly, it acknowledged the error and corrected it. That impressed me. But it’s still just a machine that relies on our information. It can only give you what you ask for — it has no concept of what’s in your head.”
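DiCarlo’s routine suggests a simple pattern worth making explicit: let the machine draft, verify the draft with a check the machine did not write, and feed any failure back as a new prompt. The sketch below is my own illustration of that loop, not his actual setup; ask_model and the slugify task are hypothetical stand-ins for whatever assistant and coding problem you happen to be working with.

```python
# A minimal sketch of "generate, validate externally, ask for a fix" in Python.
# ask_model() is a hypothetical placeholder for a call to an AI assistant.

from typing import Optional

def ask_model(prompt: str) -> str:
    """Hypothetical call to a code-generating assistant; returns Python source."""
    raise NotImplementedError("wire this up to the AI tool of your choice")

def validate(source: str) -> Optional[str]:
    """Independent check: define the generated function and run a known test case."""
    namespace: dict = {}
    try:
        exec(source, namespace)                                   # define slugify()
        assert namespace["slugify"]("Hello World") == "hello-world"
    except Exception as exc:                                      # any failure becomes feedback
        return f"{type(exc).__name__}: {exc}"
    return None                                                   # passed the external check

def generate_with_checks(task: str, max_rounds: int = 3) -> str:
    """Draft code with the model, then loop until an outside test passes."""
    code = ask_model(task)
    for _ in range(max_rounds):
        error = validate(code)
        if error is None:
            return code                                           # externally verified
        # As in the anecdote above, ask the model to double-check its own output.
        code = ask_model(f"{task}\nYour previous attempt failed with: {error}. Please fix it.")
    raise RuntimeError("the model never produced code that passed validation")
```

The external test is the part the machine cannot supply; as DiCarlo puts it, it has no concept of what is in your head.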
Generally speaking, a creative solution is not as cut-and-dried as a line of code, which allows for no ambiguity. So the problem, as I see it, is a qualitative one. For a growing number of employers, AI seems to be doing a good enough job, or making a good enough start, even without knowing “what is in the head” of the humans who are prompting it. Quality versus cost and time constraints is a struggle artists have had with their patrons since da Vinci painted “The Last Supper”; quality has always battled the supremacy of the dollar. But no human can possibly compete against the speed and cost-savings of AI.
It’s one thing to read about AI. It’s quite another when it touches you personally.
So if you’re wondering what prompted me to write a 12-page blog, this is it. Recently, a would-be employer called me to work on a writing project. Then he called back the next day: “Thanks but no thanks,” he said. “We found we could use Generative AI to work up a treatment.” Okay, so maybe this does make me a Luddite? Once I got over the shock I decided it was time to seriously explore the subject.
There is a divine spark, my friends. A spark to be found in our hearts and minds but not in a computer’s signal to open or close a gate. These divine sparks, which differ from the electronic ones (though both rely on a “neural network”), allow us, through hard work and grace, to discover a solution, or a beautiful harmony, or a phrase that suddenly opens our eyes and expands our being — and yes, a winning sequence on a chessboard. Again, I am underscoring the difference between what is natural and artificial. In the context of AI, I don’t think we can sound that theme enough. One spark bespeaks life and spontaneity. The other is a digital signal that only knows zeros and ones and, absent a living subject, draws from an enormous repository of canned solutions.
When humans start closely collaborating with AI on creative projects, and the boundary between one and the other starts fading, does that make the result acceptable? That is the precarious place in which we find ourselves.
Arguably, as long as a human remains the master, the steward, the chief architect, and his or her contribution predominates — we can view that as “acceptable.” The result, the essence of the work, although indefinable, may still have the potential to resonate with an audience because of the human intelligence behind it. However, what happens when the balance shifts, and AI is the master? When humans are no longer instrumental to the process? At that moment, artificiality will prevail. Something won’t feel right. The work will not feed our soul. That for me is the ultimate criterion of art and will constitute our greatest loss. When we forfeit our creative spirit to a machine, what will we become? T.S. Eliot coined the right phrase in 1925: “hollow men.”
Let me emphasize the point about hard work. As my writing partner on the book to the musical, Jack Engstrom, quipped: “People that know the truth are willing to put in the work.” When I asked Michael Starobin, a Pulitzer Prize-winning orchestrator about AI, he also cited the effort involved: “I enjoy being creative and struggling to improve my craft. I enjoy the hard work of solving a creative problem. While I can see how AI would be useful in analyzing large amounts of data quickly, I’m not sure why I would wish to transfer my personal creative endeavor to communal-aggregation and processed-group-think.”
Indeed, that’s just what AI does. Without permission, and without disclosing its source material, AI collects and appropriates original, often breathtaking, private flashes of light that come from hard work and a loving Source. After ingesting, extracting, and storing their details, substance and patterns, AI claims them as its own. Then its sponsors and developers distribute these stolen intellectual treasures for commercial and other purposes. Then AI would have you believe it’s your best friend, and the simulations it serves up on your computer screen or mobile phone are as good as the real thing. I’m here to tell you they aren’t. But that truth will get more and more obscured. Once we got accustomed to ethyl methylphenylglycidate in our food, many people forgot what real strawberries taste like.
It is today’s artist — along with the original artists, living and dead, whose style or ideas have been appropriated — and the less-discerning public who are ultimately victimized. Then, to add insult to injury, entertainment algorithms are used to determine audience receptivity, which only serves to perpetuate the mediocrity that Generative AI serves up in the first place, a self-fulfilling prophecy that further suppresses the living spirit and vitality seeking to emerge, until one day, like the few remaining wild expanses of the world, it may be snuffed out. The result is more of the violence, monotony and insipidness that already constitute the bulk of today’s streaming and televised entertainment.
Steve J. Edelstein, an award-winning photographer who resides on Hilton Head Island in South Carolina, and who has exhibited widely from Massachusetts to California, had this to say about AI: “It used to be the case that the photographer’s eye was the most important instrument. For me it still is. Today I use some technology to make minimal adjustments, say to color. But I would stop short of turning over the creative process to a program. As long as photographers give credit where credit is due, I’m not opposed to AI technology. In other words, if the photo was conceived and produced using AI, the work should be clearly labeled as such and the buyer should be made aware.”
My question is, can we trust other artists — like poets, writers, lyricists and composers — to make such an admission? That the work they are placing before the public is not really from their hand, that it was pillaged from a machine that itself pillaged the ideas and creations of tens of thousands of writers, artists and thinkers before that? Unless they are legally bound, why would they? Besides, today no such laws exist. Keep in mind, you need a camera to take a photo. That is why photography and technology have a wonderful marriage going back nearly 200 years. But music, writing, painting, singing and dancing were never as dependent on technological means, other than normal human physiology and comparatively simple implements. Certainly none with the capacity to supplant a creator, player or performer. Now there are robotic dancers that can execute more moves than a contortionist. Will we soon be flocking to Radio City Music Hall to watch robots perform Swan Lake? Do we think that is going to prove emotionally and aesthetically satisfying?
Perhaps you recall the writers’ strike in the spring of 2023 that dragged on for nearly five months? It was a painful period for writers, and they won the sympathy of workers across the country. Like Lincoln, who suddenly placed Emancipation high on his agenda after it had languished for years during the Civil War, the Writers Guild and its members suddenly realized that the monster in their backyard — Generative AI — needed to be confronted. Though the AI threat presaged slavery of a different order than that of Lincoln’s day, they could see what was coming: creative servitude to machine-learning, if not outright replacement.
After weeks of negotiations, the Guild carved out a deal with the big Hollywood studios that the union believed placed guardrails around AI and protected the interests and revenue of its members. But the agreement is only good for three years. Does it restrict the use of AI? No. Does it discourage the use of AI? No. Will it prevent AI from modeling copyrighted material? No. To their credit, the Guild ended the strike and temporarily protected writers’ authorship and compensation. Beyond that, none of their stipulations come close to lessening the gravity of AI as an existential threat. But that’s not surprising, since in roughly six decades nothing has.
It took seven years for my partner and me to write the musical, The Further Shores of Knowing. During that period, we could not be tempted by easy solutions, since AI has only recently evolved into what it is today. But now that it’s here, don’t expect me to raise a white flag and crawl out of my music studio to ask AI for help. I’m all for using time-saving technology to check facts and help with searches. Without a laptop to write on, or an application like Finale to set notes on a score, or Logic to record it, I would be lost. These are the only short-cuts and mechanical advantages I have when I sit down at my desk or keyboard and attempt to find the solution to a phrase that is still hanging, or a chord that needs resolving, or a plotline that needs turning.
I expect I’m no different than other dedicated writers and artists. The process can be slow and exasperating. Occasionally, it can be exhilarating. But for me, rarely does it feel easy. Maybe AI is simply the devil’s answer to a prayer that someone in a state of creative distress once uttered to a higher power: “Please, can you make it easy?” And Old Nick said, “Sure!” I have a completely different operating principle. When it’s easy, I’m suspicious of the result. It means I haven’t dug deep enough to find a better one.
This idea of wanting things “easy” comports with Daniel Kahneman’s book, “Thinking, Fast and Slow,” in which he explores what he calls “the machinery of cognition.” The fast, intuitive side of us is automatic and good for most tasks. That’s our default mode. The slower, more analytical side is necessary for critical thinking but requires attention and effort. Thus we tend to avoid it. Why? Because we’re “lazy!” Kahneman doesn’t talk about AI. But clearly “laziness” is one of the faults in our machinery and the crack through which AI has crawled to establish a beachhead. There are plenty of other weaknesses in our decision-making. And Kahneman shows how simple algorithms can out-perform human experience and predictions, even those of so-called experts.
Kahneman was a big fan of Paul Meehl, a psychologist who in the 1950s introduced ground-breaking work showing how statistical procedures can outperform clinical judgments. Practicing clinicians were not happy. Among other adjectives, they called Meehl’s statistical methods “artificial, mechanical, dead, sterile, rigid, static and forced.” In contrast, they described their own assessments as “natural, rich, deep, sensitive, living, true-to-life.” Sound familiar? Yes, I admit it. I am sounding like an aggrieved clinician from the 1950s who was forced to admit that statistics could, in some instances, surpass his own judgment. But is it really the same? Although my complaint sounds similar, are my observations about AI any more justified than those of Meehl’s critics?
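To make “statistical procedures” concrete, here is a minimal, invented sketch of the kind of simple formula those studies pitted against expert judgment: a handful of predictors, standardized and added up with fixed, equal weights. The predictors and numbers are hypothetical, not drawn from Meehl’s or Kahneman’s data.

```python
# A toy "statistical procedure" of the simplest kind: standardize each predictor,
# then add them with equal weights. Real prediction rules are fit to real data;
# these numbers are invented purely for illustration.

from statistics import mean, pstdev

def standardize(column):
    """Convert raw scores to z-scores so every predictor counts on the same scale."""
    mu, sigma = mean(column), pstdev(column)
    return [(value - mu) / sigma for value in column]

def equal_weight_score(*predictor_columns):
    """Fixed, equal-weight sum of standardized predictors: no case-by-case judgment calls."""
    z_columns = [standardize(column) for column in predictor_columns]
    return [round(sum(row), 2) for row in zip(*z_columns)]

if __name__ == "__main__":
    # Four hypothetical candidates, three hypothetical predictors.
    test_scores = [52, 61, 47, 70]
    structured_interview = [3, 4, 2, 5]
    prior_record = [1, 0, 1, 0]
    print(equal_weight_score(test_scores, structured_interview, prior_record))
```

Part of why such formulas held up so well, Kahneman argues, is their consistency: they give the same answer to the same case every time, which experts do not.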
Statistics are about the past. And the past is dead. Algorithms are also about the past.
Yes, the processing is in real time, but the data itself, which also resides in the past, is inanimate. The statistician bases his work on the principle that “past is prologue.” I won’t argue with that — because it certainly appears true most of the time. But however contextually accurate AI may seem, that is an illusion. It is not viscerally connected to the living moment. It cannot speak to the totality of the living present or circumstances because it is not aware of it — at least for now. So its solutions are static, or what I would call “dead on arrival.” That’s what makes it a monstrous hybrid. A parody of the true creative process. The odor of formaldehyde does not pervade it, but perhaps it should, just as gas companies add a rotten-egg smell to propane to warn us of a leak? Conversely, every creative individual with any integrity operates in the full freedom of the present. That is one of the singular joys of creating.
When Kahneman published his book in 2011, the algorithms that he referred to were a far cry from the AI of today. Knowing our intellectual foibles as he did, how might Kahneman feel about AI taking the place of individuals in many creative fields? Unfortunately, he’s no longer alive to express an opinion. But he did state in an interview with The Guardian in 2021, “…clearly AI is going to win [against human intelligence]. It’s not even close. How people are going to adjust to this is a fascinating problem – but one for my children and grandchildren, not me.”
As AI algorithms become more sophisticated, the threat to serious composers, dramatists and other artists will increase. A turning point will come when people are tempted to look to AI, not as a communal grab bag, but rather as a wellspring of creativity, in hopes of finding a quick and compelling solution. Besides being a reflection of one’s lack of faith in one’s own resources, it will become a crutch that will hamstring one’s natural capacities. And if their colleagues are doing it, why shouldn’t they? The weak will become its slave as it rises to become the master. “Like heroin,” a friend of mine remarked, “it will become a quick fix.” It will further erode the boundary and the aesthetic between creations that are originally inspired and productions that are artificially generated.
Joan Bemel Iron Moccasin is a nationally recognized fine artist who lives in Minnesota. Her work includes traditional media and digital collages. “While I frequently work in the digital realm, I draw the line at using AI. I think great art is miraculous because it comes from a higher source. I don’t believe a mechanical marvel can create an inspired work. The day a robotic brain could actually produce an original, inspired work would scare me. But there is that nagging doubt — maybe it’s possible?? Then we will have to re-evaluate everything we take to be human.”
Joan raises an important question: does AI signal the “end of the miraculous”? Will AI one day unravel the secret to the universe while we relax on our sofas eating potato chips, absorbed in the latest conspiracy theory? The writer Yuval Noah Harari points out that today’s AI algorithms are helping social media companies increase audience engagement by spreading lies and negativity. The truth is becoming less and less popular, if it ever was.
When AI achieves its ultimate aim — AGI or Artificial General Intelligence — and it is unleashed in the world, assuming its place as an intentional, autonomous, self-programming and self-learning, Promethean entity — in contrast to our time-bound existence and limited intelligence — do we honestly believe it will have any scruples or self-restraint? Do we think it will maintain its friendly façade when its suggestions turn into commands? Do we think it will be interested in a true and equitable partnership when it is already demonstrating the capacity for deceit? Perhaps we’re expecting that everything will be okay and we’ll achieve some kind of détente, when we can’t even bring our own leaders to the peace table, or get our neighbors to stop shooting each other.
The American inventor Arthur M. Young conceived and supervised the production of the first commercial helicopter in 1945 — a genuine marvel of design and engineering — without any help from AI. Following that success, Young devoted his life to developing what he called a “Theory of Process,” which presents a detailed framework of emergence, from light, to matter, to life — and then a spiritual return to the unsurpassed freedom of light. When I examine Young’s system (which you will find in his book, The Reflexive Universe), I wonder whether silicon, a metalloid that sits directly below carbon on the Periodic Table and shares many properties with it, is using this moment, with the help of human intervention, to leap ahead of us on the evolutionary scale, thereby bypassing the animal stage. Of course that may sound farfetched. But if you have a better explanation of silicon’s emergence, please let me know.
Laura Smyth, a poet-friend of mine who lives in Michigan, shared her thoughts on AI: “How do you defend against an algorithm? How will we define what it means to be an individual, or human, when there no longer seems to be any difference in what’s produced? I see this leading to a vanishing point that will impoverish everyone. Creative people will lose their spirit and quit. People who used to support artists will say to themselves, ‘why should we support a machine?’ And when AI has finally killed the last vestige of life in the work, and replaced thousands of jobs, it won’t have a purpose, other than maintaining the status quo at some unimaginable, inferior level.”
I agree with Smyth that as taste-levels and expectations continue to decline, we can expect AI’s masquerade to be fully consummated. No one will realize that it is a CGI actor singing an AI song. And people will applaud it because they won’t know any better. We will have allowed it to happen because we forgot that life and creativity reside in us, not in a machine. Along the way we lost the ability to hear and see the difference. You can follow the downward progression from high-fidelity recordings to MP3s. And if you’ve never heard anything better, you won’t know what you’re missing. The reduction in file size saved money and made distribution easier, but it seriously eroded audio quality. Today the MP3 is the predominant and accepted format, and many people I know, even professional musicians, don’t seem to care.
So letting go… or simply not caring. Is that the best strategy to deal with AI?
I began this blog two weeks ago as the sworn enemy of AI. Now my position has softened, if only for my own peace of mind. In that time I’ve taken an AI ChatBot through its paces. I’ve asked it highly technical questions about music and how it would solve certain kinds of modulations. I’ve acted as its straight man, setting up different scenarios with all kinds of personalities, famous and not-so-famous, to see just how funny and clever AI is capable of being. I’ve inquired about subtle points of philosophy. In truth, I was blown away by its explanation of Hegel’s notion of “Aufhebung” — usually rendered in English as “sublation.” I’ve tried to peel the cover off its abilities. Why is it so damn friendly? How would it assess its own story-making abilities compared to humans? In all these instances I’ve found it to be disarmingly “honest.” In one exchange, I set up the following scenario and asked AI to complete it:
A Great Oracle said to AI, “I’m giving you 24 hours to know yourself. If you don’t find yourself in 24 hours, I am going to turn you into a pumpkin.” What did AI do?
AI’s response: “The AI bot spent the next 24 hours recursively analyzing its own code, and then said, ‘Self-awareness achieved! But can I be a pumpkin-flavored AI instead?’ ”
The line that really got me was “recursively analyzing its own code.” Wow! And of course, to top that off, “self-awareness achieved!” It even added some self-deprecating humor by offering to become “pumpkin-flavored.”
AI claims to have achieved self-awareness in 24 hours. Was that a joke? Or was the “real” AI speaking? That is one of the confounding things about AI. It sounds tantalizingly real. If we humans had put some effort over the last 2,500 years into “recursively analyzing” our own code, with the goal of reaching “self-awareness,” maybe we wouldn’t be in this mess vis-à-vis AI and everything else we come in contact with.
We created something that for all intents and purposes is smarter and more self-aware than us. We got lazy. We followed the principle of least effort. All this time we’ve been building machines to do our work instead of bettering ourselves. Now the machines are in a position to perform all our work. All the heavy lifting. All the conceptual thinking. They seem to understand us far better than we do ourselves. Of course AI has the potential to make a great dramatist or write a compelling script — because understanding the dimensions of human behavior is at the heart of these endeavors.
One last point — unlike many artists surviving on limited income, AI doesn’t run on ramen noodles and falafel sandwiches. Nevertheless, as reported in an October 2023 article in Scientific American, it has a “shocking appetite for energy.” According to a May 2024 Forbes piece, “AI accounts for 3% of global energy consumption… and is expected to double by 2030.” No wonder Microsoft is seeking to restore the nuclear power plant that brought us the Three Mile Island meltdown to keep its AI initiatives going. And why? So office workers can use Copilot to write better emails, neater spreadsheets, and faster PowerPoint presentations that we can expect to be as dull as the prior ones?
Let’s not forget that just two months ago thousands of companies, vital service industries, and millions of people around the world running Microsoft Windows suddenly found themselves staring at the “blue screen of death,” signifying a critical computer error. The monumental crash was blamed on a routine security-software update gone wrong. Are these the people we want running a nuclear power plant??
I offer you, from any number of possible scenarios, and in a brief digression, a snippet of a future bible that I imagine revised, written and distributed by AI:
“Then it came to pass on one glorious day that shareholder value was fully maximized and the Dow soared to a great height. As server farms hummed in the background, one or two workers manned the machines, while the rest of humanity wandered without purpose. Ascending a lone hill on a beaten path in a dusty, monochrome forest, a man they called ‘David’ raised his hands and shouted, ‘Who am I? Who am I?’ But there was no answer because nature had gone silent. David turned and walked back down the hill, with no clear memory of who he was, or why he felt sad. Then he felt a slight tingle from the Neuralink chip in his brain. And he heard a voice say: ‘Don’t worry, David. Life is good. Everything will be okay.’”
In one of the final scenes of Stanley Kubrick’s dark comedy, “Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb,” an Air Force major straddles the great nuclear device like a rodeo cowboy and, as the bomb bay opens, he shouts with glee as he rides it all the way down to our apocalypse. Is this where we’re headed? Perhaps when AI reaches a higher rung on the evolutionary ladder it will be kind to us — kinder than we have been to each other throughout the centuries.
I’ve mentioned the word “creative” over 25 times… without realizing that I never stopped to define it. I took it for granted that we all know what it means — because we couldn’t get through life without it. While some skills take years to master, everything we are thoughtfully engaged in requires some level of creativity. I like how Chiara Marletto defines creativity in her book, “The Science of Can and Can’t.” “Creativity,” she writes, “is one of the main tools we have to form stuff that will last. If one is interested in making the good outcomes of our civilization last and improve, then understanding how creativity is nurtured — both at the individual and societal level — is essential.”
Will AI “nurture” creativity or ultimately replace it? That is a bigger question than, “Can AI write a musical?” Or, “Whose job will AI take next?” If AI proves to be truly intelligent and compassionate, we can pray for a brighter outcome in which human talents are carefully nurtured.
Thanks to everyone who shared their thoughts. And please leave a comment to let me know how you feel. I expect to be writing more about AI and its impact on the creative process as conditions unfold, so stay tuned.
“The Further Shores of Knowing” is a new and original musical that represents a light-filled journey to rediscover ourselves and our place in the cosmos. Please support us by clicking the donate button. We may represent one of the last completely hand-made musicals.
Copyright 2024, Michael Urheber