Monday, September 15, 2025

Data Centers On the Grid: Ballast or Essential Cargo?

 

Back in the days of sailing ships, the captain had a choice when a storm became so severe that it threatened to sink the ship.  He could throw the cargo overboard, lightening the ship enough to save it and its crew for another day.  But doing that would ruin any chance of profiting from the voyage. 

 

It was a hard decision then, and an equally hard decision is facing operators of U. S. power grids as they try to cope with increasing demand for reliable power from data centers, many of which are being built to power the next generation of artificial-intelligence (AI) technologies. 

 

An Associated Press report by Marc Levy reveals that one option many grid operators are considering is to write into their agreements with new data centers an option to cut off power to them in emergencies. 

 

Texas, which is served by a power grid that is largely independent of the rest of the country's networks, recently passed a law that prescribes situations in which the grid operator can disconnect big electricity users such as semiconductor-fab plants and data centers.  This is not an entirely new practice.  For some years, large utility customers have taken the option of being disconnected in emergencies such as extremely hot or cold days that put a peak strain on the grid.  Typically they receive a discount on normal power usage in exchange for giving the grid operator that option.

 

But according to Levy, the practice is being considered in other parts of the country as well.  PJM Interconnection, a large grid operator serving 65 million customers in the mid-Atlantic region, has proposed a rule similar to the one adopted in Texas for its data-center customers.  But an organization called the Digital Power Network, which includes data-center operators and bitcoin miners (another big class of energy users), complained that if PJM adopts this policy, it may scare off future investment and drive data centers to other parts of the U. S.

 

Another concern is rising electricity prices, which some attribute to the increased demand from data centers.  These prices are being borne by the average consumer, who in effect is subsidizing the gargantuan power needs of data centers, which typically pay less per kilowatt-hour than residential consumers do anyway.

 

In a way, this issue is just an extreme example of a problem that power-grid operators have faced since there were power grids:  how to handle peak loads.  Electricity has to be generated at the same time it's consumed; battery storage is starting to change that, but not yet on a scale large enough to matter.  This immediacy requires a power grid to have enough generating capacity to supply the peak load—the most electricity it will ever have to supply on the hottest (or coldest) day under worst-case conditions.

 

The problem with peak loads from an economic view is that many of those generating facilities sit idle most of the time, not producing a return on their investment.  So it has always been a tradeoff:  scrimp on capacity and take a chance that your grid will squeak through the peak, or spend enough to have a margin even under the worst peak load imaginable, and end up with a lot of idle generators and network equipment on your hands most of the time.
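A back-of-the-envelope calculation shows the scale of the problem.  The numbers below are invented for illustration, not real grid data:

```python
# Toy illustration of peak-load economics: how much installed
# capacity sits idle on an average day.  All numbers are made up.

peak_load_mw = 80_000     # worst-case demand the grid must be able to meet
average_load_mw = 45_000  # typical demand over the year

# Capacity factor: the fraction of installed capacity in use on average.
capacity_factor = average_load_mw / peak_load_mw

print(f"Average capacity factor: {capacity_factor:.0%}")      # ~56%
print(f"Idle on an average day:  {1 - capacity_factor:.0%}")  # ~44%
```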

 

When the electric utility business was highly regulated and companies had a guaranteed rate of return, they could build excess capacity without being punished by the market.  But since the deregulatory era of the 1970s, and especially in hyper-free-market environments such as Texas, the grids no longer have this luxury.  This is one reason why load-shedding (the practice of cutting off certain big customers in emergencies) looks so attractive now:  instead of building excess capacity, the grid operator can simply throw some switches and pull through an emergency while ticking off only a few big customers, rather than cutting power to everybody, including the old ladies who might freeze or die of heat exhaustion without it.
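In software terms, the decision logic is simple.  Here is a minimal sketch of that kind of load-shedding, with hypothetical customers and invented numbers:

```python
# Minimal sketch of emergency load-shedding: disconnect interruptible
# big customers, largest first, until demand fits under capacity.
# Customer names and figures are hypothetical.

def shed_interruptible_load(capacity_mw, demand_mw, interruptible):
    shed = []
    for name, load_mw in sorted(interruptible.items(),
                                key=lambda kv: kv[1], reverse=True):
        if demand_mw <= capacity_mw:
            break  # crisis averted; everyone else keeps their power
        demand_mw -= load_mw
        shed.append(name)
    return shed, demand_mw

interruptible = {"data_center_a": 600, "bitcoin_mine": 450, "fab_plant": 300}
shed, remaining = shed_interruptible_load(70_000, 71_000, interruptible)
print(shed)       # ['data_center_a', 'bitcoin_mine']
print(remaining)  # 69950 -- back under the 70,000 MW capacity
```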

 

Understandably, the data-center operators are upset.  They would rather have the grid operators spend money on capacity than spend it themselves on backup generators.  But the semiconductor manufacturers have already learned how to do this, and build the costs of giant emergency-generation facilities into their budgets from the start.

 

Some data-center operators are starting to build their own backup generators so that they can agree to go off-grid in emergencies without interrupting their operations.  After all, it's a lot easier to restart a data center after a shutdown than a semiconductor plant, which can suffer severe damage in a disorderly shutdown, putting it out of action for months and costing many millions of dollars.

 

Compared to plants that make real stuff, data centers can easily offload work to other centers in different parts of the country, or even outside the U. S.  So if there is a regional power emergency, and a global operation such as Google has to shut down one data center, they have plenty more to take up the slack. 

 

It looks to me like the data centers don't have much of a rhetorical leg to stand on when they argue that they shouldn't be subjected to load-shedding agreements that many other large power users already tolerate.  We are probably seeing the usual huffing and puffing that accompanies an industry-wide shift to a policy that makes sense for consumers, power-grid operators, and even the data centers themselves, if they agree to take more responsibility for their own power in emergencies.

 

If electricity gets expensive enough, data-center operators will have an incentive to figure out how to do what they do more efficiently.  There's plenty of low-power technology out there, developed for the Internet of Things and personal electronics.  We all want cheap electricity, but if it's too cheap it leads to inefficiencies that are wasteful on a large scale.  Parts of California in the 1970s had water bills that were practically indistinguishable from zero.  When I moved out there for school in 1972 from water-conscious Texas, I was amazed to see shopkeepers cleaning their sidewalks every morning, not with a broom or a leafblower, but with a spray hose, washing down the whole sidewalk.

 

I don't think they do that anymore, and I don't think we should guarantee all data centers that they'll never lose power in an emergency either.

 

Sources:  Marc Levy's article "US electric grids under pressure from power-hungry data centers" appeared on the Associated Press website on Sept. 13 at https://apnews.com/article/big-tech-data-centers-electricity-energy-power-texas-pennsylvania-46b42f141d0301d4c59314cc90e3eab5. 

Monday, September 08, 2025

The Fading Glory of Subsidiarity

 

I'll get to subsidiarity in a minute.  First, here is why I'm writing about it this morning.

 

For many years, I have subscribed to the Austin American-Statesman, first in its hard-copy paper form, and then, when that got insupportably expensive, in its digital form only.  Already by then it was owned by a large media conglomerate, the Cox Media Group, but the operations and editorial control of the paper remained in Austin.  An outfit called GateHouse Media bought it from Cox in 2018, and relatively little changed when GateHouse's owners bought Gannett Media, the company that runs USA Today, and moved the Statesman under the Gannett umbrella.  That caused some changes, but they were tolerable.  Back in February 2025, however, Gannett sold the Statesman to Hearst Communications, another media conglomerate.

 

This may or may not have anything to do with what happened to me this week, but I suspect it does. 

 

I've been accustomed to propping my iPad on the breakfast table and reading the "e-edition" of the Statesman along with having my cereal and orange juice.  The software worked reasonably well most of the time, and until Wednesday of this week (Sept. 3) everything went smoothly. 

 

Suddenly on Wednesday, I was asked for a password, and the system rejected it.  After futilely trying to reset the password and getting no response from the paper's system, I called a help number and got connected to a man who said there was a software problem, and I should uninstall the Statesman app on my iPad and reinstall it. 

 

I tried that Thursday, but it didn't help.  Then I pretended I was a new subscriber (although I had found a place online that said my subscription was paid up until December of 2025) and tried to subscribe afresh.  Even that didn't work.

 

Finally, I called the help line again.  I spoke to one person, who silently connected me to another person, who sounded like she was working in a boiler room with fifteen other people crowded into a space the size of a VW bus. 

 

She tried to identify me by name and phone number, but all those records had been lost.  (This was also the case when I called the day before.)  Finally, she located me by street address, but the system said I wasn't a subscriber.  I asked if she could look up my subscription record to tell when it expired.  She said because of the transfer to Hearst they didn't have that information, and would I like to subscribe now?

 

Seeing no other option, I said yes.  I'd already spent about half an hour on the phone, and I figured this was the only way to get my paper back.  It took about twenty minutes for her to take my information and put it in the system, and I could hear her asking for help in the background.  Then it took another twenty minutes for me to log on and get my new subscription going, and we never could change the password they started me with. 

 

The entire megillah cost me an hour of time I was not planning to spend, and $360 for a year's subscription to the e-edition only, which was the price point that made me cancel the hard-copy edition a few years ago.  We've had some inflation since then, but not that much.

 

If there is anybody under 40 reading this, you are probably wondering why this old guy insists on paying that much money for stuff he could get for free.  Well, for one thing, while I disagree with the editorial positions of the Statesman staff on most matters, it is still an edited entity that does a fairly good job of telling me what's been going on.  And for another, you can't find that many comics all in one place on the internet for free, or if you can, I don't know where to go.

 

Now for subsidiarity.  It is a term from Catholic social teaching that describes the principle that "issues should be dealt with at the most immediate or local level that is consistent with their resolution."  That's according to Wikipedia, and while that source is slanted on some matters, it is right on with this definition.

 

Going straight to my problem with the Statesman:  most of the paper is written and edited thirty-five miles up the road in Austin.  My e-edition trouble was a local issue, extending in principle no farther than Austin and San Marcos.  The principle of subsidiarity says that the problem with my subscription, the record of when I've subscribed, my credit balance (which has apparently vanished into the bit void), my passwords, and whatever else is relevant to the problem, including the authority to do something about it, should all be right there in Austin, and not stuck in some anonymous server farm in Seattle, controlled from a boiler-room operation in God knows where, and owned by a corporation based in Manhattan that clearly doesn't give a flip about how it treats its new customers.

 

Simply because technology did not yet permit otherwise, newspaper operations prior to about 1970 had to be local, in accordance with the principle of subsidiarity.  If my father had a problem with his subscription to the Fort Worth Star-Telegram, he'd get on the phone and call their office downtown.  A human being less than 20 miles away would answer the phone, and flip through physical pages of paper until he or she found the hand-written subscription records, and the issue would be resolved, or not.  People made mistakes with paper records too, but they were more easily resolved.  I have no idea what's gone wrong with my subscription to the Statesman, but again, only God knows exactly what the problem is, where all the loose ends are, and whether and how it can be resolved, because it's now so complex and involves incompatible computer systems and who knows what else. 

 

I don't have an answer to this problem, except to point out that if we try moving toward systems that are more in accordance with the principle of subsidiarity, a lot of these kinds of problems might take care of themselves. 

 

Sources:  I referred to the Wikipedia articles on subsidiarity and the Austin American-Statesman.

Monday, September 01, 2025

Will AI Doom A Good Chunk of Hollywood?

 

This week's New Yorker carries an article by Joshua Rothman about what artificial intelligence (AI) is poised to do to the arts, broadly speaking.  I'd like to focus on one particularly creepy novelty that AI has recently empowered:  the ability of three guys (one in the U. S., one in Canada, and one in Poland) to produce fully realized short movies without actors, sets, cameras, lights, or any of the production equipment familiar to the motion-picture industry.  The collaborators, who call themselves AI OR DIE, use only prompts to their AI software to do what they do.

 

I spent a few minutes sampling some of their wares.  Their introductory video appears to show what a camera sees while it progresses through a mobile-home park, into one home, where the door opens, another door opens, and finally a piece of toast appears, floating in the air.  Another one-minute clip shows a big guy pretending to be a karate-type expert knocking down smaller harmless people.  If all the characters in that clip were creations of AI without any rotoscoping or other involvement of real humans or their voices, we are very far down a road that I wasn't aware we were even traveling on.

 

Software from a firm called Runway is used not only by AI OR DIE, but by commercial production companies on mainstream films as well.  So far, however, nobody has produced an entire successful feature film using only AI.  But it's only a matter of time, it seems to me.

 

Rothman quotes the AI OR DIE collaborators as saying how thrilled they are when they can have an idea for a scene one day and start making it happen the next.  No years in production hell, fundraising, hiring people, and all the other pre-production hassles that conventional filmmaking entails—just straight from idea to product.  So far, most of what they've done are what Rothman calls "darkly surrealistic comedies."  If the samples I saw were representative, their work reminds me of an afternoon I spent at Hampshire College in Massachusetts, in the 1990s I think it was, at a screening of student animations.

 

Early in the development of any new medium, you will come across works that were made simply to exploit the medium, without much thought given to what ought to be said through it.  The short animations we saw that afternoon were like that.  The students were thrilled to be able to express themselves in this semi-autonomous way.  Chuck Jones, the famous Warner Brothers animated-film director, once said that animators are the only artists who "create life."  Most of the time, though, the students let their thrills outweigh their judgment.  A good many of the films we saw back in Massachusetts that afternoon were in the category of the classic "Bambi Meets Godzilla."  This film, which I am not too surprised to learn was No. 38 in a book of fifty classic animated films, at least meets the Aristotelian criteria of the unities of action, time, and place.  There is one principal action, it happens over less than a 24-hour period, and it happens in only one location.  We see the fawn Bambi, or a reasonable facsimile thereof, browsing peacefully among flowers to idyllic background music.  Suddenly a giant foot—Godzilla's, in fact—drops into the frame and squashes Bambi flat.  End of story.  Most of the other films were like that:  a silly, stupid, or even mildly obscene idea, realized through the painful and tedious process of sole-author animation.

 

Just as our ability to manipulate human life technologically has led us to face fundamental questions about what it means to be human, the ability of only a few people to digitally synthesize works of art that formerly required the intense collaboration and technology-aided actions of hundreds of people will lead us to ask, "What is art?"  And here I'm going to fall back on some classical sources.

 

Plato posited that the transcendentals of truth, goodness, and beauty lie at the roots of the universe.  According to theologian and philosopher Peter Kreeft, art is the cultivation of beauty.  Filmmaking is a type of storytelling, one in which the way the story is told plays as much of a role as the story itself.  And it's obvious that AI can now replace many more expensive, older ways of moviemaking without compromising what are called production values.  The realism of the AI OR DIE clips I saw would fool anybody not thoroughly familiar with the technology into thinking those were real people and real mobile homes.

 

But AI is in the same category as film, cameras, lights, microphones, technicians, and all the other paraphernalia we traditionally associate with film.  These are all means to an end.  And the end is what Kreeft said:  the cultivation of beauty. 

 

I think the biggest change that the use of AI in film and animation is going to make will be economic.  Just as the advent of phototypesetting made entire technology sectors obsolete (platemaking, Linotyping, etc.), the advent of AI in film is going to render obsolete a lot of technical jobs associated with real actors standing in front of real scenery and being photographed with real cameras.

 

Will we still have movie stars?  Well, is Bugs Bunny a movie star?  You can't get his autograph, but nobody would deny he's famous.  And he's just as alive as he ever was. 

 

Before we push the panic button and write off most jobs in Hollywood, bear in mind that live theater survived the advent of radio, film, and television.  It was no longer something you could find in small towns every week, but it survived in some form.  I think film production with real actors in front of cameras will survive in some form too.  But the economic pressure to use AI for more chunks of major-studio-produced films will be so immense that some companies won't be able to resist.  And if the creatives come up with a way to make a film that cultivates beauty, and also uses mostly AI-generated images and sounds, well, that's the way art works.  Artists use whatever medium comes to hand to cultivate beauty.  But it's beauty that must be cultivated, not profits or gee-whiz dirty jokes.  And unfortunately, the dirty jokes and the profits often win out.

 

Sources:  Joshua Rothman's "After the Algorithm" appeared on pp. 31-39 of the Sept. 1-8, 2025 issue of The New Yorker.  I also referred to Wikipedia articles on "Bambi Meets Godzilla" and the software company Runway.  Peter Kreeft's ideas of art as the cultivation of beauty can be found in his Doors in the Walls of the World (Ignatius Press, 2018).

Monday, August 25, 2025

RAND Says AI Apocalypse Unlikely

 

In 2024, several hundred artificial-intelligence (AI) researchers signed a statement calling for serious actions to avert the possibility that AI could break bad and kill the human race.  In an interview last February, Elon Musk mused that there is "only" a 20% chance of annihilation from AI.  With so many prominent people speculating that AI may spell the end of humanity, Michael J. D. Vermeer of the RAND Corporation began a project to explore just how AI could wipe out all humans.  It's not as easy as you might think.

 

RAND is one of the original think tanks, founded in 1948 to develop U. S. military policies, and has since studied a wide range of issues in quantitative ways.  As Vermeer writes in the September Scientific American, he and his fellow researchers considered three main approaches to the extinction problem:  (1) nuclear weapons, (2) pandemics, and (3) deliberately induced global warming.

 

It turns out that nuclear weapons, although capable of killing billions if set off in densely populated areas, would not do the job.  Small remnants of humanity would be left scattered in remote places, and they would probably be enough to reconstitute human life indefinitely.

 

The most likely scenario that would work is a combination of pathogens that together would kill nearly every human who caught them.  The problem here ("problem" from AI's point of view) is that once people figured out what was going on, they would invoke quarantines, much as New Zealand did during COVID, and entire island nations or other isolated regions could survive until the pandemic burned itself out.

 

Artificially induced global warming was the hardest way to do it.  There are compounds such as sulfur hexafluoride that have about 25,000 times the global-warming capability of carbon dioxide.  If you made a few million tons of it and spread it around, it could raise the global average temperature so much that "there would be no environmental niche left for humanity."  But factories pumping megatons of bad stuff into the atmosphere would be hard to hide from people, who naturally would want to know what's going on.
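The scale involved is worth a quick back-of-the-envelope check.  The tonnage below is my own illustrative guess at "a few million tons," not a figure from the RAND study:

```python
# Rough CO2-equivalent of a hypothetical sulfur hexafluoride stockpile,
# using the article's approximate figure of 25,000x the potency of CO2.

gwp_sf6 = 25_000   # global-warming potential relative to CO2 (approximate)
sf6_tons = 3e6     # "a few million tons": assume 3 million for illustration

co2_equivalent_tons = gwp_sf6 * sf6_tons
print(f"{co2_equivalent_tons:.1e} tons CO2-equivalent")  # 7.5e+10

# For scale, annual global CO2 emissions are roughly 4e10 tons, so this
# one stockpile would rival about two years of the whole world's output.
```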

 

So while an AI apocalypse is theoretically possible, all the scenarios they considered had common flaws.  In order for any of them to happen, the AI system would first have to make up its mind, so to speak, to persist in the goal of wiping out humanity until the job was actually done.  Then it would have to wrest control of the relevant technologies (nuclear or biological weapons, chemical plants) and conduct extensive projects with them to execute the goal.  It would also have to obtain the cooperation of humans, or at least their unwitting participation.  And finally, as civilization collapsed, the AI system would have to carry on without human help, as the few remaining humans would be useless for AI's purposes and simply targets for extinction.

 

While this is an admirable and objectively scientific study, I think it overlooks a few things. 

 

First, it draws an arbitrary line between the AI system (which in practice would be a conglomeration of systems) and human beings.  Both now and in the foreseeable future, humans will be an essential part of AI because it needs us.  Let's imagine the opposite scenario:  how would humans wipe out all AI from the planet?  If every IT person in the world just didn't show up for work tomorrow, what would happen?  A lot of bad things, certainly, because computers (not just AI, but increasingly systems involving AI) are intimately woven into modern economies.  Nevertheless, I think issues (caused by stupid non-IT humans, probably) would start showing up, and in a short time we would have a global computer crash the likes of which has never been seen.  True, millions of people would die along with the AI systems.  But I'm not aware of any truly autonomous AI system of any complexity and importance that has no humans dealing with it in any way, as apparently was the case in the 1970 sci-fi film "Colossus:  The Forbin Project."

 

So if an AI-powered system showed signs of getting out of hand—taking over control of nuclear weapons, doing back-room pathogen experiments on its own, etc.—we could kill it by just walking away from it, at least the way things are now.

 

More likely than any of the hypothetical disasters imagined by the RAND folks is a possibility they didn't seem to consider.  What if AI just gradually supplants humans until the last human dies?  This is essentially the stated goal of many transhumanists, who foresee the uploading of human consciousness into computer hardware as their equivalent of eternal life.  They don't realize that their idea is equivalent to thinking that making an animated effigy of oneself will guarantee one's survival after death, much as the ancient Egyptians prepared their pharaohs for the afterlife.

 

But pernicious ideas like this can gain traction, and we are already seeing an unexpected downturn in fertility worldwide as civilizations benefit from technology-powered prosperity.  If AI, and its auxiliary technological forms, ever puts an end to humanity, I think the gradual, slow replacement of humans by AI-powered systems is more likely than any sudden, concentrated catastrophe, like the ones the RAND people considered.  And the creepy thing about this one is that it's happening already, right now, every day.

 

Romano Guardini was a theologian and philosopher who in 1956 wrote The End of the Modern World, in which he foresaw in broad terms what was going to happen to modernity as the last vestiges of Christian influence were replaced by a focus on the achievement of power for power's sake alone.  Here are a few quotes from near the end of the book:  "The family is losing its significance as an integrating, order-preserving factor . . . . The modern state . . . is losing its organic structure, becoming more and more a complex of all-controlling functions.  In it the human being steps back, the apparatus forward."  As Guardini saw it, the only power rightly controlled is exercised under God.  And once God is abolished and man sets up technology as an idol, looking to it for salvation, the spiritual death of humanity is assured, and physical death may not be far behind.

 

I'm glad we don't have to worry about an AI apocalypse that would make a good, fast, dramatic movie, as the RAND people assure us won't happen.  But there are other dangers from AI, and the slow insidious attack is the one to guard against most vigilantly.

 

Sources:  Michael J. D. Vermeer's "Could AI Really Kill Off Humans?" appeared on pp. 73-74 of the September 2025 issue of Scientific American, and is also available online at https://www.scientificamerican.com/article/could-ai-really-kill-off-humans/.  I also referred to the Wikipedia article on sulfur hexafluoride.  The Romano Guardini quotes are from pp. 161-162 of his The End of the Modern World, in an edition published by ISI Press in 1998. 

Monday, August 18, 2025

Is the Internet Emulsifying Society?

 

About a year ago I had cataract surgery, which these days means replacing the natural lens in the eye with an artificial one.  Curious about what happens to the old lens, I looked up the details of the process.  It turns out that one of the most common procedures uses an ultrasonic probe to emulsify the old lens, turning a highly structured and durable object that served me well for 70 years into a liquefied mess that was easily removed. 

 

If you're wondering what this has to do with the internet and society, be patient.

 

A recent report in The Dispatch by Yascha Mounk describes the results of an analysis by Financial Times journalist John Burn-Murdoch of data from a large Understanding America survey of more than 14,000 respondents.  Psychologists have standardized certain personality traits as being fairly easy to assess in surveys and also predictive of how well people do in society.  Among these traits are conscientiousness, extraversion, agreeableness, and neuroticism.  People who are conscientious make good citizens and employees:  they are "organized, responsible, and hardworking."  Extraversion makes for better social skills and community involvement, agreeableness for smoother relations with others, while neuroticism indicates a tendency toward anxiety and depression.

 

Burn-Murdoch divided up the results by age categories, with the youngest being 16 to 39, and compared the rates of these traits to what prevailed in the full population in 2014, less than ten years ago.  The results are shocking.

 

All three age groups (16-39, 40-59, and 60+) have declined in extraversion from the 50th to the 40th percentile, a relatively modest drop of ten points.  (If a trait were unchanged from 2014, it would sit at the 50th percentile today.)  But in neuroticism, those under 40, who were already at the 60th percentile in 2014, have now zoomed up to the 70th.  Lots of young neurotics out there.  And they have distinguished themselves even more in agreeableness (declining from 45 to 35) and, most of all, in conscientiousness.  From a relatively good 47th percentile or so in 2014, the younger set has plummeted to an abysmal 28th percentile in less than a decade.
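To keep the numbers straight, here they are collected in one place (approximate figures as read from Burn-Murdoch's charts; the extraversion decline applies to all age groups, the rest to those under 40):

```python
# Percentile standings of the under-40 group, 2014 vs. today,
# as roughly reported in the article.

shifts = {
    "extraversion":      (50, 40),   # all age groups declined similarly
    "neuroticism":       (60, 70),   # higher is worse here
    "agreeableness":     (45, 35),
    "conscientiousness": (47, 28),
}

for trait, (then, now) in shifts.items():
    print(f"{trait:18s} {then:2d} -> {now:2d}  ({now - then:+d} points)")
```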

 

When the results of conscientiousness are broken down into their constituent parts, it gets even worse.  Starting about 2016, the 16-39 group shows jumps in positive responses to "is easily distracted" and "can be careless." 

 

If the survey had been restricted to teenagers, you might expect such results, although not necessarily this large.  But we're talking about people in their prime earning years too, twenty- to forty-year-olds.

 

Mounk ascribes most of these disastrous changes to influences traceable to the Internet, and specifically, social media.  He contrasts the ballyhoo and wild optimism that greeted various Internet-based developments such as online dating and worldwide free phone and Zoom calls with the reality of cyberbullying, trolling, cancel culture, and the mob psychology on steroids that the Internet provides fertile soil for. 

 

Now for the emulsion part.  An emulsion takes something that tends to keep its integrity, such as a blob of oil in water or the natural lens of an eye, and breaks it up into individual pieces that are surrounded by a foreign agent.  Oil doesn't naturally mix with water, but when an emulsifier is used (the lecithin in egg yolk, in the case of mayonnaise), it reduces surface tension and breaks the oil up into tiny droplets, each surrounded by water.

 

That's fine in the case of mayonnaise.  But in the case of a society, surrounding each individual with a foreign film of Internet-mediated software that passes through firms interested not primarily in the good of society, but in making a profit, all kinds of pernicious effects can happen.

 

There is nothing intrinsically wrong with making money, so this is not a diatribe against big tech as such.  But in the case of cigarettes, when a popular habit that made the tobacco companies rich was shown to have hidden dangers, it took a lot of political will and persistence to change things so that at least the dangers were known to anyone who picks up a pack of cigarettes.

 

Mounk thinks it may be too late to do much about the social and psychological harms caused by the Internet, but we are still at the early stage of adoption when it comes to generative artificial intelligence (AI).  I tend not to make such a sharp distinction between the way the Internet is currently used and what difference the widespread deployment of free software such as ChatGPT will make.  For decades, the tech companies have been using what amounts to AI systems to addict people to their social media services and to profit from political polarization.  So as AI becomes more commonplace, it will be a change only in degree, not necessarily in kind.

 

AI or no, we have had plenty of time already to see the pernicious results among young people of interacting with other humans mainly through the mediation of mobile phones.  It's not good.  Just as man does not live by bread alone, people aren't intended to interact by smartphone alone.  If they do, they get less conscientious, more neurotic, more isolated and lonely, and more easily distracted and error-prone.  They also find it increasingly difficult to follow any line of reasoning of more than one step.

 

Several states have recently passed laws restricting the use of smartphones in K-12 education.  This is a controversial but ultimately beneficial step in the right direction, although it will take a while to see how seriously individual school districts take it and whether it makes much of a difference in how young people think and act.  For those of you who believe in the devil, I'm pretty sure he is delighted to see that society is breaking up into isolated individuals who can communicate only through the foreign agent of the Internet, rather than being fully present—physically, emotionally, and spiritually—to the Other. 

 

Perhaps warnings like these will help us realize how bad things have become, and what we need to do to stop them from getting any worse.  In the meantime, enjoy your mayonnaise.

 

Sources:  Yascha Mounk's article "How We Got the Internet All Wrong" appeared in The Dispatch on Aug. 12, 2025 at https://thedispatch.com/article/social-media-children-dating-neurotic/.  I also referred to the Understanding America survey on which John Burn-Murdoch's analysis was based, at https://uasdata.usc.edu/index.php.

Monday, August 11, 2025

"Winter's Tale" and the Spirit of Engineering

 

Once in a great while I will review a book in this space that I think is worth paying attention to if one is interested in engineering ethics.  Winter's Tale by Mark Helprin is a novel, published in 1983, and even now I can't say exactly why I think it should be more widely known among engineers and those interested in engineering.  But it should be.

 

Every profession has a spirit:  a bundle of intuitive and largely emotional feelings that go along with the objective knowledge and actions that constitute the profession.  Among many other things, Winter's Tale captures the spirit of engineering better than any other work of fiction I know.  And for that reason alone, it deserves praise.

 

The book is hard to describe.  There are some incontestable facts about it, so I'll start with those.  It is set mainly in New York City, with excursions to an imaginary upstate region called Lake of the Coheeries, and side trips to San Francisco.  It is not a realistic novel, in the sense that some characters in it live longer than normal lifespans, and various other meta-realistic things happen.  There are more characters in it than you'd find in a typical nineteenth-century Russian novel.  There is no single plot, but instead a complex tapestry that dashes back and forth in time like a squirrel crossing a street. 

 

But all these matters are secondary.  The novel's chief virtue is the creation of an atmosphere of hope:  not optimism, exactly (some truly terrible things happen to people in it), but a temperate yet powerful energy and drive shared by nearly all the characters, except for a few completely evil ones.  And even the evil ones are interesting.

 

The fertility of Helprin's imagination is astounding, as he creates technical terms, flora and fauna, and other things that are, strictly speaking, imaginary yet somehow make sense within the story.  One of the many recurring elements in the book is the appearance of a "cloud wall" which seems to be a kind of matrix of creation and time travel.  Here is how Virginia, one of the principal characters, describes it to her son Martin:

 

           ". . . It swirls around the city in uneven cusps, sometimes dropping down like a tornado to spirit people away or deposit them there, sometimes opening white roads from the city, and sometimes resting out at sea while connections are made with other places.  It is a benevolent storm, a place of refuge, the neutral flow in which we float.  We wonder if there is anything beyond it, and we think that perhaps there is."

           "Why?" Martin asked from within the covers.

            "Because," said Virginia, "in those rare times when all things coalesce to serve beauty, symmetry, and justice, it becomes the color of gold—warm and smiling, as if God were reminded of the perfection and complexity of what He had long ago set to spinning, and long ago forgotten."

 

The whole novel is like that.

 

Although there is no preaching, no doctrine expounded, and very few explicitly religious characters such as ordained ministers, a thread of holiness, or at least awareness of life beyond this one, runs throughout the book.  This is probably why I learned about it from a recommendation by the Catholic philosopher Peter Kreeft, who mentioned it in Doors in the Walls of the World.

 

The reason engineers might benefit from reading it is that machines and other engineered structures—steam engines, cranes, bridges, locomotives—and those who design, build, and tend them, are portrayed in a way that is both appealing and transcendent.  At this moment I feel a frustration stemming from my inability to express what is so attractive about this book. 

 

You may learn something from the fact that the reviews of it I could find fell into two camps.  One camp loved it and wished it would go on forever.  The other camp, of which I turned out to be a member, said that after a while they found the book annoying, and almost didn't finish it.  I think one reason for the latter reaction is that structurally, it is all trees and very little forest.

 

The very fertility of Helprin's imagination leads him to introduce novel and fascinating creations, incidents, and characters every page or two, and the result is a loss of coherence in the overall story and sequence of events.  A chart of every character and incident with lines drawn among them would look like the wiring diagram of a Boeing 747. 

 

But every time I said to myself that I was going to stop reading it, I picked it up again, and finally chose one free day to finish the thing, all the time hoping that it would get to the point.  There is no crashing finale in which everything is tied up neatly with a bow.  There is, however, a climax of sorts, and toward the end events occur which have parallels in the New Testament.  Farther than that I shouldn't go, for fear of spoiling the ending for anyone who wants to read it. 

 

The only other novel I can think of that bears even a faint resemblance to Winter's Tale is G. K. Chesterton's The Man Who Was Thursday.  It is also a fantasy in the sense that unrealistic things happen, and it features characters who are what Kreeft calls archetypes, embodied representations of ideas.  Not everyone likes or can even make sense of Chesterton's novel, and the same will undoubtedly be true of Winter's Tale.

 

For a fantasy, Helprin's book is rather earthy in spots, and for that reason I wouldn't recommend it for children.  But the earthiness is not gratuitous, and rounds out the realism of his character portrayals.  Many of the main actors behave courageously and even nobly, and would be good subjects for the exemplary mode of engineering ethics, in which one describes how engineering went right in a particular case with ethical implications. 

 

If you pick up the book, you will know in the first few pages whether you can stand to read the rest.  If you persist till the end, you will have experienced a world unlike our own in some ways, but very like what it could be if we heeded, in Lincoln's phrase, the better angels of our nature. 

 

Sources:  Winter's Tale was published in 1983 by Harcourt Brace Jovanovich.  Peter Kreeft's Doors in the Walls of the World was published in 2018 by Ignatius Press.

Monday, August 04, 2025

Should We Worry About Teens Befriending AI Companions?

A recent survey-based study by Common Sense Media shows that a substantial minority of the teenagers surveyed use AI "companions" for social interaction and relationships.  In a survey of over a thousand young people aged 13 to 17 last April and May, the researchers found that 33% used applications such as ChatGPT, Character.AI, or Replika for things like conversation, role-playing, emotional support, or just as a friend.  Another 43% of those surveyed used AI as a "tool or program," and about a third reported no use of AI at all.

 

Perhaps more troubling than the percentages were some comments made by teens who were interviewed in an Associated Press report on the survey.  An 18-year-old named Ganesh Nair said, "When you're talking to AI, you are always right.  You're always interesting.  You are always emotionally justified."

           

The researchers also found that teens were more sophisticated than you might think about the reliability of AI and the wisdom of using it as a substitute for "meat" friends.  Half of those surveyed said they do not trust advice given to them by AI, although the younger teens tended to be more trusting.  And two-thirds said that their interactions with AI were less satisfying than those with real-life friends, but one-third said they were either about the same or better.  And four out of five teens spend more time with real friends than with AI.

 

The picture that emerges from the survey itself, as opposed to somewhat hyped news reports, is one of curiosity, cautious use, and skepticism.  However, there may be a small number of teens who either turn to AI as a more trusted interlocutor than live friends, or develop unhealthy dependencies of various kinds with AI chatbots. 

 

At present, we are witnessing an uncontrolled experiment in how young people deal with AI companions.  The firms backing these systems with their multibillion-dollar server farms and sophisticated software are motivated to engage young people especially, as habits developed before age 20 or so tend to stay with us for a lifetime.  It's hard to picture a teenager messaging ChatGPT to "grow old along with me," but it may be happening somewhere.

 

I once knew a woman in New England who kept a life-size cloth doll in her house, made to resemble a former husband.  Most people would regard this as a little peculiar.  But what difference is there between that sort of thing and spending time in online chats with a piece of software that simulates a caring and sympathetic friend?  The interaction with AI is more private, at least until somebody hacks the system.  But why does the notion of teenagers who spend time chatting with Character.AI as though it were a real person bother us?

 

By saying "us," I implicitly separate myself from teens who do this sort of thing.  But there are teens who realize the dangers of AI overuse or misuse, and older teens especially expressed concerns to the AP reporter that too much socializing with chatbots could be bad. 

 

The same teen quoted above got "spooked" about AI companions when he learned that a friend of his used his companion to compose a Dear Jill message to his girlfriend of two years when he decided to break up.  I suppose that is not much different than a nineteenth-century swain paging through a tome entitled "Letters for All Occasions," although I doubt that even the Victorians were that thorough in providing examples for the troubled ex-suitor. 

 

Lurking in the background of all this is a very old theological principle:  idolatry.  An idol is anything less than God that we treat as God, in the sense of resorting to it for help instead of God.  For those who don't believe in God, idolatry would seem to be an empty concept.  But even atheists can see the effects of idolatry in extreme cases, even if they don't acknowledge the God who should be worshipped instead of the idol.

 

For a teen in a radically dysfunctional household, turning to an AI companion might be a good alternative, but a kind, loving human being would always be better.  Kind, loving human beings aren't always available, though, and so perhaps an AI companion would suffice in a pinch like a "donut" spare tire until you can get the flat fixed.  But you shouldn't drive on a temporary tire indefinitely, and teens who make AI companions a regular and significant part of their social lives are probably headed for problems.

 

What kind of problems?  Dependency, for one thing.  The AI firms are not promoting their companions out of the kindness of their collective hearts, and the more people rely on their products, the more money they make.  The researchers who conducted the survey are concerned that teens who use AI companions that never argue, never disagree, and validate everything they say will be ill-prepared for the real world, where other humans have their own priorities, interests, and desires.

 

In an ideal world, every teen would have a loving mother and father they would trust with their deepest concerns, and perhaps friends as well who would give them good advice.  Not many of us grew up in that ideal world, however, and so perhaps teens in really awful situations may find some genuine solace in turning to AI companions rather than humans.

 

The big news of this survey is the fact that use of AI companions among teens is so widespread, though still in the minority.  The next thing to do is to focus on those small numbers of teens for which AI companions are not simply something fun to play with, but form a deep and significant part of their emotional lives.  These are the teens we should be the most concerned about, and finding out why they get so wrapped up with AI companions and what needs the companions satisfy will take us a long way toward understanding this new potential threat to the well-being of teenagers, who are the future of our society.

 

Sources:  The AP article "Teens say they are turning to AI for friendship" appears on the AP website at https://apnews.com/article/ai-companion-generative-teens-mental-health-9ce59a2b250f3bd0187a717ffa2ad21f, and the Common Sense Media survey on which it was based is at https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf.