Monday, February 08, 2016

Twitter and Terrorism


On Feb. 5, the short-message Internet firm Twitter announced that since the middle of 2015, it has suspended 125,000 accounts because they appeared to be promoting terrorism or similar extremist activities.  While Twitter has long maintained rules against such content in tweets, this is the first time it has made public a specific number of account suspensions connected with terrorism.  This move, and the problem Twitter is trying to deal with, raise important questions about the ethics of communications technologies and the way private organizations have displaced national laws as arbiters of free speech.

Historically, communications systems have rarely arisen in discussions of engineering ethics.  For example, I doubt that in the 1950s the Society of Motion Picture and Television Engineers debated the question of screenwriters who were blacklisted during the McCarthy-era communism scare.  The question of a medium's content was seen as almost totally distinct from the technology and engineering it used.

But gradually that has changed as technical, managerial, and censorship roles have morphed and merged in the strange new cyberspace world of spam, viruses, and tweets.  The problem Twitter faces, of groups such as ISIS using Internet services to promote and coordinate terrorist activities, is real.  Syed Rizwan Farook and his wife Tashfeen Malik apparently drew much of their inspiration for the attack in San Bernardino, California, from Internet sites promoting jihad.  Their December 2015 attack killed fourteen and wounded twenty-two.  Even messages limited to 140 characters can be used to recruit and coordinate such activities, although there is no evidence that Twitter was involved in that particular incident.

Nevertheless, Twitter, with only 3,900 employees, faces the daunting task of enforcing its Twitter Rules on some 300 million active users every day.  Clearly, much of this task involves technology to sift through the millions of messages pouring through Twitter's servers.  It also involves the cooperation of groups concerned about terrorism, with which Twitter has teamed to find and suspend violators of its rule against the promotion of terrorism.  But it also involves fundamental questions of free speech—questions that used to be debated mainly in the halls of legislatures and courts of law, not in the cubicles of software engineers.  Increasingly, it's the engineers—or people who work closely with them—making the on-the-ground decisions about who gets to tweet and who gets their beaks clamped shut.

Twitter's decision to go public with a specific number of account closures appears designed to send a message to those who would use the service for nefarious purposes.  It also raises the company's standing in the eyes of those who are worried about misuse of the Internet for terrorist activities.  And it emphasizes the magnitude of the problem.  Suspending accounts can be compared to a medical test for a serious ailment.  If you get too many false positives, you'll be bothering healthy people with a diagnosis that later has to be reversed.  But if you get too many false negatives, you let people with a serious disease slip through without treatment, possibly leading to worse outcomes later on.  So the challenge for Twitter is to find the accounts that are being used to promote terrorism in some way and suspend only those, without cutting off people who are not trying to make trouble.
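The false-positive/false-negative tradeoff can be made concrete with a little arithmetic.  The numbers below are entirely hypothetical (Twitter has published nothing of the kind), but they illustrate the base-rate problem: when genuinely bad accounts are a tiny fraction of hundreds of millions of users, even a very accurate screening system can flag more innocent accounts than guilty ones.

```python
# Illustrative only: all figures are hypothetical, not Twitter data.
# Suppose 0.05% of 300 million accounts actually promote terrorism,
# and a screening system is 99% sensitive and 99.9% specific.
total_accounts = 300_000_000
bad_fraction = 0.0005          # hypothetical prevalence of bad accounts
sensitivity = 0.99             # fraction of bad accounts correctly flagged
specificity = 0.999            # fraction of good accounts correctly cleared

bad = total_accounts * bad_fraction
good = total_accounts - bad

true_positives = bad * sensitivity           # bad accounts caught
false_negatives = bad - true_positives       # bad accounts that slip through
false_positives = good * (1 - specificity)   # innocent accounts wrongly flagged

print(f"flagged in error: {false_positives:,.0f}")
print(f"missed entirely:  {false_negatives:,.0f}")
```

With these invented figures, the system misses only 1,500 bad accounts but wrongly flags nearly 300,000 innocent ones, about twice the number of genuine offenders it catches, which is one reason automated screening has to be backed up by human review.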

From a free-speech point of view, these suspensions could be viewed as censorship.  But even the courts recognize that free speech has limits—the classic example being the lack of a right to falsely yell "Fire!" in a crowded theater.  So Twitter's actions are justifiable on that basis in cases where the possible harm to others in the form of terrorist activity appears to outweigh the value of preserving free speech for all Twitter account holders.

This is not a critique of Twitter, by any means.  They appear to be taking responsibility for a hard job and doing it as well as they can.  Looming in the background, of course, is the possibility that if a family of someone killed in a terrorist attack discovers that Twitter accounts were involved in planning the attack, the firm might get sued.  While I'm not aware of any such suits, such possibilities always have to be considered when you are dealing with a large-scale operation involving millions of people. 

But I think the most notable thing about this situation is the way that the practical basis of free speech, in this case anyway, has spread from the legal system to international private firms, where the parties are mostly anonymous users, largely invisible software engineers, and company policy makers, in cooperation with various outside agencies, all of them selected by Twitter.  The legal system hasn't entirely lost its influence, in that companies such as Twitter are still responsive to sustained large-scale legal challenges.  But in the wild-West environment of the Internet, such challenges are unusual and often politically inspired.  Preventing terrorism is a pretty uncontroversial position politically, and so Twitter doesn't seem too worried that it will get sued by a coalition of terrorist groups for what it's doing to their accounts.  Terrorists have other ways of settling such disputes, and I hope they don't use them.

It's a shame that evildoers have bent the Internet to their will to the extent that firms like Twitter have to spend a lot of time and effort whacking moles, which in many cases pop up again right away, either on Twitter or on other more private Internet communications setups.  But doing nothing would be irresponsible.  The knowledge that such suspensions can happen is what makes most Twitter users behave, not so much the actual suspensions, just as the knowledge that one is liable to get a speeding ticket makes most people obey speed-limit signs whether or not there is an actual traffic cop in sight.  Kudos to Twitter for kicking suspected terrorists off the telephone wires, so to speak, and let's hope that their very public stance against such things forces terrorists into corners of the Internet where it is harder to recruit people to their cause.

By the way, I have begun to do a weekly tweet summarizing each blog post.  My Twitter handle is @karldstephan, in case you want to follow me there.

Sources:  The New York Times carried an article by Mike Isaac entitled "Twitter Steps Up Efforts to Thwart Terrorists’ Tweets" on Feb. 5, 2016 at http://www.nytimes.com/2016/02/06/technology/twitter-account-suspensions-terrorism.html.  I also referred to the Twitter announcement of the 125,000 suspensions at https://blog.twitter.com/2016/combating-violent-extremism, the Twitter Rules at https://support.twitter.com/articles/18311#, and the Wikipedia article on Twitter.

Monday, February 01, 2016

Dereliction of Duty: The Flint Water Crisis


When a city operates a public water-supply system, it enters into an implied agreement with its customers, most of whom have no realistic second choice as to where to get domestic water.  Customers buy water from the city, and the city guarantees that the water is safe to drink. 

Starting in April of 2014, the city of Flint, Michigan, began to violate its part of the bargain.  In 2011, the impoverished tax base of Flint, long past its glory days when the U. S. automotive industry was king, forced the city under state-appointed emergency management.  A dispute with the city of Detroit, from which Flint had purchased its water for decades, led to an attempt by the state-appointed emergency manager of Flint to save money by switching to a backup source of water, the Flint River.

The Flint River's water itself was safe to drink, but it was more acidic and more saline than the treated Detroit water.  During the spring and summer of 2014, residents of Flint, especially those in older homes, began to notice that the water had an odd taste.  Up to about 1920, most service connections to the water mains were made with lead pipes.  While lead is a well-known toxin that is especially hazardous to unborn children and to children under six, the Detroit water supplied to Flint before April 2014 had coated the inside of the pipes with an inert phosphate or oxide layer that usually kept the levels of lead small enough not to cause problems.

However, the acidic and saline Flint water began etching away the mineral coating in the lead pipes to expose bare lead to the water going through the pipes, and levels of lead in water supplied to Flint homes began to rise.  In March of 2015, a private water-infrastructure firm called Veolia issued a report saying that lead in Flint water was in violation of EPA regulations.  While it was tragic that increased levels of lead got to Flint's citizens at all, something worse was about to happen.

In response to the Veolia report, the state-appointed financial manager of Flint, Jerry Ambrose, said that the city water was in compliance with all EPA and Michigan Department of Environmental Quality standards, and "the city is working daily to improve its quality."  This statement may have been technically true in the sense that the water Flint was putting into the mains was safe by itself.  But once it passed through resident-owned lead pipes, the combination was dangerous.  In September of 2015, a study by a Virginia Tech professor revealed that the levels of lead in tap water were higher than federal regulations allowed in about a quarter of Flint's households, and were up to 800 times the limit in some locations.
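For scale: the federal action level for lead in drinking water, set by the EPA's Lead and Copper Rule, is 15 parts per billion (ppb).  The sample readings in the sketch below are invented, not actual Flint measurements, but the conversion shows what "800 times the limit" means in absolute terms.

```python
# EPA Lead and Copper Rule action level for lead in tap water, in ppb.
ACTION_LEVEL_PPB = 15

def times_over_limit(reading_ppb):
    """Express a lead reading as a multiple of the EPA action level."""
    return reading_ppb / ACTION_LEVEL_PPB

# Hypothetical sample readings in parts per billion (ppb):
for reading in (5, 27, 158, 12_000):
    print(f"{reading:>6} ppb -> {times_over_limit(reading):,.1f}x the action level")
```

A reading 800 times the action level works out to 12,000 ppb, a concentration far beyond anything a well-maintained distribution system delivers.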

Finally, in October of 2015, under pressure from state and local groups, Flint switched back to buying water from Detroit.  But the damage had already been done:  the mineral coatings that had built up over many decades in the lead pipes were now mostly gone, and just going back to water with less acidity and salinity wasn't going to fix the problem.

The story since then has been one of complex political wrangling that has tainted Michigan Governor Rick Snyder, who has approved over $30 million of state aid for the crisis.  The crisis is still ongoing in the form of lawsuits, emergency orders, water testing, and questions from residents about what harm has befallen their children and what they should do next.

As so often happens, the people most affected by this crisis are the ones least able to do something about it:  the babies of pregnant women who drank lead-laced water, and children who may still be ingesting lead from a place that ought to be safe to drink from, namely, the water faucets in their own homes.

When lawsuits come into play it isn't always easy to get to the bottom of a situation and find out exactly who knew what, and when.  To my knowledge, tap-water tests inside residents' homes are not routinely done by municipal water departments, but testing for lead in one's drinking water is not something that it is reasonable to expect private individuals to do—especially not those below the poverty line, which describes many of Flint's residents.  Engineers in the Flint water department should have known (and may well have known) that the combination of acidic Flint River water and lead pipes in old infrastructure was going to lead to problems.  But even after the trouble was widely known by the public and verified by independent tests, the financial manager of Flint apparently remained in denial.  Admittedly, being in bankruptcy makes things more complicated for a municipality, but the physical safety of citizens should override fiscal considerations.

The Flint water crisis is an object lesson in how not to handle a public-health problem, especially one that was caused, at least indirectly, by actions of the city itself.  Despite abundant evidence that there was a problem, city officials delayed remedial action for another six to nine months.  This will probably dig the city even deeper into its financial hole after lawsuit judgments come due, and shows how important prompt, definitive action can be, and how much trouble can result if it is delayed.

Ideally, every bit of lead pipe in the city of Flint should be dug up and replaced with non-toxic service pipes.  But that would cost several thousand dollars per household in a city that is already reeling from decades of economic decline.  The United Way and other charitable organizations have gotten involved, but their efforts are limited to checking on current lead levels and alleviating possible medical consequences of ingesting lead during the worst of the crisis.  The fallout from this incident will haunt Flint for years, and I can only hope that the awareness of lead-contaminated drinking water brought into prominence by this situation will lead other cities with similar problems to get their own lead-in-water issues in order.  Sometimes a blunt colloquialism is the best way to express things:  "Get the lead out!"

Sources:  I referred to the Wikipedia article "Flint water crisis," an article in the Detroit Free Press online edition of Jan. 30, 2016 at http://www.freep.com/story/news/local/michigan/flint-water-crisis/2016/01/29/epa-high-lead-levels-flint-exceed-filters-ability/79540740/ entitled, "EPA:  High lead levels in Flint exceed filters' rating," and a Massachusetts Water Resources Authority online report's executive summary about the effects of acidic water on lead pipes at http://www.waterrf.org/ExecutiveSummaryLibrary/4064_Executive_Summary.pdf.  I also referred to an Associated Press article, "$28 million added to address water crisis in Flint," by Jeff Karoub and David Eggert carried in the Austin American-Statesman print edition of Jan. 30, 2016.

Monday, January 25, 2016

Personalized Engineering and the Vision of Christopher Alexander


One way to encourage ethical engineering is to talk about moral exemplars—people who have faced a challenging ethical situation and dealt with it in a remarkable and positive way.  The moral exemplar I'd like to introduce to you today is someone you have probably never heard of—Christopher Alexander.  He's not even an engineer in the conventional sense.  But he has devoted his career to a vision that I think engineers ought to know about, at least, and perhaps can apply in hitherto un-thought-of ways.

Alexander is emeritus professor of architecture at the University of California at Berkeley.  His undergraduate education at Cambridge University was in chemistry and physics, but then he went on to receive the first Ph. D. in architecture awarded by Harvard University. 

I can perhaps describe his unique achievements by setting up a contrast between how architecture is usually done in modern industrial countries, and what Alexander does.  Most buildings that people live in and work in these days in the U. S. are products of a mass-production philosophy whose criteria are efficiency, profit, conformance to building codes, and free-market forces that favor economies of scale over small, individualized efforts.  For example, the town I live in—San Marcos, Texas—has broken out over the past decade in literally dozens of mass-produced apartment complexes.  Some of these are better to look at than others, but one glance at them tells you they were designed by some anonymous committee in Atlanta or Pittsburgh and plopped down here in Central Texas with the main goal of maximizing return on capital invested.  The fact that people spend parts of their lives in these things is almost an afterthought, at least in some cases.

Here is how Alexander would design an apartment building, as he describes in his book The Timeless Way of Building.  First, he gathers not other architects, or building inspectors, or structural engineers, but the people who are actually going to live in the building.  He spends a lot of time with them, and familiarizes them with a special set of phrases that he calls "pattern languages."  A lifetime of study has enabled him to describe the complex of interactions between people and the built environment in a rigorous yet understandable way that brings architectural design within the grasp of the ordinary people who will use the buildings—who, incidentally, were the ones who designed most buildings before architecture became an independent profession. 

Once the future occupants understand how pattern language is used to describe a design, Alexander takes them to the actual building site and asks questions—lots of questions.  Where will the entrance be?  What should we do with these trees?  Which way does the light fall at various times of year?  And in a process that takes days, rather than weeks or months, he stakes out locations on the ground where different structures will rise up organically, in response to a reasoned and thoughtful discussion about the needs and feelings of those who will live in the apartments.  Ideally, this back-and-forth discussion using pattern languages continues during the construction of the building as well, down to details such as ceiling heights and doorknobs. 

The result, according to Alexander, is a building that lives.  Most people have had the experience of visiting a special place that stayed in their memory as (in Alexander's carefully chosen words, none of which does the job completely) alive, whole, comfortable, free, exact, egoless, and eternal.  For one person, it might be a certain bench in a park—for another, a cathedral.  He claims that his pattern languages can capture those aspects of special good places that make them that way, and his process allows people—ordinary people, not just architects—to express their thoughts in a way that allows more good places to be built:  places that can grow organically like trees even after they are nominally finished.

What has this got to do with engineering?  Surprisingly, more than you might think.  Some of Alexander's pattern-language ideas found applications in computer science, where they inspired the influential "design patterns" movement in software engineering.  But I think every engineer, not just software engineers, could benefit from a knowledge of Alexander's philosophy and approach.

What Alexander is trying to do is to humanize architecture, reversing a trend that has roots in the industrial revolution of the 1800s.  Modernist architect Le Corbusier's famous description of a house as a "machine for living in" expresses this trend, whose underlying philosophy is behind nearly all modern technological developments.  What is the entrepreneurial dream of today?  To come up with a single concept—Google, Facebook, self-driving cars—that billions of people want and are willing to conform their lives to.  Most of the time, these developments simply ignore or displace existing social and cultural structures and impose a bland, uniform modern appearance everywhere they go—like seeing McDonald's golden arches in Paris, London, and Tokyo. 

I wonder if it is possible to humanize engineering the way Alexander has humanized architecture.  This would involve bucking a million trends and starting small, and probably staying small, too.  Certain attempts of charitable organizations to fit engineering to indigenous needs make efforts in this direction, so it's not like nobody at all is trying.  But by the nature of things, such anti-establishment attempts will not attract a lot of money or attention.  That doesn't mean they are not worth doing.  But it does mean those who try them will probably be misunderstood and lonely, and may not be able to succeed against the incredible pressures to conform to the modernist paradigms. 

Alexander came to my attention through an essay he published in First Things, a journal of religion and public life.  He is a practicing Catholic and in the essay he says that "the sacredness of the physical world—and the potential of the physical world for sacredness—provides a powerful and surprising path towards understanding the existence of God, whatever God may be, as a necessary part of the reality of the universe."  And to those of us who believe in God as the ground of all being, systems which work under the assumption that God doesn't exist are fatally flawed, though the flaws may not become evident right away.  Maybe doing engineering the way Alexander does architecture could teach us something equally profound about engineering.

Sources:  Christopher Alexander's essay "Making the Garden" appeared in the February 2016 issue of First Things, pp. 23-28.  A good introduction to his work is his The Timeless Way of Building (Oxford Univ. Press, 1979).

Monday, January 18, 2016

Earthquake Prediction Goes Commercial — Sort Of


Everybody's now used to seeing weather maps with "past" and "future" buttons on them, allowing you to see what the weather is likely to be a day or two ahead of time.  Did you know there is at least one company that is now publishing a similar map of the world that depicts regions that may shortly experience earthquakes?  QuakeFinder, which calls itself a "humanitarian R&D project" of a parent firm named Stellar Solutions, has a Public Data Center page where they put little red dots in regions that have experienced a change in electromagnetic activity, which (according to QuakeFinder) has been correlated with future earthquakes.  I don't know how much traffic their site attracts, and so far I haven't seen any red dots show up, but I just found out about the site today. 

QuakeFinder bases their predictions on three types of data:  (1) ultra-low-frequency (ULF) magnetic fields, (2) concentration of ions in the air, and (3) emission of infrared radiation as monitored by satellites.  A number of studies over the last few decades have turned up situations in which disturbances in all three quantities have preceded medium to large earthquakes in many locations.  Of course, it's a long stretch between noticing some correlations and using data to make specific predictions about earthquakes.  But at least two organizations—QuakeFinder and another outfit called GeoCosmo—seem to think that there's enough data to start estimating the timing, location, and size of future earthquakes.
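QuakeFinder has not published the algorithm behind its red dots, so any code here is only a guess at the general approach.  A common first step with multi-sensor precursor data like this is anomaly scoring: compare each channel's current reading with its own historical baseline, and flag a site only when several independent channels deviate at once.  Every number, threshold, and channel name below is invented for illustration.

```python
import statistics

def z_score(history, current):
    """How many standard deviations the current reading sits from its baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return (current - mean) / stdev

def anomaly_flag(channels, threshold=3.0):
    """Flag a site only if every channel is simultaneously anomalous.

    channels: dict mapping channel name -> (history list, current reading).
    """
    return all(abs(z_score(hist, now)) >= threshold
               for hist, now in channels.values())

# Invented readings for one hypothetical monitoring site:
site = {
    "ulf_magnetic_nT": ([2.1, 2.0, 2.2, 1.9, 2.0, 2.1], 3.4),
    "air_ion_count":   ([410, 395, 420, 405, 400, 415], 610),
    "infrared_index":  ([0.50, 0.52, 0.49, 0.51, 0.50, 0.48], 0.71),
}
print("red dot" if anomaly_flag(site) else "no anomaly")
```

Requiring all three channels to deviate at once is one way to keep false alarms down; a single noisy sensor then can't trigger a red dot by itself.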

I will leave the question of whether QuakeFinder's predictions are accurate aside for the moment, and turn to what might be an even more vexing issue:  once you have a way of predicting earthquakes with some degree of precision, what should you do with it?

A lot depends on the level of false positives (times you say there will be a quake and nothing, or almost nothing, happens) and false negatives (times you miss making a prediction and an earthquake catches you by surprise).  Let's say for the sake of argument that the system does as well at predicting earthquakes as today's weather forecasters do at predicting tornado activity.  I don't have exact statistics on hand at the moment, but my sense is that the great majority of the time when a region is in a tornado watch, some violent weather occurs—either a tornado or high winds that can cause as much damage as a small tornado.  And the weather prophets very rarely get caught napping nowadays by failing to predict violent weather, although there are times when a storm becomes a lot worse than forecasts predicted.
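Meteorologists quantify exactly this tradeoff with standard forecast-verification metrics, chiefly the probability of detection (POD) and the false alarm ratio (FAR), computed from a two-by-two table of forecasts versus outcomes.  The counts below are invented, not real tornado or earthquake statistics, but they show how the two numbers pull against each other.

```python
# Standard forecast-verification metrics from a 2x2 contingency table.
# These counts are invented for illustration only.
hits = 45           # event forecast, and the event occurred
misses = 5          # event occurred, but was not forecast
false_alarms = 30   # event forecast, but nothing happened

pod = hits / (hits + misses)                 # probability of detection
far = false_alarms / (hits + false_alarms)   # false alarm ratio

print(f"POD = {pod:.2f}, FAR = {far:.2f}")
```

A forecaster can push POD toward 1.0 simply by issuing warnings constantly, but only at the cost of a soaring FAR; any credible earthquake-prediction service would have to be judged on both numbers at once.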

At one extreme, it would be the height of moral irresponsibility to know that a major earthquake is going to hit a populated area (where "know" means, say, an 80% chance), and not do anything to let the affected people take precautions.  So the development of a truly reliable earthquake prediction system carries with it the moral obligation to share the information in some form with the general public.

On the other hand, what sorts of precautions should be taken if earthquake prediction becomes a reality?  I can imagine different degrees of preparedness for different groups.  First responders and emergency services would take such predictions most seriously by increasing reserve staffing and supplies and heightening their readiness for a crisis.  People in structures that are known to be especially vulnerable to earthquake damage might consider just staying away for a few days.  Depending on how far in advance a quake could be predicted, this could be a problem. 

It's not clear yet whether earthquake prediction will share with tornado prediction the characteristic that shorter time spans mean more accurate predictions.  If a weather radar shows a tornado two miles west of you heading east at thirty miles an hour, it's pretty easy to say you'll be in big trouble in about four minutes.  It's possible that the best earthquake predictions may never provide time windows narrower than many hours or even days.  Making people stay home or in earthquake-resistant shelters for several days is simply not going to fly, so a lot will depend on how chronologically precise the predictions can be made. 
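The four-minute figure in the tornado example is just distance divided by speed, and the same arithmetic sets the outer bound on the warning time for any approaching hazard.

```python
def lead_time_minutes(distance_miles, speed_mph):
    """Warning lead time for a hazard approaching at constant speed."""
    return distance_miles / speed_mph * 60

# The tornado example above: two miles away, moving at 30 mph.
print(lead_time_minutes(2, 30), "minutes")
```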

Another important question is, who's going to pay?  When scientific prediction of weather first became possible in the late 1800s, the economic and military advantages of doing so were so obvious that most national governments established weather bureaus or the equivalent, and for many years government weather prediction was the only show in town.  The observation end of weather forecasting—all those weather stations, weather satellites, and people keeping records for decades—is still expensive, and borne largely by government agencies, but a large number of private weather-forecasting firms now take government data and use it for both public predictions through the media and specialized predictions through commercial transactions.

So far, the model used by QuakeFinder is a non-profit one, although the line dividing a non-profit organization from a commercial operation is not always easy to draw.  QuakeFinder does apparently have "subscribers" who presumably get customized data.  Weather bureaus and weather forecasting prospered because their forecasts were accurate enough to be valuable, and we can expect earthquake forecasting to be held to a similar standard.  On its website, QuakeFinder claims to have predicted a couple of Peruvian earthquakes, a claim confirmed indirectly by contemporary news reports citing the involvement of a "California company" (presumably QuakeFinder) in a prediction by a Peruvian scientist of two medium-size earthquakes in Peru in April of 2013.

But just as two swallows don't make a summer, two predictions don't make a successful prediction system.  Large segments of the scientific community remain unconvinced that earthquake prediction is anything more than a slightly informed guess.  According to some sources (including journalist Alberto Enriquez), one of the biggest wet blankets on earthquake prediction is the United States Geological Survey (USGS).  Apparently back in the 1980s, this agency received extra funding to develop earthquake predictions, and they got burned when their forecast of a major earthquake (again in Peru) failed to materialize in 1981.  Ever since, according to Enriquez, they have been critical of earthquake prediction and have made it hard for researchers to publish in this area or to receive funding.

But other agencies such as the National Aeronautics and Space Administration (NASA) are supporting the work of researchers such as Friedemann Freund, who has been mentioned previously in this space as the developer of a theory (confirmed by experiments) that stressed rocks can produce large electric and magnetic fields when mobile charge carriers he calls "p-holes" arise in them.  Freund is one of the founders of GeoCosmo, which focuses on earthquake prediction studies.

The nice thing about private enterprise is that it's self-limiting.  If QuakeFinder or GeoCosmo gets it right often enough, people will start paying attention.  Let's hope they can figure out how to do it and get taken seriously enough to save some lives before the next big quake hits.

Sources:  I thank Alberto Enriquez for drawing my attention to recent developments in this field through his website http://seismoem.com/blog/earthquake-forecasting-is-here-today.  QuakeFinder's website is at https://www.quakefinder.com/, and GeoCosmo's website is at geocosmo.org.  A news report on June 25, 2013 providing independent confirmation of the Peruvian earthquake prediction attempt is at http://www.peruthisweek.com/news-peruvian-geologists-may-be-able-to-predict-earthquakes-100220.  I also referred to an article by Julia Rosen carried on the website of Science, the journal of the American Association for the Advancement of Science, entitled "Can electric signals in Earth's atmosphere predict earthquakes?" at http://www.sciencemag.org/news/2015/12/can-electric-signals-earth-s-atmosphere-predict-earthquakes.  Friedemann Freund's research in "seismoelectromagnetics" (the electric and magnetic fields produced by stressed rocks) was summarized in this space in "Global Warming or Global Shaking?  A Tale of Two Theories" on Feb. 20, 2007.

Monday, January 11, 2016

Repurposing the Refrigerator


The development of consumer technology is a two-way street.  Manufacturers can't sell a product if nobody wants it, so successful consumer-product firms pay attention to what their customers are using their products for, and adapt new versions to those uses.  A good example of how this can work is on display at this year's Consumer Electronics Show in Las Vegas:  Samsung's Family Hub refrigerator.

As described by the Washington Post's Hayley Tsukayama, the Family Hub features an electronic version of the pictures and sticky notes that many of us cover the front of the refrigerator with.  It's a large touchscreen on the refrigerator door that can display a calendar, notes, photos, and I suppose anything else an Internet-enabled appliance can download.  It interfaces with a Samsung mobile-phone app, so you can easily transfer data from your phone to the front of the refrigerator.  The refrigerator also has cameras inside that let you see how much milk you have left when you're grocery shopping—no need to call home and ask somebody to look in the fridge.  Just call up your refrigerator app and take a look yourself.

How did a device whose original purpose was to preserve food become a communications center?  Will Samsung's innovation catch on?  And what difference does it make in the broader scheme of things? 

In 1996, historians of technology Ronald Kline and Trevor Pinch showed how U. S. farmers took an early-twentieth-century technology intended for one purpose—the Model T automobile—and repurposed it for a variety of other uses, ranging from plowing to running washing machines.  It's pretty safe to say that Henry Ford did not anticipate these alternative uses for his brainchild.  Kline and Pinch say this was a specific example of what is known to historians as the "social construction" of a new technology, in which users become active agents of change rather than just passively accepting what the manufacturer sells them and using it only in the way it was intended.

You could say that the refrigerator as family bulletin board is another socially constructed technology.  My grandmother had a refrigerator that must have dated back to the 1950s.  It had the old-fashioned (and dangerous) mechanical-latching door and a smooth white enameled finish.  I don't recall that she ever affixed notes or other documents to the door, but she died in 1992, just as flexible rubber-ceramic refrigerator magnets were becoming popular for holding notes and photos to the front of the door.  (Magnetic gasket seals around the door, which keep abandoned refrigerators from becoming deathtraps for small children because they can be pushed open from inside, had replaced mechanical latches decades earlier.)

Almost everyone in a household who is old enough to read is going to open the refrigerator on a regular basis.  So the refrigerator door is a logical place to put notes, photos, and other things that you want everyone to see.  For at least the last twenty or thirty years, the refrigerator-magnet calendar or business card has been a staple of promotional advertising products.  Most homes I have visited, especially if there are children involved, have had a refrigerator door festooned with a kind of graphic history and projection of the family's life and activities.  I suppose some sociologist somewhere has made a study of the kinds of things people put on their refrigerator doors, but the content isn't so important as the fact that it became a sort of custom, like the town crier in old New England.

Then came the stainless-steel refrigerator, first in high-end models, and later spreading to pretty much the entire product line.  The stainless-steel style is so dominant now that I'm not sure you can find new refrigerators with a painted or enameled steel exterior anymore.  When our fifteen-year-old refrigerator died last year, the stainless-steel models were pretty much the only choice at the hardware store we went to.  If somebody had asked me, I could have told them that stainless steel is non-magnetic, but the full impact of this didn't register until we'd stripped all the refrigerator magnets off the old unit and tried to put them on the new one.  They stick to the non-stainless sides, but not to the front.

So I welcome Samsung's attempt to bring back the repurposed refrigerator as family communications center, but I'm not sure whether a twenty-inch touchscreen is the right idea.  It all depends on the software.  Unless the Family Hub comes with its own keyboard, typing inputs is going to be a pain, as typing on a vertical surface is not that comfortable.  Of course if you have a Samsung phone, it won't be a problem.  (I don't know about the other kinds.)  A promotional video shows that you can write on the screen with your finger, but that rarely works well for more than a word or two.  Another question involves permanence.  Some of the photos we had on our old fridge were twenty years old and more.  Somehow I doubt that it's going to be easy to keep old images or other memorabilia that long on the Family Hub display.  And what about power failures?  If your emergency numbers are on the display and the display goes blank in an emergency, that's a problem.  As for the camera feature, I can see potential for hacking.  In addition to all your other passwords, you'll now need one for your refrigerator.  But these are things that can be dealt with fairly easily.

The touchscreen-enabled refrigerator shows that Samsung is thinking about how people really use their products, not just how they're supposed to use them, and acting accordingly.  If it catches on, all the other appliance makers will have to come out with their own versions, which of course will not be compatible software-wise with Samsung's.  So if you get a new refrigerator, does that mean you'll have to get a new phone to match?  I hope not.  The Family Hub may be one of those silly things that disappears without a trace.  Or it may be the first sign of something that will become as universal as mobile phones themselves.  Time and the consumer will decide.

Sources:  Hayley Tsukayama's report on the 2016 Consumer Electronics Show was carried in the Washington Post online edition on Jan. 8, 2016 at https://www.washingtonpost.com/news/the-switch/wp/2016/01/08/ces-is-known-for-having-some-crazy-gadgets-this-year-is-no-exception/.  The article "Users as Agents of Technological Change:  The Social Construction of the Automobile in the Rural United States," by Ronald Kline and Trevor Pinch appeared in the Society for the History of Technology journal  Technology & Culture, vol. 37, no. 4 (Oct. 1996), pp. 763-795.

Monday, January 04, 2016

To Blog Followers Not Using Google Accounts: Change in Google Policy


This notice is for all followers of this blog who use Twitter, Yahoo, Orkut or other "OpenID" providers to follow this blog.  On Jan. 11, Google is going to require you to have a Google account in order to follow this blog.  If you do not have a Google account and wish to continue following this blog, you need to take action before Jan. 11.  Here is the full text of the information I received on Jan. 4:

An update on Google Friend Connect

Posted: December 21, 2015
In 2011, we announced the retirement of Google Friend Connect for all non-Blogger sites. We made an exception for Blogger to give readers an easy way to follow blogs using a variety of accounts. Yet over time, we’ve seen that most people sign into Friend Connect with a Google Account. So, in an effort to streamline, in the next few weeks we’ll be making some changes that will eventually require readers to have a Google Account to sign into Friend Connect and follow blogs.

As part of this plan, starting the week of January 11, we’ll remove the ability for people with Twitter, Yahoo, Orkut or other OpenId providers to sign in to Google Friend Connect and follow blogs. At the same time, we’ll remove non-Google Account profiles so you may see a decrease in your blog follower count.

We encourage you to tell affected readers (perhaps via a blog post), that if they use a non-Google Account to follow your blog, they need to sign up for a Google Account, and re-follow your blog. With a Google Account, they’ll get blogs added to their Reading List, making it easier for them to see the latest posts and activity of the blogs they follow.

We know how important followers are to all bloggers, but we believe this change will improve the experience for both you and your readers.

Posted by Michael Goddard, Software Engineer

Sandwich Panels and the Dubai Hotel Fire


If you were watching TV on New Year's Eve, amid all the spectacular fireworks displays in cities around the world you might have also seen an unplanned spectacle:  the blaze climbing up one side of the 63-story Address Hotel in Dubai, the largest city of the United Arab Emirates (UAE).  While the Address Hotel is not the tallest skyscraper in the world (that honor goes to the Burj Khalifa, also in Dubai), it's tall enough to attract global attention as it was enveloped in flames during the night.  Amazingly, no fatalities were reported, although numerous people suffered smoke inhalation or minor injuries while the hotel was being evacuated.  The main reason for the absence of serious casualties was that the fire was confined almost entirely to the "sandwich panels" or cladding on the outside of the building.  Why they could catch fire—and why entire buildings are covered with flammable material in the first place—are topics worth pursuing.

As most people know, modern skyscrapers depend on a hidden steel skeleton for mechanical strength, not on the exterior surfaces, which can be chosen for properties other than their ability to support the building.  At first, high-rise architects stuck to the traditional stone, concrete, and brick for facades, but in the 1950s, they began to experiment with lighter-weight and cheaper materials, such as glass and aluminum.  Properly handled and mounted, aluminum makes a fine, long-lasting sheathing material, and so does glass.  Then a couple of decades ago, someone had the idea of sandwiching a few millimeters of plastic—polyethylene or some other heat-softening material—between two thin foil-like skins of aluminum, making something cheaper and lighter but just as good-looking as solid aluminum.  Thus the sandwich panel was born.

Now, most heat-softening (thermoplastic) plastics can burn very easily, and some attempts were made to introduce fire-retardant materials into the plastic core of sandwich panels to make them fire-resistant.  Apparently, these attempts did not convince U. S. building-code authorities that the new sandwich panels were safe enough to use in high-rises.  A comment on an architect's chatroom I found indicates that these types of panels are prohibited in the U. S. for use on buildings taller than about four stories.  But other countries either had no such laws, or came to realize the potential for disaster too late.

What can happen is this.  If you have a whole building encased in this stuff, and one panel near the bottom happens to catch fire somehow (fireworks seem to be a popular way to do this), you are in big trouble.  Aluminum has a low melting point, and the thin skin melts away from the plastic core as soon as the flame reaches it, exposing more plastic to air and letting the fire feed on itself.  Hot air and flames travel upward to the next panel and so on, and in the case of a 63-story building, there's plenty of upward to travel through.  This is what happened, apparently, not only to the Address Hotel, but to several other similarly-clad high-rises in Dubai and elsewhere in the last few years.  The architect-chatroom website where this problem was discussed has numerous pictures of burned-over building exteriors in Dubai, China, and elsewhere—all fires in which sandwich panels played a critical role.

Fortunately, the fires that these panels support tend to stay on the outside, and most of the time, people inside the buildings can evacuate before anyone gets killed.  But nobody wants to leave a building under duress while dodging falling pieces of burning plastic and metal on the way out.  And it's very costly to clean up the resulting mess and re-cover the structure with something that won't burn as easily next time.

Both Australia (where such a fire happened in Melbourne in 2014) and the UAE have changed their building codes to require sandwich panels to pass certain fire-retardant tests.  There are two problems with this, however.  One, it's not clear exactly how fire-retardant a panel has to be in order to resist spreading a fire on a tall building.  The only sure way to know is to build such a building and try to set fire to it, and this experiment is beyond the resources of most building-code-writing organizations.  Second, such codes generally apply only to new construction, and are not retroactive.  So anyone who's already built a skyscraper with flammable cladding doesn't have to take the cladding down and replace it with something better.  That is, until it catches fire.  Judging by the fact that the Address Hotel fire was the third such conflagration in Dubai in three years, it may be only a matter of time until the others light up too.

Modeling how fires start and spread is still an inexact science, and it is understandable that pressures from the building industry allowed dangerous sandwich panels to be installed in many places around the world, despite the hazards involved.  But it takes only one or two fires like this to demonstrate that there's a serious problem.  The almost universal tradition of not making building codes retroactive makes sense, because taking stuff out of an existing building to replace it can be more expensive than the original building cost.  Better in that case simply to condemn the thing and tear it down, but that's an extreme measure too. 

So what's the best that can be done in the present situation?  There may be some lower-cost ways to reduce the chances that a fire in existing sandwich panels will spread, possibly by installing some kind of fire-break strip at selected heights.  But that would be pretty speculative and might not work.  Another proposal has been to install fire sprinklers on balconies near sandwich panels, because many of the buildings are high-rise apartments, and I bet there has been more than one numskull who's tried to light a barbecue grill on his balcony and let the fire get out of hand.  If a building has potentially flammable sandwich panels, the owners had better make sure that all the fire alarms and protection systems are operational, and conducting regular fire drills might not be a bad idea either.  But owners will be reluctant to advertise the fact that their building is a giant firework waiting for someone to light the fuse.

We can also be thankful that U. S. building codes flat-out prohibit the use of sandwich panels in high-rise structures.  Yes, it forces builders to use more expensive materials, and drives the cost up compared to construction costs in other countries.  But we've had enough towering infernos in this country to last us a long time, and we don't need any more.

Sources:  I referred to a Reuters report by Andrew Torchia carried on Jan. 2, 2016 on the Yahoo News website http://news.yahoo.com/dubai-blaze-raises-questions-over-gulf-skyscraper-design-160747983--finance.html#.  The architect's chatroom with a comment about the U. S. prohibition of sandwich panels and photos of similar fires in other countries is at http://www.skyscrapercity.com/showthread.php?t=1801571.  A report of a sandwich-panel-fueled fire in a Melbourne building in 2014 appeared on the Australian website http://www.architectureanddesign.com.au/news/non-compliant-cladding-fuelled-melbourne-apartment on Apr. 28, 2015.  I also referred to the Wikipedia article on sandwich panels.

Monday, December 28, 2015

The Ironies of Carbon Capture Technology


In a recent article in Scientific American, reporter David Biello summarizes the current state of carbon-capture technology, and it's not good.  If a negative view of carbon capture appeared in some obscure climate-change-denier publication, it could be dismissed as biased reporting.  But the elite-establishment Scientific American has been in the forefront of the anti-climate-change parade, and so for such an organ to publish such bad news means that we would do well to take it seriously.

The basic problem is that capturing a gas like carbon dioxide, compressing it, and injecting it deep enough underground that it won't come out again for a few thousand years is not cheap.  And the worst fossil-fuel offenders—coal-fired power plants—make literally tons of the stuff every second.  It would be hard enough to transport and bury tons of solid material (and coal ash is a nasty enough waste product), but we're talking about tons of a gas, not a solid.  Just the energy required to compress it is huge, and the auxiliary operations (cleaning the gas, drilling wells, finding suitable geologic structures to hold it underground) add millions to billions of dollars to the cost of an average-size coal-fired plant.  Worst of all, the goal for which all this effort is expended—slowing carbon-dioxide emissions—is a politically tinged goal whose merit is doubted by many, and which is being ignored wholesale by some of the world's worst offenders in this regard, namely China and India.
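To get a feel for the scale, here's a back-of-envelope sketch.  The per-kilowatt-hour emission figure is my own rough assumption (a common order-of-magnitude estimate for coal generation), not a number from Biello's article:

```python
# Rough assumption: a coal-fired plant emits on the order of
# 1 kg of CO2 for every kWh of electricity it generates.
EMISSION_KG_PER_KWH = 1.0

def co2_tons_per_second(plant_output_mw):
    """Approximate CO2 emitted per second, in metric tons,
    for a coal plant running at the given output in megawatts."""
    kwh_per_second = plant_output_mw * 1000 / 3600  # MW -> kWh generated each second
    kg_per_second = kwh_per_second * EMISSION_KG_PER_KWH
    return kg_per_second / 1000  # kg -> metric tons

# A large 2,000-MW station:
print(round(co2_tons_per_second(2000), 2))  # prints 0.56 -- roughly half a ton every second
```

So even under this crude assumption, a single large station produces a ton of carbon dioxide every couple of seconds, all of which would have to be cleaned, compressed, and pumped underground continuously.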

However, shrinking the U. S. carbon footprint is regarded by many as a noble cause, and a few years ago Mississippi Power got on the bandwagon by designing a new lignite-burning power plant to capture its own carbon-dioxide emissions and send them into a nearby oil field, whereupon they expel oil that is, uh, eventually burned to make more carbon dioxide.  Here is the first irony.  Evidently, among the few large-scale customers for carbon dioxide are oil companies, who send it underground (good) to make more oil come to the surface (not so good).

The second irony is an economic one.  It is the punishment meted out by economics to the few good corporate citizens in a situation where most citizens are not being so good.

Currently in the U. S., there is no uniform, rational, and legally enacted set of rules regarding carbon-capture requirements.  So far, the citizenry as a whole has not risen up and said, "In our constitutional role as the supreme power in the U. S., we collectively decide that capturing carbon dioxide is worth X billion a year to us, and we want it done pronto."  Instead, there is a patchwork of voluntary feel-good individual efforts, showcase projects here and there, and large-scale operations such as the one Mississippi Power got permission to do from the state's utility commission, as long as they didn't spend more than $2.88 billion on the whole thing.

So far, it's cost $6.3 billion, and it's still not finished.  This means big problems for the utility and its customers, in the form of future rate hikes.  Capturing carbon is not a profitable enterprise.  The notion of carbon-trading laws would have made it that way, sort of, but for political reasons it never got off the ground in the U. S., and unless we get a world government with enforcement powers, such an idea will probably never succeed on an international level.  So whatever carbon capturing is going to be done will be done not because it is profitable, but for some other reason.

The embarrassment of Mississippi Power's struggling carbon-capture plant is only one example of the larger irony, which is that we don't know what an appropriate amount is to spend on carbon capture, because we don't know exactly, or even approximately, what it will cost if we don't, and who will pay.  Probably the poorer among us will pay the most, but nobody can be sure.  (There's a lot of very expensive real estate on coasts around the world, and sometimes I wonder if that influences the wealthy class to support anti-global-warming efforts as much as they do.)  

The time factor is a problem in all this as well.  Nearly all forecasts of global-warming tragedies are long-term things with timelines measured in many decades.  That is good in the sense that we have a while to figure out what to do.  But in terms of making economic decisions that balance profit against loss—which is what all private firms have to do—such long-run and widely distributed problems are chimerical and can't be captured by any reasonable accounting system.  Try to put depreciation on an asset you plan to own from 2050 to 2100 on your income-tax return, and see how far you get. 

So the only alternative in many places for large-scale carbon capture to happen is by government fiat.  A dictatorial government such as China's could do this tomorrow if it wanted to, but as the recent Paris climate-accord meeting showed, it doesn't want to—not for a long time yet, anyway.  In a nominal democracy such as the United States, the political will is strong in some quarters, but the unilateral non-democratic way the present administration has been trying to implement carbon limits has run into difficulties, to say the least.

My sympathies to residents of Mississippi who face the prospect of higher electric bills when, and if, their carbon-capturing power plant goes online.  Whatever else the project has done, it has revealed the problems involved in building a hugely expensive engineering project for a payoff that few of those living today may ever see.

Sources:  The article "The Carbon Capture Fallacy" by David Biello appeared on pp. 58-65 of the January 2016 edition of Scientific American.