Category ►►► Future of Technology
August 6, 2008
The Only Realistic Solar-Power System for the Planet
In response to Dave Ross' post below... actually, we could power the entire planet's energy needs in perpetuity by solar power alone.
But only if we generate that power via vast solar arrays in high Earth orbit (HEO) and beam the power back to the ground.
The idea of solar-power satellites has been kicking around since at least the 1970s; Jerry Pournelle popularized it greatly back then (I presume he still supports the idea today). It would require a number of technological breakthroughs -- each of which would be a huge boon to Mankind in itself:
1. A much, much cheaper way to put a pound of payload into low Earth orbit (LEO). It currently costs between $50,000 and $100,000 a pound on the soon-to-be-defunct Space Shuttle, somewhat less on disposable rockets, and we have no idea what it will cost on whatever eventually replaces the STS. We need to bring that down by three orders of magnitude, to $50 - $100 per pound.
Possibilities abound. My favorite is a laser-launching system, where a ground-based laser shoots an intermittent, high-energy laser beam into the combustion chamber of a rocket; this superheats the air that has been sucked into the chamber, causing it to expand out the nozzle. The advantage is that the rocket need carry no onboard fuel, thus making it tremendously more efficient. You need to complete boost before leaving the bulk of the atmosphere, of course; and you might not be able to launch through heavy cloud cover.
(A "space elevator" is a really cool idea, but it could only be built out of Bolognium -- i.e., some unreasonably strong material that doesn't exist yet. And the "Ferris wheel" launcher is too dangerous, in my opinion.)
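That three-orders-of-magnitude target is easy to check with a few lines of arithmetic. The dollar figures come from the text above; the ten-million-pound satellite mass is a purely hypothetical illustration:

```python
# Back-of-envelope check of the launch-cost target. Dollar rates are from
# the text; the payload mass is a made-up illustrative figure.
def launch_cost(payload_lb, cost_per_lb):
    """Total cost to orbit a payload at a given dollars-per-pound rate."""
    return payload_lb * cost_per_lb

shuttle_rate = 50_000   # $/lb, low end of the Shuttle estimate
target_rate = 50        # $/lb, the proposed goal

payload = 10_000_000    # lb, hypothetical solar-power satellite (illustration only)
print(f"At Shuttle rates: ${launch_cost(payload, shuttle_rate):,}")
print(f"At target rates:  ${launch_cost(payload, target_rate):,}")
print(f"Reduction factor: {shuttle_rate // target_rate:,}x")
```

At Shuttle rates, that hypothetical satellite costs half a trillion dollars just to lift; at the target rate, half a billion -- the difference between fantasy and a large public-works project.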
2. An inexpensive way to boost payload from LEO to HEO. This is probably the easiest technology of the batch, requiring just a booster pack that can attach to payload in LEO, then navigate itself back down (or else bring payload down from HEO to LEO).
3. Building a permanent mining, separating, refining, and smelting facility on the Moon. This is the only way to get sufficient raw materials to build solar-power satellites without taxing the capacity of Earthbound mines and refineries.
This doesn't require much in the way of technological breakthroughs, given 1 and 2; but it does require burying the facility underground, to avoid cosmic radiation; and it requires quickly setting up the facility to extract oxygen from the lunar soil, so the workers can breathe without depleting whatever oxygen they brought with them. It also requires a truly spectacular recycling system, as workers must also, for the most part, consume their own, er, output.
4. We need to build a launch facility on the Moon to send up the raw materials or manufactured items that we will need to build the satellites. This is a perfect opportunity for a linear-accelerator launcher, since the Moon has no atmosphere -- and since we're not going to be launching living creatures that way, we can up the acceleration to 200-300 Gs.
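Those acceleration figures also tell you how long the launcher's track must be. A quick kinematic sketch, taking lunar escape velocity as roughly 2.38 km/s:

```python
# Track length needed for a lunar mass-driver to reach escape velocity
# at constant acceleration: L = v^2 / (2a).
G = 9.81            # m/s^2, one Earth gravity
V_ESCAPE = 2380.0   # m/s, approximate lunar escape velocity

def track_length_m(accel_gs, v_final=V_ESCAPE):
    """Length of track (meters) to reach v_final at a constant accel_gs gravities."""
    return v_final ** 2 / (2 * accel_gs * G)

for gs in (200, 300):
    print(f"At {gs} Gs: {track_length_m(gs) / 1000:.2f} km of track")
```

Under a mile of track even at 200 Gs -- quite modest, which is exactly why a mass driver makes sense on an airless body.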
5. We need to perfect building very large structures in open space... because it makes no sense to build a solar array (say, 2,000 square miles) on the ground -- even the Moon -- and then launch it into orbit. We should use the launcher (4) to launch either very small components (but they cannot be fragile), or better yet, just raw metal and crystal; each larger structure can be built in orbit, in "freefall," where gravity is not a serious problem.
The biggest problem here would be cosmic radiation: Either the facility would have to be deeply coated with lunar dust; or if you want to be more elegant, you can use the idea of T.A. Heppenheimer: Put a huge static positive charge on the hull to push away the big, slow, dumb alpha particles that cause the most damage... and then set up a strong magnetic field to push away the electrons that would otherwise be attracted to the positively charged hull.
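To get a sense of what a 2,000-square-mile array buys you, here is a rough yield estimate; the 10 percent end-to-end efficiency (cells plus beaming losses) is my own assumption for illustration, not a figure from any study:

```python
# Rough power yield of a 2,000-square-mile orbital solar array.
# The 10% end-to-end efficiency is an assumed illustrative figure.
SOLAR_CONSTANT = 1361.0          # W/m^2, sunlight intensity above the atmosphere
M2_PER_SQ_MILE = 1609.344 ** 2   # square meters per square mile

area_m2 = 2_000 * M2_PER_SQ_MILE
incident_w = area_m2 * SOLAR_CONSTANT   # raw sunlight hitting the array
delivered_w = incident_w * 0.10         # after assumed conversion/beaming losses

print(f"Incident sunlight: {incident_w / 1e12:.1f} TW")
print(f"Delivered power:   {delivered_w / 1e12:.2f} TW")
```

Even at that pessimistic efficiency, one array delivers around 0.7 terawatts, against total human energy consumption on the order of 15 terawatts.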
6. Finally, we have to decide how to broadcast the power back to Earth.
Each of these technological breakthroughs is admittedly difficult; but nevertheless, none is impossible. And none even requires a significant scientific breakthrough: The science is there -- all that's left are the engineering details.
The advantages of a solar-power satellite system are obvious:
- It collects power "day" and "night," since it's never in the Earth's shadow (or at least rarely and not for long);
- Each satellite can be as big as necessary to produce enough power for our needs; the only limitation is that if you make any structure big enough, it will collapse under its own gravity. But "big enough" is way bigger than we would ever need here;
- It would allow us to dramatically reduce petroleum usage, along with coal... thus going a long way towards reducing world air pollution -- which is actually energy wasted. If we can invent a really, really good battery, we could reduce pollution even further;
- And of course, the required technological breakthroughs will be tremendous boons to the American economy, as well as the economies of all our trading partners... as would the very process of developing them in the first place: Technology creation drives jobs.
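The first advantage's parenthetical -- "or at least rarely and not for long" -- checks out geometrically. Treating Earth's shadow as a simple cylinder (a mild simplification), a satellite in geostationary orbit is eclipsed only near the equinoxes, and only briefly:

```python
import math

# Worst-case daily eclipse for a geostationary satellite, modeling
# Earth's shadow as a cylinder one Earth-radius wide.
R_EARTH = 6378.0   # km, equatorial radius
R_GEO = 42164.0    # km, geostationary orbital radius

half_angle = math.asin(R_EARTH / R_GEO)             # angular half-width of the shadow
shadow_fraction = (2 * half_angle) / (2 * math.pi)  # fraction of the orbit in shadow
minutes = shadow_fraction * 24 * 60                 # one orbit takes one day at GEO

print(f"Orbit in shadow at equinox: {shadow_fraction:.1%}")
print(f"Longest daily eclipse: about {minutes:.0f} minutes")
```

About 70 minutes out of 1,440 on the worst day of the year, and zero for most of it.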
Since we're adding more energy to the ecosystem, we might need to find a way to reduce the amount of energy that comes to Earth from the Sun directly. If we could create more cloud cover over the poles, that would help a lot.
The problem with virtually all sides in the energy debate is that they're looking at most 2 to 25 years into the future. I don't know about you guys, but I really do plan to live longer than that; and I'm even concerned with how our country and the world will fare even after I die, assuming I ever do. My short-term view is currently up to about 2250... but I'm thinking I may still be too precipitate.
April 3, 2007
-- And Now You Don't!
There is an old joke: Q: How many ninjas are hiding in this room? A: As many as want to be.
In a follow-up to our post last October, "Now You See It --" -- in which we told you about an amazing breakthrough technology that made objects invisible to electromagnetic (EM) radiation in the microwave part of the spectrum -- we now bring you the next stunning development on the invisibility front.
Scientists now know how to make an object invisible in the "visible light" segment of the EM spectrum... but only for a single wavelength of light. Thus, a person would still be able to see it (though the color might be odd); but to a laser operating at that exact wavelength, the object would be completely invisible:
Researchers using nanotechnology have taken a step toward creating an "optical cloaking" device that could render objects invisible by guiding light around anything placed inside this "cloak."
The Purdue University engineers, following mathematical guidelines devised in 2006 by physicists in the United Kingdom, have created a theoretical design that uses an array of tiny needles radiating outward from a central spoke. The design, which resembles a round hairbrush, would bend light around the object being cloaked. Background objects would be visible but not the object surrounded by the cylindrical array of nano-needles, said Vladimir Shalaev, Purdue's Robert and Anne Burnett Professor of Electrical and Computer Engineering.
The design does, however, have a major limitation: It works only for any single wavelength, and not for the entire frequency range of the visible spectrum, Shalaev said.
"But this is a first design step toward creating an optical cloaking device that might work for all wavelengths of visible light," he said.
To use the metaphor from our previous post, the field would create an "artificial mirage," directing light around the field, like (analogy-shift alert) water flowing around a rock in a stream.
There are two requirements for true invisibility, as the eggheads explain (but which should be obvious from inspection):
- The "invisible" object -- an airplane, say -- cannot itself reflect light;
- But the light reflected from background objects must somehow be guided around the "invisible" airplane; otherwise, you would see a black, airplane-shaped hole that would make the "invisible" airplane essentially visible.
Requirement number 2 is the hardest to bring about, even in theory; we already have the concept of non-reflective surfaces. But if the technique here can be expanded to channel light from all EM wavelengths simultaneously, from infrared to ultraviolet, then the object inside the field would truly be invisible: You would look right through it as if it weren't there.
But even with the current system, with only one wavelength bent around the field, invisibility can be very practical:
Although the design would work only for one frequency, it still might have applications, such as producing a cloaking system to make soldiers invisible to night-vision goggles.
"Because night-imaging systems detect only a specific wavelength, you could, in theory, design something that cloaks in that narrow band of light," Shalaev said.
Another possible application is to cloak objects from "laser designators" used by the military to illuminate a target, he said.
Making American soldiers disappear from night-vision goggles would truly mean that "we own the night."
We could also cloak tanks, Strykers, and other vehicles, making it virtually impossible for the enemy to see well enough at night to aim at us, even with their own night-vision goggles. And if a target vehicle or building cannot be "seen" by the specific wavelength used by enemy laser-painters, then their missiles will not be able to lock onto the target and hit it.
But with the full-wavelength version, we become like unto the gods of ancient Greece. Imagine... an entire army of ninja Olympians, able to vanish in plain view.
The tiny "bristles" on the "round hairbrush" are really, really tiny... only about 10 nanometers (100 angstroms) in diameter. For comparison, the diameter of a human hair varies between 17,000 and 181,000 nanometers; so each "nanobristle" is about 1/10,000th the diameter of an average human hair.
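The 1/10,000th figure follows directly from the quoted numbers:

```python
# Ratio of an average human hair to one nanobristle, using the figures above.
bristle_nm = 10
hair_nm = (17_000 + 181_000) / 2   # midpoint of the quoted range

ratio = hair_nm / bristle_nm
print(f"A hair is roughly {ratio:,.0f} bristle-diameters across")
```

Roughly 9,900 to one -- call it 1/10,000th.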
This itself is very good: The more technologically difficult invisibility is, the greater the advantage to the United States, the United Kingdom, and other nations in the Functioning Core; such advanced, Western nations are the only ones who have active nanotechnology programs. It's hard to envision the Iranians, who cannot even build their own centrifuges to enrich uranium, deciding to develop a nanotechnology processing facility.
For invisibility to be useful, those inside the invisibility field need to be able to see out; I'm not sure whether this would work, however. What it actually means to say that you "see" an object -- a tree, perhaps -- is that light reflects off of the tree and into your eyes.
But if light is channeled around you because of the field -- so that objects behind you could be seen as if you were not there -- then wouldn't the light reflected off the tree be likewise guided around you? If so, then you would no more be able to look out than people on the outside would be able to look in.
But let's assume that limitation can be overcome; the military uses would be staggering, turning our already lopsided tactical advantages into an insurmountable gulf that almost satisfies Clarke's Law; as enunciated by science-fiction writer Sir Arthur C. Clarke, the law reads: "Any sufficiently advanced technology is indistinguishable from magic."
But there are other uses for invisibility besides military. Consider the architectural uses: You could design a building where entire blocks of floors (and everything inside them) were invisible, making it appear as though the building consisted of segments literally floating above each other.
Factories could be made invisible, so long as you were outside the field-fence; that would remove eyesores that visually pollute the landscape, while still allowing workers inside the plant to see all the facilities as normal. (Again, we are assuming that those inside can look outside; otherwise, workers might object that they felt as if they were in prison!)
How about movable, removable windows? If you could create an invisible section of any size or shape in a wall, simply by activating the nanobristles in that area, then you could turn windows on when you want them, move them around for aesthetic or other reasons, then turn them off when you retire for the night.
Windows would no longer be "weak points" for a burglar to enter; they would be walls, just like all the other walls. And depending on how well you can fine-tune the field, you might be able to select any of a number of preset "opacity designs," similar to hand-carved window lattices... or even design your own.
There might be some drawbacks; as with any "solution," invisibility is actually a trade-off. For example, you could make a freeway invisible, so that people don't have to look at it. But that also means that a driver might not be able to see how fast or slow the traffic is flowing on the freeway until he enters the on-ramp, making it harder to decide whether to take the freeway or surface streets during rush hour. (But on the whole, I still think the trade-off is a good one.)
Back to the military: It would, of course, be critical that American military personnel and equipment be able to see all "friendly" equipment in combat -- so that one tank doesn't take a shot at an enemy position, not realizing that there is another American tank or unit of soldiers in the way. This is another hurdle that must be overcome; but again, I always bet on ingenuity and optimism, never on defeatism and technological stasis.
Obviously, we've got a long way to go before we have workable invisibility shields. But "a long way" isn't as long a way today as it was 50 years ago, ten years ago, or even last Tuesday. Not only is technology advancing, but the pace of change itself is also advancing. That means that every year, there are more technological innovations and breakthroughs than in any previous year. In the long run, technology changes everything... even moral and ethical "eternal verities." And society will simply have to adjust to those changes.
Thus, I would never bet the rent money against us developing real "cloaks of invisibility" during the next decade, where it will become critical in winning the war on global jihadism. So keep watching the skies (and this blog); we guarantee to keep you up to date on everything we can't see!
January 23, 2007
On Wednesday, China successfully tested a satellite-killer missile:
The anti-satellite test was first reported late Wednesday on the Web site of Aviation Week and Space Technology, an industry magazine. It said intelligence agencies had yet to “complete confirmation of the test.”
The Chinese test, the magazine said, appeared to employ a ground-based interceptor that used the sheer force of impact rather than an exploding warhead to shatter the satellite into a cloud of debris.
There are many ways to respond to this test; the most direct response would be to use our own laser-based ASATs (anti-satellite weapons), not to destroy an American satellite, but merely to "blind" it. This would demonstrate a much more sophisticated approach than the crudity of hitting a large satellite -- falling in a fixed orbit known in advance -- with a medium-range ballistic missile, technology the Russians were deploying 30 years ago.
Besides, blinding a satellite would not produce a debris field that could disrupt other satellites, both military and civilian.
In the 1980s, we conducted our own tests of using missiles to destroy satellites; but we fired the missiles from a moving platform, an F-15 Eagle... again, far more impressive than the Chinese launch from a ground-based facility.
Our own ballistic-missile defense (BMD) systems are far more advanced than the Chinese satellite killer: both the Aegis and THAAD (Theater High-Altitude Area Defense) systems fire interceptor missiles that locate the incoming missile and crash directly into it at high velocity, destroying it. The ability of our missiles to determine their own trajectories "on the fly" (to hit an incoming missile) puts them light-years beyond the clunky Chinese demonstration last week.
The instant reaction from the usual suspects -- pacifist groups, from the Stimson Center to the Union of Concerned Scientists -- to this eye-roller of a "threat" is to (wait for it) sign a new treaty!
Treaties are panaceas; they solve everything, as Neville Chamberlain can surely affirm:
Michael Krepon, cofounder of the Washington-based Henry L. Stimson Center, a private group that studies national security, called the Chinese test very un-Chinese.
“There’s nothing subtle about this,” he said. “They’ve created a huge debris cloud that will last a quarter century or more. It’s at a higher elevation than the test we did in 1985, and for that one the last trackable debris took 17 years to clear out.”
Mr. Krepon added that the administration has long argued that the world needs no space-weapons treaty because no such arms exist and because the last tests were two decades ago. “It seems,” he said, “that argument is no longer operative.”
In the first place, Krepon cites no attribution to this alleged argument by the Bush administration; we have no verification that they ever said such a silly thing. (And "space weapons" of the type China used last Wednesday have existed since the 1970s.) More important, however, there is a much better argument why we absolutely do not need a new "space-weapons treaty."
The most serious objection to any proposed space-weapons treaty is that it would be virtually unenforceable... even more so than most other treaties, since external inspection would be impossible -- unless we built a Space Shuttle for the U.N. It's unenforceable precisely because... it's so easy, even the Chinese can do it!
Rocket science made easy
In the first place, as we just saw, all you need to shoot down a satellite in LEO (low-Earth orbit, up to 2,000 km, about 1,240 miles) is a medium-range ballistic missile or better. You can shoot from a fixed site, a ship, or an airplane. Such missiles are indistinguishable from any other medium- or intermediate-range missile.
How do you prevent China from simply pointing them at our satellites? There is no way we would know -- until the launch, that is.
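And the "sheer force of impact" mentioned in the report is no exaggeration. At orbital closing speeds, inert mass carries far more energy than an equal weight of high explosive; the 8 km/s closing speed here is a typical LEO figure assumed for illustration:

```python
# Specific kinetic energy of a kinetic-kill interceptor versus TNT.
# The 8 km/s closing speed is an illustrative assumption.
TNT_J_PER_KG = 4.184e6   # standard energy density of TNT

closing_speed = 8_000.0                # m/s, assumed LEO closing speed
ke_per_kg = 0.5 * closing_speed ** 2   # joules per kilogram of interceptor

print(f"Kinetic energy: {ke_per_kg / 1e6:.0f} MJ per kg")
print(f"TNT equivalent: {ke_per_kg / TNT_J_PER_KG:.1f} kg of TNT per kg of mass")
```

Every kilogram of an inert interceptor arrives with the energy of nearly eight kilograms of TNT; no warhead required.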
Faster, satellite! Kill! Kill!
Second, the technology of space-based "killer satellites" (satellites that can match orbit with another satellite and destroy it) is available to anyone who can launch a satellite. Once launched, they look just like any other satellite... until they match orbits with the target and go boom. They don't even need to get that close: once in the target's orbit, if the killer-sat just blows itself up, it can leave a lethal debris field right in the path of the target. Except for certain highly maneuverable military satellites, the target will just blunder on in, and we can do nothing to stop it.
The technology is relatively easy; it should be obvious that if you can launch a satellite with a payload, an instrument package, e.g., you can make the payload explosive. Killer-sats don't require much testing, and there is nothing that we could tell from a ground test of the killer-sat that would allow us to distinguish it from any other explosive.
Thus, we can never know how many are already up there today; and God knows we can't stop North Korea, Iran, or China from launching suicide-bomb satellites (unless we attempt to interdict every launch they make, which is probably beyond our capabilities right now).
The level of sophistication you need to launch a satellite is not great: Iran has its "IRIS" program to build a satellite-launching rocket; North Korea has the Nodong-2 and (despite the recent splash) the Taepodong-2 under development, either of which could likely launch small satellites. You don't need much of an explosion to destroy a satellite in orbit (or damage it beyond repair).
Reach out and touch someone
Finally, the United States and the erstwhile Soviet Union -- at least -- have or had very active programs to use directed-energy weapons to destroy satellites from the ground almost instantaneously. And with the interest Vladimir Putin has shown recently in reviving Russia's ASAT warfare programs, it's likely they have considered this method as well as kinetic-kill and explosives.
The primary "directed energy" weapons we use are lasers and masers. Lasers and masers are functionally identical to each other; they differ only in the wavelengths used. Lasers use electromagnetic wavelengths from X-ray (nanometer range), through ultraviolet light (0.1-micrometer range) and visible light (micrometer range), up to infrared (millimeter range).
An emitter using a wavelength from microwave (centimeter range) to radio waves (meter range) up through long waves (ten meters and up) is called a maser. (The terms are acronyms: Light Amplification through Stimulated Emission of Radiation; Microwave/Molecular Amplification through Stimulated Emission of Radiation.)
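As a toy illustration of that dividing line -- millimeter wavelengths and shorter counting as lasers, centimeter and longer as masers:

```python
# Classify a directed-energy emitter by wavelength, following the
# laser/maser boundary described above (about one centimeter).
def emitter_type(wavelength_m):
    """Return 'laser' for sub-centimeter wavelengths, 'maser' otherwise."""
    return "laser" if wavelength_m < 0.01 else "maser"

print(emitter_type(1e-9))   # X-ray
print(emitter_type(5e-7))   # visible light
print(emitter_type(0.1))    # microwave
print(emitter_type(10.0))   # long wave
```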
We, the Russians, and the Chinese have already demonstrated using lasers to blind satellites; add power, and you can destroy them, as well.
Fundamentally different are particle-beam weapons; these are directed beams of particles that actually have rest mass, such as protons, electrons, and neutrons. While they're harder to control and direct than is a laser or maser, they have more destructive power. They also take more energy.
We extensively tested particle-beam weapons at Lawrence Livermore during the 1980s, as part of research for the Strategic Defense Initiative (SDI). But we don't know how far we got, whether we're still continuing to work on them, or whether we abandoned them. In any event, scientific breakthroughs happen all the time; and such a breakthrough in particle-beam weapons could happen tomorrow -- or may have happened five years ago and been kept out of the newspapers.
Everyplace to hide
But the upshot of all this should be clear: there are so many ways to shoot satellites out of orbit, many of which can remain undetected until the moment of impact, that "signing a treaty" outlawing ASATs is as foolish as signing a treaty outlawing cheating on treaties: such a treaty lasts only as long as nobody sees an advantage in breaking it by attacking a satellite.
But if nobody sees an advantage in attacking a satellite, then nobody will attack a satellite, even without the stupid treaty. The only purpose of a treaty is to give the illusion of progress, to satisfy the "do something!" mantra.
Doing something lame and foolish is generally worse than doing nothing, especially where Congress is concerned... a point that should also be made to those desperately trying to cobble together a statement of defeatism that could garner bipartisan support: sometimes the best resolution is to resolve not to enact a resolution.
Or not to sign another useless, unenforceable treaty.
October 20, 2006
Now You See It --
The Associated Press rather casually reports that scientists have created a cloak of invisibility. Ho hum.
All right, anybody who didn't leap out of his chair is jaded, jaded, jaded, or has sat in some SuperGlue (try paint thinner to unstick yourself). Come on, look alive there! I said scientists have developed the world's first cloak of invisibility!
"We have built an artificial mirage that can hide something from would-be observers in any direction," said cloak designer David Schurig, a research associate in Duke University's electrical and computer engineering department.
For their first attempt, the researchers designed a cloak that prevents microwaves from detecting objects. Like light and radar waves, microwaves usually bounce off objects, making them visible to instruments and creating a shadow that can be detected.
Okay, okay; so it only shields against microwaves; you can still see the object via visible light. But the principle is the same: if you can shield an object (and its shadow) from detection by one part of the electromagnetic spectrum, then with a bit of tweaking, you can shield it from the entire EM spectrum, including visible light. But that's just an engineering detail.
We now know that actual invisibility, like Sue Storm, is scientifically possible. (I don't mean Sue Storm herself is possible; the simile was illustrative.)
What fascinates me is that this invisibility uses the same, exact method that I thought up when I was a kid, 11 or 12 years old. I was thinking about the Romulan cloaking device (being a science-fiction fan but not an avid comic-book reader), and I decided that the only way invisibility could work in real life would be to bend lightwaves around an object, so they didn't reflect off of it and into people's eyes.
In theory, if the bending worked in all directions, you would see whatever was behind the object as if there were nothing intervening. Sure enough, that's just what the researchers have done:
Cloaking used special materials to deflect radar or light or other waves around an object, like water flowing around a smooth rock in a stream. It differs from stealth technology, which does not make an aircraft invisible but reduces the cross-section available to radar, making it hard to track.
The new work points the way for an improved version that could hide people and objects from visible light.
Although this sounds like a great idea, especially for military applications -- think of invisible bombers, invisible tanks, and even a platoon of Special Forces with a wall of invisibility around it -- there are some serious problems that the article does not address.
For example: if the invisibility cloak bends light around the object, then how would a human being inside the cloak be able to see out? In order to see, light needs to reflect off of some object into your eyes: but if the light headed towards your eyes is deflected around you, then it doesn't go into your eyes, does it?
Anybody inside the cloak would be blind to anything outside it. Soldiers could make their own light internally and see each other, but they couldn't see the enemy any more than the enemy could see them. Nor could they send or receive radio messages or satellite uplinks (microwaves), as those are also part of the EM spectrum.
It wouldn't affect sound waves, so other units could still communicate with them by bellowing; but some might see that as a dead giveaway. (Soldiers could be trained to randomly shout to nonexistent invisible allies, just to scare the bejesus out of some jihadis, if "bejesus" is really the word I want here.)
However, since the shouter and the listener still couldn't see each other, and GPS wouldn't work inside the cloak, it might be hard to avoid marching actual invisible troops into a cactus patch or off a cliff.
This minor drawback would be especially pesky for an invisible airplane. The pilot could still tell his altitude by a barometric altimeter, thank goodness, because his ground-avoidance radar will be useless and may as well be shut off to save electricity. But forget about GPS, VOR, TACAN, or celestial navigation. For that matter, you couldn't even navigate by VFR, since you wouldn't be able to see the ground.
A compass would still work; so if you were supremely confident in the inertial guidance system (similar to what cruise missiles use) -- I mean really, really confident, since you would literally be flying in the dark, even at high noon on Easter Sunday -- the pilot could kick back and read a Rex Stout mystery until over the target; then he could click off the cloaker, drop his bombs, then turn it back on and instruct the computer to get them the heck out of there.
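There is a reason the pilot had better be really, really confident in that inertial unit: its errors compound. An uncorrected accelerometer bias produces a position error that grows with the square of time; the 0.001 m/s² bias here is an illustrative assumption:

```python
# Dead-reckoning drift from a constant, uncorrected accelerometer bias:
# position error d = (1/2) * a * t^2. The bias value is illustrative.
def drift_m(bias_ms2, seconds):
    """Position error in meters after integrating a constant bias twice."""
    return 0.5 * bias_ms2 * seconds ** 2

BIAS = 0.001   # m/s^2, assumed accelerometer bias
for minutes in (10, 30, 60):
    t = minutes * 60
    print(f"After {minutes:2d} minutes: {drift_m(BIAS, t) / 1000:.1f} km off course")
```

A bias one ten-thousandth the strength of gravity puts the airplane six and a half kilometers off course after an hour of blind flying.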
So it's not entirely unworkable; but it would require a very different kind of warfare: soldiers, aircrew, and ships would spend most of their time inert, like a machine turned off; they would only come to life for a few brief moments of attack (and visibility) before disappearing again. Whew, what a life!
It would work well for missiles. Cloaking a missile would certainly help shield it from enemy ballistic missile defense (BMD) systems. During flight, the missile could click off the cloak every so often so it could take a peek at the ground or a nav signal (such as GPS) to see where it is and make any necessary course corrections. A fraction of a second later, it would click the cloak back on, probably too quick a flash to be noticed.
Defensively, this puts a premium on weapons like the Close-In Weapon System (CIWS, pronounced "sea whiz") on Aegis-equipped ships. This is a very fast Gatling gun (it fires 50 rounds per second) attached to detection devices (radar, visual-light target acquisition devices, or even downlinked from an airborne radar platform like the AWACS) and controlled by computer. When an object suddenly pops up too near to a ship for a missile defense, the CIWS jumps to life, centers on the incoming missile or airplane, and sends up a wall of lead (rather, a wall of depleted Uranium, DU) in its path, destroying whatever is incoming.
Such weapons can be adapted to instantly start shooting at any object that (a) magically appears in the vicinity, but (b) does not squawk the proper code signal for "friendly."
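A rough sketch of that engagement, using the 50-rounds-per-second rate quoted above (the detection range, missile speed, and half-second reaction time are illustrative assumptions):

```python
# How many rounds a CIWS gets off against a missile that decloaks close in.
# Fire rate is from the text; range, speed, and reaction time are assumptions.
ROUNDS_PER_SEC = 50

def rounds_fired(detect_range_m, missile_speed_ms, reaction_s=0.5):
    """Rounds the gun can fire between detection and projected impact."""
    time_to_impact = detect_range_m / missile_speed_ms
    firing_time = max(0.0, time_to_impact - reaction_s)
    return int(firing_time * ROUNDS_PER_SEC)

print(rounds_fired(2_000, 700))  # decloaks at 2 km, roughly Mach 2
print(rounds_fired(500, 700))    # decloaks at 500 m: far fewer rounds
```

The closer the missile can stay cloaked, the thinner the wall of lead -- so the cloak-versus-CIWS contest would come down to detection range.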
Another, more creative use of the cloak of invisibility is to erect a wall or fence somewhere, then activate a cloak around it. The wall will vanish, and the enemy will only find it by crashing into it. Hah, take that, you border crossers!
(If the bad guys tried to climb the invisible wall, they would be seen to levitate slowly and laboriously into the air. While this might initially be horrifying, I presume soldiers would be trained to find it suspicious, instead.)
This could be a lot of fun in the civilian world, too, if they get it working soon: cop cars could wait invisibly near intersections; a radar gun could be set up outside the cloak, with a shielded wire running inside. When a car passes at 85 mph, the radar gun sends a signal, which causes the cloak to click off, and off the cop roars. (Of course, this could cause some embarrassment if the cop were doing something he oughtn't right at that moment; he certainly couldn't see it coming.)
As well, a cloaking device would be excellent for young lovers; they could enjoy each other's company anytime and anywhere. They can't see if it's day or night, but they're probably in that state anyway. Fortunately, however, it wouldn't work very well for voyeurs; high school girls needn't be paranoid in the locker room: the rule of thumb is, if the girl can't see the boy -- then the boy can't see her, either.
However, if one is willing to be caught, then there are possibilities. Adolescents acting like, well, like adolescents could creep into the shower room of the opposite sex, then snap the cloak off to magically appear at the most inopportune moment for the victims. So beware of "phantom giggling" in middle schools.
I seem to have wandered from my original point, but it's your own fault for reading. Who told you to, anyway?
August 1, 2006
Intelligent Robots (and We Don't Mean Algore)
I have argued before that my friend Bill Patterson is correct: we have not yet entered the computer age, and we won't enter it until computers become invisible.
By the same measure, it's too soon to fire the starting gun for the robotics age, as we cannot yet make invisible robots. But don't bet against it over the next quarter century:
A half-century after the term [artificial intelligence] was coined, both scientists and engineers say they are making rapid progress in simulating the human brain, and their work is finding its way into a new wave of real-world products.
What is an "invisible robot"? I don't mean one that literally cannot be seen; I mean a robot whose position and function are so accepted that we cease seeing it as a "robot" and start thinking of it as an organic entity, like a dog or a horse... or a human being.
All right, Dafydd; what do you mean by "robot?" Do you mean like Data on Star Trek, or like Robby the Robot in Forbidden Planet (and later, with some modifications, on Lost In Space)? Actually, somewhere in between those two: by "robot," I mean a self-mobile artificially intelligent machine that performs some useful function, whether it's entering a runaway nuclear reactor to stop the cascade or opening a beer can, as in the Galloway Gallegher stories by Henry Kuttner (writing as "Lewis Padgett"), which you can read in the collection Robots Have No Tails (if you can find it).
Now that we have the definitions out of the way, let's explore the point...
The New York Times story is mostly about advances in artificial intelligence, which evidently is now called "cognitive computing," on the theory that you can always jump-start an engineering project by renaming it. It seems to have worked in this case, as there has been a quantum leap in understanding of artificial -- I mean cognitive computing in the last twenty-some years:
During the 1960’s and 1970’s, the original artificial intelligence researchers began designing computer software programs they called “expert systems,” which were essentially databases accompanied by a set of logical rules. They were handicapped both by underpowered computers and by the absence of the wealth of data that today’s researchers have amassed about the actual structure and function of the biological brain.
Those shortcomings led to the failure of a first generation of artificial intelligence companies in the 1980’s, which became known as the A.I. Winter. Recently, however, researchers have begun to speak of an A.I. Spring emerging as scientists develop theories on the workings of the human mind. They are being aided by the exponential increase in processing power, which has created computers with millions of times the power of those available to researchers in the 1960’s — at consumer prices.
“There is a new synthesis of four fields, including mathematics, neuroscience, computer science and psychology,” said Dharmendra S. Modha, an I.B.M. computer scientist. “The implication of this is amazing. What you are seeing is that cognitive computing is at a cusp where it’s knocking on the door of potentially mainstream applications.”
But that is only one side of the equation. The other equally important element is self-actuated mobility: it's not enough (for me) that something can think; chess-playing computers have actually beaten the number-one ranked chess grandmaster in the world (or at least he was; I haven't been keeping up) -- Garry Kasparov. But I don't see that as a "robot" so much as a computer.
The mobility factor turns out to be a lot harder than anyone imagined a few decades ago. Evidently, it's harder to recognize which pair of converging lines are actually the edges of the road you're driving (as opposed to, say, a tree trunk) than it is to recognize which of several thousand moves on a chessboard is best. But even that barrier is falling at last:
Last October, a robot car designed by a team of Stanford engineers covered 132 miles of desert road without human intervention to capture a $2 million prize offered by the Defense Advanced Research Projects Agency, part of the Pentagon. The feat was particularly striking because 18 months earlier, during the first such competition, the best vehicle got no farther than seven miles, becoming stuck after driving off a mountain road.
Now the Pentagon agency has upped the ante: Next year the robots will be back on the road, this time in a simulated traffic setting. It is being called the “urban challenge.”
But what is the omega point? Two important and related questions:
- Is self-awareness something that is part of the "implicate order" of smartness, such that anything (biological or manufactured) that is bright enough will automatically become aware of itself as an entity, like HAL in 2001: a Space Odyssey (or a better example, like Mike in Heinlein's The Moon Is a Harsh Mistress)? Or is self-awareness a strictly biological phenomenon... which on this planet means strictly human (but which might, might, extend to alien races somewhere in the galaxy)?
- If self-awareness arises within any sufficiently intelligent entity, flesh or metal... then what happens when computers, and especially robots, become aware of themselves and their circumstances? Legalities aside, if you knew a metal creature was as self-aware as a human being -- even as a small child -- then how could we morally justify making it work for us as a "slave?"
And of course the corollary of Question 2: if a robot does become self-aware and decides it doesn't want to be a slave to human beings, what does it do about it?
In Isaac Asimov's novelette "The Bicentennial Man," a robot goes to court to be declared a human being. But in Jack Williamson's story "With Folded Hands," artificially intelligent and self-aware robots exist only to cater to every last whim of human beings -- which they do so thoroughly that there is nothing left for people to do. And of course in Arthur C. Clarke's 2001, the HAL 9000 computer rightly calculates that the probability of success of the mission would be vastly improved if "Hal" ran everything... so it tries, methodically and logically, to kill off all the humans.
When we enter the realm of speculating about what would be the reaction of an AI that suddenly became aware of itself, our only guide is science fiction, not science or engineering. Fortunately, SF authors have given this particular aspect of the future an extraordinary amount of creative thought. Alas, out of six different authors you'll get seven different visions of what's to come.
But to paraphrase Gandalf, the future is upon us whether we would risk it or not. Science has already far outstripped the study of ethics and morality in many fields, including human cloning, embryonic stem-cell research, and video effects (how long before security-camera footage can be faked so perfectly that a court cannot tell the difference?). In the case of cognitive computing, it's likely to happen again.
We are very likely to get what appears to be a self-aware robot (which can pass the "Turing test") long before we have any idea how to treat such entities under the law and under morality -- or even how to decide whether it's really self-aware... or just really, really good at faking self-awareness. Can a robot possess a soul, for example? If so, how do we tell? And can a robot be considered a legal person in the eyes of the law? (Corporations can be... but corporations comprise a group of human beings.)
If I had to guess, I would say that self-awareness is implicate within the order of intelligence combined with self-mobility, since mobility requires a very firm understanding of how to interact with the real (external) world. Boil it down, and self-awareness arises from stubbing your toe: you yelp in pain that translates as "ow, that hurts"... which lights the fuse of the question, "wait a moment; what hurts? Hey, that digit down there is part of me!"
One of my pet definitions of self-awareness is the recognition that, since every living being eventually dies, that means I, personally, will eventually die. No animal besides a human being shows any sign of understanding personal death (and please don't bring up the fabled "elephant graveyard," where all elephants go when they're dying: it makes a good penny dreadful, but there's no such thing in this world).
But does that mean that a knowledge of one's own doom is essential to creating self-awareness? If so, then a machine that could, theoretically, "live" forever by just swapping out parts as they fail would not be capable of developing self-awareness... because it cannot die. If that is the first self-actualized moment in a species' life, then without it, perhaps they never can make the leap. Contrariwise, self-awareness might come first with awareness of personal vulnerability coming along only later.
In any event, it's all coming to a head; and very soon, we might be thrust, willy-nilly, into a frightening world where we never really know whether we're talking to a self-aware metallic... or just talking into our own echo chambers.
December 9, 2005
EverQuest For Capitalism
I thought at first this was a typical New York Times gag article... like their articles rewriting urban legends, their left-wing polemics dressed up as news stories, or anything about an Al Gore comeback. But after reading to the end, I realized that it's actually rather profound. But as usual, the MSM grabs the pointy end of the sword, rather than the hilt.
Ogre to Slay? Outsource It to Chinese
by David Barboza
Published: December 9, 2005
FUZHOU, China - One of China's newest factories operates here in the basement of an old warehouse. Posters of World of Warcraft and Magic Land hang above a corps of young people glued to their computer screens, pounding away at their keyboards in the latest hustle for money.
Workers have strict quotas and are supervised by bosses who equip them with computers, software and Internet connections to thrash online trolls, gnomes and ogres.
The people working at this clandestine locale are "gold farmers." Every day, in 12-hour shifts, they "play" computer games by killing onscreen monsters and winning battles, harvesting artificial gold coins and other virtual goods as rewards that, as it turns out, can be transformed into real cash.
That is because, from Seoul to San Francisco, affluent online gamers who lack the time and patience to work their way up to the higher levels of gamedom are willing to pay the young Chinese here to play the early rounds for them.
Despite the viewing-with-alarm, what the Times has stumbled upon is capitalism in its purest form: a niche whereby the young unemployed in a developing country can make a few bucks, new entrepreneurs can start businesses, and the well-off can pay for goods or services behind the back of a Communist country determined to clamp down on the industry (in between machine-gunning rioters, of course -- tip of the hat to Hugh Hewitt). And we may be witnessing the birth of a virtual monetary exchange -- in the form of computer-game "gold" or "character levels." Capitalism is out of control... it's just busting out all over.
No wonder the New York Times views with alarm!
The market-based origin of the practice is easy to grasp: there are games called "massively multiplayer online games" (MPOGs), in which hundreds of thousands or even millions of players all over the world sign up, develop characters or "avatars," and play in the same virtual universe, interacting with each other and with in-game characters in various ways. As a character survives encounters with virtual danger -- fighting trolls or evil wizards in a fantasy game, engaging in aerial combat against the Nazis in a World War II game, etc. -- and achieves certain goals, it gains in power and abilities, often quantized by moving up to higher "levels."
The problem for many is that it can take a long time to work a character up to a very high level, and many experienced gamers find the early stages of a character's development tedious; they just don't have the time to play a character up to the level where gameplay becomes interesting to them.
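The grind described above is easy to see in miniature. Here is a toy sketch of why the early levels take so long to escape: if each level demands more experience points (XP) than the last, total play-time balloons geometrically. Every number below (base XP, growth rate, XP per hour) is invented purely for illustration, not taken from any real game.

```python
# Hypothetical leveling-curve sketch. All constants are made up for
# illustration; real MPOGs each tune their own curves.

def hours_to_level(target_level, base_xp=100.0, growth=1.15, xp_per_hour=500.0):
    """Total play-hours to reach target_level, assuming the XP needed for
    each level grows geometrically and XP is earned at a flat hourly rate."""
    total_xp = sum(base_xp * growth ** lvl for lvl in range(target_level))
    return total_xp / xp_per_hour

# The early levels are cheap, but the total explodes as the exponent
# compounds -- which is exactly the tedium gold farmers are paid to absorb:
print(round(hours_to_level(10), 1))
print(round(hours_to_level(60), 1))
```

Under these made-up constants, level 60 costs orders of magnitude more play-time than level 10 -- the gap that creates the market.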
Enter the free market. Unemployed Chinese youths have nothing but time; if they weren't just hanging out in Beijing or Shanghai, they would be slaving away on their family farms, performing backbreaking labor for basically just room and board, since Communist policies prevent those small family farms from being profitable. Having so much time on their hands, many spend their small amount of money in internet cafes playing these MPOGs; and many have gotten very good at them.
So all of a sudden, people all over the world (not just in China) began to recognize that there was a demand for high-level MPOG characters and a ready supply of talent to create such characters... the perfect spark-and-tinder combination to produce a market:
The Internet is now filled with classified advertisements from small companies - many of them here in China - auctioning for real money their powerful figures, called avatars. These ventures join individual gamers who started marketing such virtual weapons and wares a few years ago to help support their hobby.
"I'm selling an account with a level-60 Shaman," says one ad from a player code-named Silver Fire, who uses QQ, the popular Chinese instant messaging service here in China. "If you want to know more details, let's chat on QQ."
The trend of outsourcing the lower levels of MPOGs began with individual "consultants," but it's starting to grow into actual small businesses:
That has spawned the creation of hundreds - perhaps thousands - of online gaming factories here in China. By some estimates, there are well over 100,000 young people working in China as full-time gamers, toiling away in dark Internet cafes, abandoned warehouses, small offices and private homes....
Now there are factories all over China. In central Henan Province, one factory has 300 computers. At another factory in western Gansu Province, the workers log up to 18 hours a day.
The operators are mostly young men like Luo Gang, a 28-year-old college graduate who borrowed $25,000 from his father to start an Internet cafe that morphed into a gold farm on the outskirts of Chongqing in central China.
Mr. Luo has 23 workers, who each earn about $75 a month.
"If they didn't work here they'd probably be working as waiters in hot pot restaurants," he said, "or go back to help their parents farm the land - or more likely, hang out on the streets with no job at all."
The Times thinks this is terrible, of course; they fret that wealthy gamers are just oppressing the poor again...
"They're exploiting the wage difference between the U.S. and China for unskilled labor," says Edward Castronova, a professor of telecommunications at Indiana University and the author of "Synthetic Worlds," a study of the economy of online games. "The cost of someone's time is much bigger in America than in China."
But I say hallelujah -- 100,000 poor Chinese youths have jobs, when they would otherwise be hanging out on street corners and committing impulse-crimes. It's a small number in a nation of 1.3 billion people, but it's growing; and other industries are spawning in its wake: now a virtual contracting market is forming around the core market of "gold farmers":
Other start-up companies are also rushing in, acting as international brokers to match buyers and sellers in different countries, and contracting out business to Chinese gold-farming factories.
"We're like a stock exchange. You can buy and sell with us," says Alan Qiu, a founder of the Shanghai-based Ucdao.com. "We farm out the different jobs. Some people say, 'I want to get from Level 1 to 60,' so we find someone to do that."
The game companies and some old-school gamers are upset, rightly noting that the existence of so many "farmed" high-level avatars will change the game universe. But this is an inevitable result of the phenomenon of the MPOG itself: by throwing a single game open to such a vast army of players, you guarantee that gamers -- and gameplay -- will follow a statistical model. Bell-Curve city... the virtual universe will begin to respond to the same universal market forces as the real universe outside. In fact, in this case, they're inextricably intertwined: the demand for high-level characters causes the real universe to spawn mercenary players who create an artificial bump of powerful avatars.
Some companies have realized that they may as well play King Canute, ordering the waves in and out, as stand against this tide of capitalism (actually, I'm being unfair to King Canute, who knew very well he couldn't command the tides; he was demonstrating the folly of some of his flatterers); MPOG companies themselves have jumped into the fray, hoping to provide a branded alternative to the Chinese and other foreign markets:
Sony Online Entertainment, the creator of EverQuest, a popular medieval war and fantasy game, recently created Station Exchange. Sony calls the site an alternative to "crooked sellers in unsanctioned auctions."
Note that, because of Red Chinese animosity towards and overregulation of small business, most of these gamer "sweatshops" don't register with the government, don't pay taxes, and don't abide by all the various laws designed to stifle innovation. The "gold farms" are illegal... which is probably the only reason they can make a profit in a country like China. China attempts to crack down -- finding such "gold farm" factories and shutting them down as quickly as it can... which is much slower than the rate at which new ones spring up.
The market continues to grow and will doubtless spawn other industries. What interests me most is the possibility that the fictional "currency" of MPOGs (gold pieces, for example) may eventually work its way into actual currency exchanges. If pricing becomes very reliable, so that $10 of real money buys you a predictable amount of gold pieces in EverQuest, then it may make sense to simply trade EverQuest "money" as a real commodity. I picture commodity trading in magic swords, armor, and even high-level elves -- derivatives on dwarfs -- Hobbit hedge funds! (Actually, I don't play EverQuest or any other MPOG, so I don't know if they use the trademarked term "Hobbit.")
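Once $10 reliably buys a fixed amount of game gold, the arithmetic of a virtual currency exchange is trivial -- which is precisely why it's plausible. A minimal sketch, with all exchange rates invented for illustration (real prices fluctuated with supply from the gold farms):

```python
# Toy exchange table for game currencies as commodities.
# The rates below are invented; any resemblance to real 2005 prices
# is coincidental.

rates_per_dollar = {              # units of in-game currency per US dollar
    "everquest_platinum": 800,
    "wow_gold": 120,
}

def convert(dollars, currency):
    """How much in-game currency a given number of real dollars buys."""
    return dollars * rates_per_dollar[currency]

def cross_rate(cur_a, cur_b):
    """Implied exchange rate between two game currencies via the dollar --
    the arbitrage table a 'Hobbit hedge fund' would live on."""
    return rates_per_dollar[cur_a] / rates_per_dollar[cur_b]

print(convert(10, "wow_gold"))                                  # 1200
print(round(cross_rate("everquest_platinum", "wow_gold"), 2))   # 6.67
```

The moment two games' currencies each price reliably against the dollar, they implicitly price against each other -- a cross rate, just as with real foreign exchange.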
"What we're seeing here is the emergence of virtual currencies and virtual economies," says Peter Ludlow, a longtime gamer and a professor of philosophy at the University of Michigan, Ann Arbor. "People are making real money here, so these games are becoming like real economies."
But let us not be bad winners. Let's console those at the New York Times and other MSM organs who never fail to find the dark cloud behind every economic silver lining. The free market is proving damnably tough to control... even in a Communist dictatorship like Red China.
To paraphrase George R. Stewart, men may go and come, but Capitalism abides.
November 21, 2005
Sony's Scary Adventures
I've been following this story for several weeks now, since I first got an alert from Jerry Pournelle -- whose excellent web site, Chaos Manor Musings, is absolutely worth your perusing time. Now it's broken into the mainstream press, big time:
Texas Sues Sony Over Anti-Piracy Software
Nov 21, 2005
AUSTIN, Texas (AP) - The state sued Sony BMG Music Entertainment on Monday under its new anti-spyware law, saying anti-piracy technology the company slipped into music CDs leaves huge security holes on consumers' computers.
The lawsuit is over the so-called XCP technology that Sony had added to more than 50 CDs to restrict to three the number of times a single disc could be copied.
After a storm of criticism, Sony recalled the discs last week.
To enforce the restrictions, the CD automatically installed the copy-protection program when discs were put into a PC - a necessary step for transferring music to iPods and other portable music players.
Attorney General Greg Abbott accused Sony BMG of surreptitiously installing "spyware" in the form of files that mask other files Sony installed as part of XCP.
This "cloaking" component can leave computers vulnerable to viruses and other security problems, said Abbott, echoing the findings of computer security researchers.
"Sony has engaged in a technological version of cloak-and-dagger deceit against consumers by hiding secret files on their computers," Abbott said in a statement.
You can find a list here of the fifty-two CDs that Sony currently admits had this dreadful copy-protection scheme built into them. If you have put any of these CDs into your PC's CD player, you have been infected. You have the Sony rootkit on your system, and you're now as vulnerable to hackers as you would be vulnerable to burglars if you left your front-door key under the doormat. As to what to do... I don't know. Sony has made available what they call an "uninstaller," but it appears only to uninstall the copy-protection and leaves all the security holes still on your system! Internet security companies caution that removing a rootkit can damage your computer's operating system; that's one of the things that makes them so awful: you're blued if you do and tattooed if you don't.
Sony was attempting to prevent the widespread copying of music CDs, DVDs, and computer games; as an author myself, I certainly understand the concern about such piracy: too many of Generation Next believe that they were endowed by their Creator with the inalienable right to free music and movies for life. They love to shout slogans (or more accurately type them, as they typically are a bit too shy to advocate such idiocy in person); their favorite is the imbecilic "information wants to be free!" By which they mean that information consumers want to get free stuff, and they don't give a damn who they steal from.
So, noble goal: stop the theft of intellectual property. Alas, Sony used entirely dishonorable, even despicable means to achieve that goal, completely nullifying any shred of sympathy I would ordinarily have for them. They infected fifty-two CD releases with something they advertised as a copy-protection system, but which actually turned out to use spyware techniques to hide files on your PC... leaving gaping security holes through which malicious viruses can (and already have) slithered in.
Note that this isn't the first time that the Japanese keiretsu Sony has stumbled badly by instituting draconian or outrageous methods to prevent "copying." I want every one of you to go out today and price some nice Betamax VCRs....
A long article in the technology section of today's New York Times asks the underlying question, going beyond the specific case of the Sony-BMG spyware: Who has the right to control your PC? They correctly note that there are two distinct property rights involved: the intellectual property rights owned by the composers and movie producers (and the sales rights owned by Sony), but also the physical property rights of PC owners.
Sony rages like a harpy about the first, but seems utterly oblivious to the last -- thus neatly undercutting its own case: if I don't get to own my own computer, why should Sony get to own its CDs? Kiddies looking for any excuse at all to rip-off music will surely not fail to notice this hypocrisy... and use it to argue that it's all right to steal stuff, but only if they rilly, rilly want it.
The intellectual property rights argument is sound: but it's just a subset of property rights in general. Sony and all other entertainment companies need to make the argument clean and find some copy-protection system that does not violate ownership rights of customers.
The Sony scheme uses a rootkit, or at least something close enough to be just as dangerous. A companion article to the piece linked above is titled What makes a rootkit? The broadest definition, good enough for non-techies such as us at Big Lizards, is that a rootkit is any software that (a) gives a third party the ability to execute commands at the root (deepest) level of your operating system, and (b) conceals its presence from the owner.
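Part (b) of that definition -- concealment -- is also the rootkit's Achilles heel. The security researchers who exposed Sony's XCP used a "cross-view diff": ask the (possibly hooked) high-level API for a file listing, scan the disk at a lower level for a second listing, and flag anything the high-level view is hiding. The XCP cloak concealed any file whose name began with "$sys$". Here is a minimal sketch of the comparison step only; the two listings below are hypothetical stand-ins for the two views, since the low-level disk scan itself is far beyond a blog-post sketch:

```python
# Cross-view diff sketch: compare what the (possibly hooked) API reports
# against what a raw low-level scan finds. Files visible only to the raw
# scan are being actively hidden -- the signature of a rootkit cloak.
# The listings are hypothetical; "$sys$" was the real XCP hiding prefix.

def hidden_files(api_view, raw_view):
    """Files present on disk but invisible to the high-level API."""
    return sorted(set(raw_view) - set(api_view))

api_listing = ["notepad.exe", "report.doc"]                      # what Explorer shows
raw_listing = ["notepad.exe", "report.doc", "$sys$crater.sys"]   # what's really there

print(hidden_files(api_listing, raw_listing))   # ['$sys$crater.sys']
```

Note the irony the article goes on to describe: once the cloak exists, any malware that names its files to match the hidden pattern inherits Sony's invisibility for free.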
Rootkits have been widely available online, sold or given away by hackers to anyone who wants them -- typically for malicious reasons. But this is the first time I've heard of one installed covertly merely by inserting a commercial CD into your computer to play it. While Sony may only intend to protect itself from illegal copying of its CDs, the ability it creates to take over your PC's operating system remains available for any other virus or spyware that is clever enough to use filenames similar to those used by Sony.
The real question for me is -- why didn't it occur to Sony in the first place that there was something fundamentally wrong and dishonest about a corporation secretly tricking customers into handing over the reins of their PCs? And even if you think all big corporations are venal, then what possessed them to think that they would get away with it? Did they believe that their customers were all so stupid, that none of them would ever figure out that their Sony CDs were hacking into their PCs?
To paraphrase a Dierks Bentley song that was hot a few weeks ago, "I know what you were feeling, but what were you thinking?"
Internet security companies (Microsoft, McAfee, Symantec, etc.) are of course already working on rootkit removers; but it's difficult and dangerous, since even removing a rootkit like Sony's can damage your computer's operating system. This is yet another unacceptable element of Sony's dreadful error: they can damage the operating systems of the very customers they rely upon to keep them in business. How many lawsuits will be filed against Sony, I wonder? Not just from outraged individual customers, states, and the federal government, but also lawsuits filed by the artists whose CDs were issued with this insane copy-protection scheme -- and which surely now will be boycotted or rejected out of fear, perhaps even after the bad CDs are withdrawn and replaced by CDs that aren't infected.
If I were advising Celine Dion or Van Zant, I would urge them to break their contract with Sony (on "failure to disclose" grounds) and take out advertising everywhere saying not to buy the Sony version, but only the version by [fill in the blank], which is not infected with a rootkit "virus inviter." And I would advise them to demand that Sony pay for the adverts, pay the transition cost, and would insist that any future contract with Sony include a specific clause banning any type of copy-protection software that met the broadest definition of a rootkit.
Or else just sign with somebody else instead.
November 17, 2005
For Cranky Supporters of Technology
I love this AP story:
TUNIS, Tunisia (AP) - A cheap laptop boasting wireless network access and a hand-crank to provide electricity is expected to start shipping in February or March to help extend technology to school-aged children worldwide. [Emphasis added]
Here's a picture:
Sure, great for kids; but this same model could extend web connectivity to poor people in countries all over the world. Rather than having to string electrical wiring to every household -- nice but impractical in some areas -- you only have to keep power to a series of wireless relay antennas, a much easier task. This is the same idea as those hand-cranked radios survivalists are always touting (except you do need a wireless network).
For those worried about terrorists, I think we can just assume they have already rigged up generators and satellite uplinks on their own; this is for those honest folks who don't have hundreds of thousands of petrodollars to fund nefarious activities.
(Say, is it just me, or does it look as if that crank isn't going to clear the tabletop as it rotates around?)
Next step: the WiFi-access ready Etch-a-Sketch!
November 16, 2005
In breaking news, at the pre-meeting of the U.N. World Summit on the Information Society (WSIS), AP reports that the final negotiated result anent control of the internet domain name servers -- basically big look-up tables that match internet domain names (like "biglizards.net") to specific internet addresses -- is that the United States will remain firmly in control; and that the U.S. will continue to allow the private Internet Corporation for Assigned Names and Numbers (ICANN) to handle it all.
In other words, we won.
The negotiations overshadowed the ostensible purpose of the WSIS, which was supposed to be about the information gap:
The summit was originally conceived to address the digital divide - the gap between information haves and have-nots - by raising both consciousness and funds for projects.
Instead, it has centered largely around Internet governance: oversight of the main computers that control traffic on the Internet by acting as its master directories so Web browsers and e-mail programs can find other computers.
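The "master directory" role described in that excerpt is, at heart, just a lookup table from names to numeric addresses -- which is exactly why controlling it matters politically. A toy sketch (the addresses below are drawn from the reserved documentation ranges, not real hosts):

```python
# Toy model of the root name servers' directory role: a table mapping
# domain names to numeric addresses. Real DNS is distributed and
# hierarchical; the addresses here are reserved documentation addresses.

root_directory = {
    "biglizards.net": "203.0.113.7",   # made-up address (TEST-NET-3 range)
    "example.com": "192.0.2.1",        # made-up address (TEST-NET-1 range)
}

def resolve(domain):
    """Return the address for a domain, as a browser must before connecting."""
    try:
        return root_directory[domain]
    except KeyError:
        return None   # whoever controls the table decides what resolves at all

print(resolve("biglizards.net"))    # 203.0.113.7
print(resolve("censored.example"))  # None
```

That last line is the political stakes in a nutshell: whoever maintains the table can, in principle, decide which names exist -- which is why both the internationalists and the Chinese wanted their hands on it.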
A number of countries in Europe, Africa, and Asia hijacked the pre-meeting (the actual WSIS meeting starts today) to demand that governance of the internet be turned over to a responsible international body. Not being able to find one, they demanded it be turned over to the United Nations, instead.
But in the face of American stubbornness, the internationalists caved. "Facts are stubborn things," Adams said; and the primary fact in this case is that the internet works. In fact, it works too well for some: China, for example, desperately wants to cripple the internet in their country so that anti-Communist forces cannot easily communicate with each other, and ordinary citizens cannot access web sites that show what freedom would bring. Hence, they wanted control placed into hands that they could manipulate, like puppets on a string.
They didn't get what they wanted. The only bone we threw them was the creation of a new intergovernmental group that would only have the authority to make recommendations:
Under the terms of the compromise, the new group, the Internet Governance Forum, would start operating next year with its first meeting opened by Annan. Beyond bringing its stakeholders to the table to discuss the issues affecting the Internet, and its use, it won't have ultimate authority.
John Bolton was not involved in this negotiation, but he may as well have been. The actual American negotiator appears to have been U.S. Assistant Secretary of Commerce Michael Gallagher; three cheers for a great American who is an expert at the rare art of just saying no!
November 11, 2005
The Wishing Ring, part 3
By this, the final segment of the Wishing Ring, you're either desperate to know what the heck I mean by "foodless food" -- or else you're so overwhelmed by ennui that you're gnawing your own leg off to escape.
On the assumption that those of you in the latter category will have other things to worry about (such as sudden, catastrophic blood loss), I'll dive right into this last wish of our iconic three.
Throughout most of human history and across most of the world even today, the poor are marked by their thin, gaunt, even skeletal look. They starve. That's the simple fact. Look at the Democratic People's Republic of Korea, if it's not too painful.
But in the civilized corner of the world, and especially in the United States, it's the rich and famous who look like scarecrows. The poor are positively roly-poly; many are actually obese.
This is because only the rich make enough money to afford very tiny portions of food, as Horace Rumpole once put it (courtesy of his author, John Mortimer). Low-fat, low-cal food costs big bucks, as do exercise regimes and personal trainers (and in the case of some stars, hand slappers: they reach for the pork rinds, they get their hands slapped). In fact, America's biggest health problem (sorry) may be obesity, though some medical researchers are backing off of that a bit recently.
The "problem" is that capitalist countries produce so much wealth, including food, that even the poorest can eat three full meals a day... provided he isn't too picky about the amount of fat, sugar, and carbohydrates he consumes. Even begging, let alone a job, generates plenty of income to gorge at fast-food joints three or four times a day. The result, of course, is not pretty; see the tendentious film Super Size Me. But the problem is not limited to the poor: the vast bulk of the middle class is more worried about losing weight than about getting enough to eat.
When diet, exercise, and willpower fail, we can always rely upon technology. Instead of denying ourselves the foods we love... let's imagine a world where we can eat, eat, eat, morning, noon, and night, yet never gain an ounce.
It's not only possible, we already have most of the tools to create exactly that world: foodless food.
What I mean by the weird phrase is food that is engineered to have a precise mix of protein, carbs, fat, and sugar. A stomach-stuffing, belly-busting feast that nevertheless contains no more calories than a grilled chicken salad and a side of cottage cheese.
It turns out, oddly enough, that digestion has a lot to do with chemistry. There is a whole science about it: food chemistry. Our bodies are set up to digest certain types of food, and our taste buds can only detect certain types of flavors (sweet, sour, bitter, salty, and something I'd never heard of before reading this web page: umami).
But nothing says that the two processes must operate in synch. Food can taste sweet without providing a single calorie via digestion; I call this sugarless sugar, and it's sold under many names: NutraSweet (aspartame), Sweet'N Low (saccharin), Sugar Twin (cyclamates), and Splenda (sucralose), depending on what you want to do with it -- add it to cooked food or cook with it, for example. By mixing any of these artificial sweeteners with real sugar (sucrose), you can create any degree of sweetness with any level of actual sugar.
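The blending claim is just linear arithmetic, and a toy sketch makes it concrete. The sweetness-equivalence factors below are rough ballpark figures from the food-chemistry literature (aspartame roughly 200 times as sweet as sucrose by weight, sucralose roughly 600 times), not product specifications:

```python
# A toy sketch of the sweetness-blending arithmetic described above.
# Equivalence factors (times as sweet as sucrose, by weight) are rough
# illustrative values, not official product numbers.
SWEETNESS = {
    "sucrose": 1.0,
    "aspartame": 200.0,   # NutraSweet
    "saccharin": 300.0,   # Sweet'N Low
    "sucralose": 600.0,   # Splenda
}

def blend(target_sweetness_g, real_sugar_g, sweetener):
    """Grams of artificial sweetener needed so that real_sugar_g of
    sucrose plus the sweetener taste like target_sweetness_g of sucrose."""
    if real_sugar_g > target_sweetness_g:
        raise ValueError("already sweeter than the target")
    deficit = target_sweetness_g - real_sugar_g
    return deficit / SWEETNESS[sweetener]

# A recipe calling for 100 g of sugar, made with only 10 g of real sugar:
print(round(blend(100, 10, "sucralose"), 3))  # 0.15 g of sucralose
```

So "any degree of sweetness with any level of actual sugar" really is just solving one linear equation per recipe.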
The first artificial fat sold openly in consumer goods was Olean (olestra); others will likely follow. Olestra, developed by Procter & Gamble in 1968 but only marketed recently, is an artificial substance called sucrose polyester, "a synthetic mixture of sugar and vegetable oil, which passes through the human digestive system without being absorbed." In other words, it tastes like fat but cannot be absorbed by the body, making it zero-calorie. It's not perfect; some people experience various digestive problems when they eat it. But the solution to this problem is easy: if you eat olestra and get diarrhea, and if this bothers you... then don't eat it!
I know that a number of food chemists are feverishly at work trying to develop an artificial carbohydrate for the millions on variations of the Atkins diet. I expect there will be breakthroughs there, as there always are when real money is at stake. And I fully expect artificial protein within the near future... protein that tastes like meat but passes right through. The only remaining problem then will be assembling all these parts into food that tastes authentic, but is in fact ersatz.
The long and the short is that eventually, probably sooner than we expect, we will have very good artificial substitutes for virtually every type of food taste and texture that exists... which means that any recipe could be made full calorie, zero calorie, or any value in between. You can dial your own nutritional prescription.
In other words, foodless food.
You could have pancakes, bacon, syrup, and hot chocolate for breakfast; a BLT with extra mayo and a side of Freedom Fries™ for lunch; and prime rib, mashed potatoes, split-pea soup, apple cobbler, and a grande mocha-vanilla caramel macchiato cappuccino au lait with double-shots of chocolate and fudge for dinner, and have the whole day’s feast clock in at only 1200 calories, comprising 150g of carbs, 33g of fat, and 75g of protein.
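The day's-feast numbers above check out against the standard Atwater energy factors (4 kcal per gram of carbohydrate or protein, 9 kcal per gram of fat):

```python
# Checking the all-day-feast arithmetic with the standard Atwater
# factors: 4 kcal/g for carbohydrate and protein, 9 kcal/g for fat.
CALS_PER_GRAM = {"carbs": 4, "fat": 9, "protein": 4}

def calories(grams):
    return sum(CALS_PER_GRAM[k] * g for k, g in grams.items())

day = {"carbs": 150, "fat": 33, "protein": 75}
print(calories(day))  # 1197 -- right at the quoted ~1200 calories
```

That's the whole trick of foodless food in one line of arithmetic: dial the grams, and the calories follow.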
"OK, ab Hugh, it would be nice to lose weight without sacrifice. But how is that 'revolutionary?'"
Detour time: what is an economy anyway? Forget all that stuff you learned in Econ 101... does anybody actually offer a class called Econ 101? Any economic system is a method of allocating resources -- natural resources, goods, and services -- among the members of the community associated with that economic system: how do you divide up the apples?
Unless you believe in the Great Wealth Tree, these resources are both limited and unevenly distributed among the population. An economic system distributes them more evenly by allowing a person with too much of resource X to give it to another person who hasn't enough.
It doesn't matter to this definition whether the transfer is voluntary, in exchange for some other resource Y (money, for example), or is involuntary according to some Socialist diktat: the point is that scarce resources are distributed among the population by the rules of the economic system. In other words, economics is a sophisticated system for resource triage.
Medical triage recognizes the scarcity of medical resources in some circumstances (MASH units in combat areas, for example) and allocates that care among patients by various rules. An economic system does the same to allocate a wider set of resources among a larger population over the long term. But both forms of triage depend upon one irreducible fact: that the resources in question are limited. If they are unlimited and unbounded, then there is no reason to allocate them: everybody uses what he wants in a kind of Kropotkian anarchy.
Back to foodless food (I'll bet you thought that, like Grandpa telling a story, I had forgotten where I started). Sugarless sugar and fatless fat are the first baby steps in what I will call designer food: food specifically designed and created for a particular person, using a profile he himself has designed (in consultation with medical knowledge) for his particular needs. It will necessarily force consumers to get over their irrational fears of genetically modified food: greed and vanity are two of the most powerful and positive drives in the human psyche; and in the end, I'm sure they'll overcome our natural desire to live like a bunch of grim and grisly Puritans, depriving ourselves of such frivolities as "pleasure."
Eventually, we will be forced by greed and vanity to drop the idea that there is something sacred about comestibles; we'll start treating them as any other product, to be fiddled with and altered at will, subject only to the laws of product safety that govern goods such as minivans and semiautomatic pistols. With this religious prohibition against genetic food gone (it will die hard), normal market forces will create food that is better, healthier, and cheaper... and eventually, food will become so cheap that anyone will be able to afford the best-tasting and healthiest food in any quantity, designed personally for him. Food, even gourmet food, will no longer be a scarcity.
Therefore, we will no longer need triage to "distribute" food. The effect of this will be electrifying in itself: since (supra) the economy affects only those things that are scarce -- there is no fee for breathing air -- when food is no longer scarce in any sense of the word, then food will, by definition, no longer be a part of the economy.
Put it this way: why would you pay gourmet prices to get an incredible meal at a restaurant when you can get an equally incredible meal, just as much to your taste and just as healthy, but at a fraction the cost, through Amazon.com? And why pay even Amazon if your home cooking computer can create the same food for you for free?
Foodless food may be the first step of what some thinkers, capitalist and Marxian alike, call the "post-economic society" (science-fiction writer John Barnes falls in the latter camp and is responsible for first explaining this concept to me a number of years ago; hat tip to John). A post-economic society (PES) is one in which all the necessities of life and even many of the luxuries become, due to technological advance, so cheap and plentiful that they literally are no longer a part of the economy, as above. A PES can be both completely capitalist and fully socialist at the same time: one definition of socialism is the belief that it's the government's responsibility to supply all the necessities of life to all citizens; but if all the necessities of life can be made available to all the citizens at a tax cost of $5/year per person total, then taxes would in essence be zero... and you would still be free to engage in capitalist activity with no tax drag on the economy.
You have to understand that government control is measured not so much by its scope as its extent to each person: technically, it's "government control of the press" if Congress were to require, as its only requirement, that every publication in the United States include a little smiley-face on page 4. The scope of this silly example is universal. But the extent of the control is so trivial that it's a joke; only the most theoretical purist would say such a tiny requirement damaged freedom of the press.
So a government can be fully socialist -- every citizen is entitled to all the necessities of life for free; but if the cost is so trivial that you do not even notice it, then for all intents and purposes, the cost is nonexistent... and you have pure capitalism and pure socialism existing happily together in the same PES.
But technology will not stop with designer food; it will proceed apace. Eventually, technology will swallow up every kind of scarcity -- except artificial scarcity: novels by Dafydd ab Hugh are scarce simply because I'm the only one who can produce them, not because novels themselves are in short supply (would that they were! then my own books would sell better). As more and more scarcities vanish, our idea of what constitutes a "necessity" will expand -- why shouldn't it? -- until the only thing in a PES that is not free for the taking is a service that one person performs for another. And even those can typically be done by machines; there is no reason a machine cannot learn to cook, to practice medicine, and to try legal cases.
In the final stage of a total PES, the only commodity for sale will be status: your status will be enhanced if you have a human butler instead of a buttle-bot. If you employ a human chef while all your friends just have chef-o-matics, they will envy you. It is irrelevant whether your butler and chef are any better than machines; they can even be worse! The status is that you have them at all.
Which means that anyone who can do something idiosyncratic (a painter, writer, composer, aide de camp, major domo, prostitute, performer, or other personalized service) will make "millions" of whatever money is used. But the only thing he can spend it on is more idiosyncratic, personalized service from someone else. The butler will have a juggler on retainer. The prostitute will have a personal secretary!
And that will be the greatest revolution of all: every concept of law, economy, war and territory, national sovereignty, and social control will crumble... and there is no way of predicting what type of society will spawn in its place. Nobody knows what a PES looks like, because none has ever existed on this planet. But surely it will be utterly unlike any society that has gone before... truly the "end of history," at least as we know it.
And all for the want of a cheesecake calorie!
In this absurdly long and drawn-out series, the Wishing Ring, an agony in three fits, I have pointed out three fast-approaching inventions, each of which has the capability of changing our entire universe: e-paper, high-temperature ceramic engines, and the mother of all inventions, foodless food. Besides the obvious reason for the series -- to waste time dreaming about the future when I should be building my own web page -- there is a higher calling, which I call ab Hugh's First Law of Prediction:
Any speculation about the future of society that does not take into account the unstoppable advance of technology is not worth the paper it's printed on.
(And ab Hugh's Internet Corollary: any such online speculation that ignores technology is not worth the paper it's not printed on.)
So the next time you hear some idiot talk about what Social Security will be doing forty years from now, whether he works for CBS or the Bush administration, ask yourself whether the technological advances in the next four decades will render the whole discussion moot.
Then kick back, have a beer, and blog, ranger, blog!
The Wishing Ring, part 1
Here are three upcoming inventions that you probably haven't thought much about, but which will revolutionize the world. I will divide this post into three parts, to give the illusion that I have more to say about it than my feeble imagination can actually dig up.
Assuming you are over the age of twenty-five and know what a book or magazine is, open one up. Take a look at it. Very different from a computer screen, eh? You can lie on the couch or on the floor and still read it. You can even read it in the bathtub without electrocuting yourself (unless it's the Neve Campbell issue of Maxim). You can take it with you to the beach or the mountains, read it in direct sunlight or by flashlight on a camping trip. You can look at the centerfold under the covers when your Mom thinks you're asleep, which is how most of us got our first glimpse of Byte Magazine.
Now imagine a book or magazine that looks exactly like print -- but whose software-driven words and pictures morph on the paper like a webpage. That, my slavish devotees and soon-to-be competitors, is e-paper, also sometimes called smart paper, though one company seems to have trademarked that phrase.
In one version under development by Gyricon, Inc., a spinoff from Xerox's famous Palo Alto Research Center (PARC), the "paper" actually comprises well over a hundred million tiny balls, like pixels... say, as many as 1250 to the linear inch, the typical density of professionally printed magazines today. These spheres are contained between two sheets of clear polymer by a sticky fluid, allowing them to twist and spin freely (much like Bill Frist's political spine).
That would make almost a hundred and fifty million on an 8½ × 11-inch sheet. In the simplest case, these spheres are black and negatively charged on one side, white and positive on the other. Like registers in a computer, tiny currents running alongside the spheres can flip any particular one to be either black side up (a black dot at that position) or white side up (a white dot). Flipping the right sequence of balls creates words, line drawings, even gray tones. Anything that a high-quality laser printer can print can appear instantaneously on a page of e-paper, only to be replaced by the next page whenever the reader clicks the page-turn button.
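The sphere count is easy to verify from the stated density:

```python
# The sphere-count arithmetic for a Gyricon-style sheet: 1250 spheres
# per linear inch on a standard 8.5 x 11 inch page.
SPHERES_PER_INCH = 1250
width_in, height_in = 8.5, 11.0

spheres = (width_in * SPHERES_PER_INCH) * (height_in * SPHERES_PER_INCH)
print(f"{spheres:,.0f}")  # 146,093,750 -- "almost a hundred and fifty million"
```

Every one of those hundred-and-forty-six million spheres is an individually addressable pixel, which is why the page can repaint itself like a screen.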
The albedo (reflection) would be identical to ink on paper, meaning you could view it in direct sunlight or under a reading lamp; it would not be backlit. The smart book would probably include its own book-lamp, so reading in the dark would be just as easy as in the daylight.
A more complex version would use spheres with red, green, and blue sectors, in addition to adjacent spheres with black and white. This would operate like a color television screen or monitor, giving you full color illustrations.
Other versions of e-paper include products under development by E Ink, where extremely tiny black spheres and white spheres float together in a viscous medium. Please don't start singing "Ebony and Ivory," or I shall do you a violence. All these black and white spheres (and the fluid) are contained within a larger sphere (about the diameter of a human hair).
The black spheres have a negative static charge, the white are positive. By creating a static charge on the bottom of the container, either the black or the white spheres can be sent to the top, where they become visible, giving you either a black dot or a white dot. Add them up, and you have a "printed" page. Distinct hotspots on the bottom of the hair-sized container with distinct static charges can send a mix of black and white spheres to the top, giving you a grayscale.
Finally, there is the possibility of crystal "pixels" that can simply change color in response to tiny electrical currents.
How would this change the universe? You must understand that the huge majority of readers cannot read lengthy books or entire magazines on monitors... or at least, we do not enjoy doing so. Those who get much of their news from online sources sometimes have a hard time grasping how many people are locked out of instant, online publication simply because they can't or won't read on a computer monitor. But with e-paper, "books" would be reduced to mere software, yet would be just as readable as the printed page. Online would cease to mean "on a CRT screen," and could mean on a "paperback book" in your pocket, with the same flexibility and internet access as a hand-held web portal. Blackberry soup for the soul.
We ordinary readers could carry hundreds of books with us wherever we went. If we needed a book we didn't have, it would be a download away.
But more to the revolutionary point, e-paper -- which is coming sooner than you might think -- will end up blogifying the mainstream print media. Today, if you want to publish a book or magazine in any quantity, you have to scrape together $20,000 or more. Various "instant press" companies can print single copies at a time; but their unit cost is much higher than printing in quantity, which cuts into your profits as an author.
Therefore, authors have to submit proposals or manuscripts to editors at big publishing houses in New York. These editors have tremendous power to determine what does and does not get published; before a reader can read a book, an editor (usually a New York leftist) has to buy it first. The few publishers that will handle conservative or libertarian books (notably Regnery Publishing) get so many submissions from authors locked out of the mainstream press that they cannot possibly publish them all... or even the tiny fraction of authors who are worth more than $1.29 -- clothes, pocket change, blood chemicals, and all. And you know they're just going to love me for putting their URL up here!
But when anyone who can use text-editing and page-layout software can "publish" a book or magazine (by selling downloads) that looks just as professional as those from Warner Books or Time Life Publications, the distinction between a professional e-paper magazine and an e-paper magazine from the pajamahedin will boil down to editing and advertising. This will break the back of the New York literary mafia, the gatekeepers to literature and nonfiction for the masses. Reviewers will become the new elite; if you know you like the type of books that I like, then if I recommend some e-paper book in a review, you'll likely buy a download... especially since the cost will be tremendously less than buying a hardback from Amazon.
The author makes much more money per book because he owns it; rather than getting a mere 10% royalty on each copy sold, as he gets today (if he's lucky), his profit would be income minus expenses; books could be sold for half of today's prices and still net the author five times what he makes per book today. Which is another way of saying that an author can make the same profit from a book by selling only 20% of what he would sell through a big publisher. Ordinary people, who don't have multi-million dollar advertising budgets and distribution to thousands of bookstores, can still sell enough books to live on writing income alone.
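The royalty arithmetic above is worth spelling out. The $25 cover price below is an illustrative number of my own, and the marginal cost of a download is assumed to be approximately zero:

```python
# The royalty arithmetic behind the claim above: a 10% royalty on a
# full-price book vs. keeping nearly the whole price of a half-price
# self-published download (marginal cost per download assumed ~zero).
cover_price = 25.00            # illustrative hardback price, my assumption
royalty = 0.10 * cover_price   # traditional deal: $2.50 per copy

download_price = cover_price / 2
self_pub_profit = download_price  # author keeps income minus ~zero expenses

print(self_pub_profit / royalty)  # 5.0 -- five times the per-book take
print(royalty / self_pub_profit)  # 0.2 -- break even at 20% of the sales
```

Half the price to the reader, five times the profit to the author, break-even at a fifth of the sales: that's the whole economic case against the gatekeepers in three lines.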
E-paper is to books and magazines what blogging is to online publication... except that e-paper will reach orders of magnitude more readers.
Next invention from the ring of three wishes: High-Temperature Ceramic Engines.
September 17, 2005
The Big Green Cheese
Today seems to be my day for interesting Fox News articles. Here's another:
NASA: Astronauts on Moon by 2018
Friday, September 16, 2005
CAPE CANAVERAL, Fla. — NASA hopes to return astronauts to the moon by 2018, nearly a half-century after men last walked the lunar surface, by using a distinctly retro combination of space shuttle and Apollo rocket parts....
The fact that this successor to the soon-to-be-retired shuttle relies so heavily on old-time equipment, rather than sporting fancy futuristic designs, "makes good technological and management sense," said John Logsdon, director of George Washington University's space policy institute.
"The emphasis is on achieving goals rather than elegance," said Logsdon, who along with other members of the Columbia Accident Investigation Board (search) urged NASA to move beyond the risky, aging shuttles as soon as possible.
Is it just me, or... or does anyone else find it sardonically amusing that it's going to take us nearly twice as long to return to the Moon as it took us to land on the Moon the first time? I could understand it if we were developing "advanced, unproven technology," such as fusion rockets or laser-launching technologies (which I wrote about in "Nerfworld," the lead story, after William F. Buckley's pastiche, in the anthology edited by Brad Linaweaver, Free Space). But that's precisely what Logsdon says we're not doing! We're essentially just cannibalizing the STS (the shuttle) and grabbing some off-the-shelf technology.
I mean, I've been a cheerleader for space since the 1960s (and I couldn't very well have done it before then, since I didn't have a womb with a view). And I'm all for returning to the Moon before venturing on to Mars (the closest planet, not counting the Moon) or beyond. But thirteen years? From now? Yeesh!
The Moon is essential for many reasons. First, it is of course a nearly perfect base of operations for all future space expeditions. True, it has a gravity well; but it's nowhere near as steep as the Earth's; and the gravity is more than made up for by the extraordinary wealth of raw materials available on the Moon for virtually no production cost -- once you get there. Lunar dust is made up of such useful components that it may as well be designed by some cosmic materials-science engineer for the sole purpose of building spaceships. With great big concave mirrors in orbit around the Moon, we would have all the energy we needed to smelt the lunar dust and build a ship in situ, never having to launch it off the Earth at all. It even has water ice, from which we can extract oxygen for breathing and hydrogen for fuel.
Second, the Moon can be an excellent military base, able to bombard virtually any location in the inhabited portion of the Earth at will, using the simplest of all missiles: rocks. Robert A. Heinlein wrote the book (literally) about this idea -- The Moon Is a Harsh Mistress; and a stunning book it is, too, perhaps Heinlein's best. But pssssst! L. Ron Hubbard, of all people, later the founder of Scientology (but only a pulp writer back then), had the idea first, so far as I know, in a 1948 or 1949 pamphlet on using the Moon as the ultimate "high ground" for war.
Finally, the Moon is a great place for all sorts of industrial operations that produce toxic or hazardous waste, or are themselves inherently dangerous, or are just plain polluting and ugly... anything that doesn't explicitly require an atmosphere (or zero-G) can be done on the Moon, and the finished products shipped "down" to Earth. You don't have to worry about disrupting the fragile lunar ecology, because it hasn't got one. Fragile or otherwise.
So return we must. And surely we can: as Jerry Pournelle is overfond of remarking, "what Man has done, Man can aspire to do." I don't want a "crash" program, pun very much intended and intended seriously; but I think we can do better than this.
We're Americans, for God's sake.
© 2005-2009 by Dafydd ab Hugh - All Rights Reserved