Molecular Assemblers (Advanced Nanotech)

Book review: Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization, by K. Eric Drexler.

Radical Abundance is more cautious than Drexler’s prior books, and is aimed at a very nontechnical audience. It accurately describes many likely ways in which technology will create orders of magnitude more material wealth.

Much of it repackages old ideas, and it focuses too much on the history of nanotechnology.

He defines the subject of the book as atomically precise manufacturing (APM), and he doesn’t consider nanobots to be very relevant to it.

One new idea that I liked is that rare elements will become unimportant to manufacturing. In particular, solar collectors can be made entirely out of relatively common elements (unlike current photovoltaics). Alas, he doesn’t provide enough detail for me to figure out how confident I should be about that.

He predicts that progress toward APM will accelerate someday, but doesn’t provide convincing arguments. I don’t recall him pointing out the likelihood that investment in APM companies will increase dramatically when VCs realize that a few years of effort will produce commercial products.

He doesn’t do a good job of documenting his claims about how far APM has advanced. I’m pretty sure that the million-atom DNA scaffolds he mentions have as much programmable complexity as he hints, but if I relied on this book alone, I’d suspect that those structures were simpler and filled with redundancy.

He wants us to believe that APM will largely eliminate pollution, and that waste heat will “have little adverse impact”. I’m disappointed that he doesn’t quantify the global impact of increasing waste heat. Why does he seem to disagree with Rob Freitas about this?

Rob Freitas has a good report analyzing how to use molecular nanotechnology to return atmospheric CO2 levels to pre-industrial levels by about 2060 or 2070.

My only complaint is that his attempt to estimate the equivalent of Moore’s Law for photovoltaics looks too optimistic, as it puts too much weight on the 2006-2008 trend, which was influenced by an abnormal rise in energy prices. If the y-axis on that graph were logarithmic instead of linear, it would be easier to visualize the lower long-term trend.
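To illustrate the graphing point, here is a minimal Python sketch (using made-up numbers, not the actual photovoltaics data) of how a short-lived spike dominates a linear plot while a logarithmic axis keeps the long-term exponential trend visible:

```python
# Hypothetical data (not the real photovoltaics figures): a steady
# exponential growth trend plus an abnormal 2006-2008 bump.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1990, 2011)
trend = 100 * 1.2 ** (years - 1990)            # steady ~20%/year growth
bump = np.where((years >= 2006) & (years <= 2008), 2.0, 1.0)
capacity = trend * bump                        # spike from high energy prices

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))
ax_lin.plot(years, capacity)
ax_lin.set_title("Linear axis: the spike dominates")
ax_log.semilogy(years, capacity)
ax_log.set_title("Log axis: the long-term trend stays visible")
for ax in (ax_lin, ax_log):
    ax.set_xlabel("Year")
    ax.set_ylabel("Capacity (arbitrary units)")
plt.tight_layout()
plt.show()
```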

(HT Brian Wang).

The Global Catastrophic Risks conference last Friday was a mix of good and bad talks.
By far the most provocative was Josh’s talk about “the Weather Machine”. This would consist of small (under 1 cm) balloons made of material a few atoms thick (i.e., requiring nanotechnology that won’t be available for a couple of decades), filled with hydrogen, and having a mirror in the equatorial plane. They would have enough communications and orientation control to be individually pointed wherever the entity in charge of them wants. They would float 20 miles above the earth’s surface and form a nearly continuous layer surrounding the planet.
This machine would have a few orders of magnitude more power over atmospheric temperatures than would be needed to compensate for the warming caused by greenhouse gasses this century, although it would only be a partial solution to the waste heat farther in the future that Freitas worries about in his discussion of the global hypsithermal limit.
The military implications make me hope it won’t be possible to make it as powerful as Josh claims. If 10 percent of the mirrors targeted one location, it would be difficult for anyone in the target area to survive. I suspect defensive mirrors would be of some use, but there would still be serious heating of the atmosphere near the mirrors. Josh claims that it could be designed with a dead-man switch that would cause a snowball-earth effect if the entity in charge were destroyed, but it’s not obvious why the balloons couldn’t be destroyed in that scenario. Later in the weekend Chris Hibbert raised concerns about how secure it would be against unauthorized people hacking into it, and I wasn’t reassured by Josh’s answer.
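For a sense of scale, here is my own back-of-envelope arithmetic (round numbers and crude geometry, not figures from Josh’s talk) on what 10 percent of such a mirror layer might deliver to one region:

```python
# Back-of-envelope estimate (my own rough assumptions, not Josh's numbers):
# sunlight redirected at one target by 10% of a planet-wide mirror shell.
# Ignores optics, diffraction, and atmospheric losses; order of magnitude only.
import math

SOLAR_CONSTANT = 1361.0    # W/m^2 at the top of the atmosphere
EARTH_RADIUS = 6.371e6     # m
ALTITUDE = 32e3            # m, roughly 20 miles

shell_radius = EARTH_RADIUS + ALTITUDE
# The sunlit hemisphere intercepts sunlight over a disk of the shell's radius.
intercepted_power = SOLAR_CONSTANT * math.pi * shell_radius**2   # ~1.8e17 W

fraction_targeted = 0.10
target_area = 1e10         # m^2: a 100 km x 100 km region (my arbitrary choice)

flux = fraction_targeted * intercepted_power / target_area
print(f"Flux on target: {flux:.1e} W/m^2 "
      f"(~{flux / SOLAR_CONSTANT:.0f}x normal sunlight)")
```

Roughly a thousand times normal sunlight over an area the size of a small country, which is why survival in the target area looks hopeless.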

James Hughes gave a talk advocating world government. I was disappointed with his inability to imagine that such a government could result in power becoming too centralized. Nick Bostrom’s discussions of this subject are much more thoughtful.

Alan Goldstein gave a talk about the A-Prize and about defining a concept called the carbon barrier to distinguish biological from non-biological life. Josh pointed out that, as stated, all life fits Goldstein’s definition of biological (since any information can be encoded in DNA). Goldstein modified his definition to avoid that, and then other people mentioned reports such as this which imply that humans don’t fall within Goldstein’s definition of biological, due to inheritance of information through means other than DNA. Goldstein seemed unable to understand that objection.

Book review: Global Catastrophic Risks by Nick Bostrom and Milan Cirkovic.
This is a relatively comprehensive collection of thoughtful essays about the risks of a major catastrophe (mainly those that would kill a billion or more people).
Probably the most important chapter is the one on risks associated with AI, since few people attempting to create an AI seem to understand the possibilities it describes. It makes some implausible claims about the speed with which an AI could take over the world, but the argument they are used to support only requires that a first-mover advantage be important, and that is only weakly dependent on assumptions about the speed with which AI will improve.
The risk of a large fraction of humanity being killed by a super-volcano is apparently higher than the risk from asteroids, but volcanoes have more of a limit on their maximum size, so they appear to pose less risk of human extinction.
The risks of asteroids and comets can’t be handled as well as I thought by early detection, because some dark comets can’t be detected with current technology until it’s way too late. It seems we ought to start thinking about better detection systems, which would probably require large improvements in the cost-effectiveness of space-based telescopes or other sensors.
Many of the volcano and asteroid deaths would be due to crop failures from cold weather. Since mid-ocean temperatures are more stable than land temperatures, ocean-based aquaculture would help mitigate this risk.
The climate change chapter seems much more objective and credible than what I’ve previously read on the subject, but is technical enough that it won’t be widely read, and it won’t satisfy anyone who is looking for arguments to justify their favorite policy. The best part is a list of possible instabilities which appear unlikely but which aren’t understood well enough to evaluate with any confidence.
The chapter on plagues mentions one surprising risk – better sanitation made polio more dangerous by altering the age at which it infected people. If I’d written the chapter, I’d have mentioned Ewald’s analysis of how human behavior influences the evolution of strains which are more or less virulent.
There’s good news about nuclear proliferation which has been under-reported – a fair number of countries have abandoned nuclear weapons programs, and a few have given up nuclear weapons. So if there’s any trend, it’s toward fewer countries trying to build them, and a stable number of countries possessing them. The bad news is we don’t know whether nanotechnology will change that by drastically reducing the effort needed to build them.
The chapter on totalitarianism discusses some uncomfortable tradeoffs between the benefits of some sort of world government and the harm that such government might cause. One interesting claim:

totalitarian regimes are less likely to foresee disasters, but are in some ways better-equipped to deal with disasters that they take seriously.

Molecular nanotechnology is likely to be heavily regulated when it first reaches the stage where it can make a wide variety of products without requiring unusual expertise and laboratories. The main justification for the regulation will be the risk of dangerous products (e.g. weapons). That justification will provide cover for people who profit from existing manufacturing techniques to use the regulation to prevent typical manufacturing from becoming as cheap as software.
One way to minimize the harm from such special-interest regulation would be to create an industry now that will have incentives to lobby in favor of making most benefits of cheap manufacturing available to the public. I have in mind a variation on a company like Kinko’s that uses ideas from the book Fab and the rapid prototyping industry to provide general-purpose 3-D copying and printing services in stores that could be as widespread as photocopying/printing stores. It would then be a modest, natural, and not overly scary step for these stores to start using molecular assemblers to perform services similar to what they’re already doing.
The custom fabrication services of TAP Plastics sound like they might be a small step in this direction.
One example of a potentially lucrative service that such a store could provide in the not-too-distant future would be cheap custom-fit footwear. Trying to fit a nonstandard foot into one of a small number of standard shoes/boots that a store stocks can be time-consuming and doesn’t always produce satisfying results. Why not replace that process with one that does a 3-D scan of each foot and prints out footwear that fits that specific shape (or at least a liner that customizes the inside of a standard shoe/boot)? Once that process is done for a large volume of footwear, the costs should drop below those of existing footwear, due to reduced inventory costs and reduced time for salespeople to search the inventory multiple times per customer.

I had thought that Rothemund’s DNA origami was enough to make this an unusually good year for advances in molecular nanotechnology, but now there are more advances that look possibly as important.
Ned Seeman’s lab has inserted robotic arms into specific locations in DNA arrays (more here) which look like they ought to be able to become independently controllable (they haven’t yet produced independently controlled arms, but it looks like they’ve done the hardest steps to get to that result).
Erik Winfree’s lab has built logic gates out of DNA (a toy abstraction of how such gates compute is sketched below).
Brian Wang has more info about both reports.
And finally, a recent article in Nature alerted me to a not-so-new discovery of a DNA variant called xDNA, containing an extra benzene ring in one base of each base pair. This provides slightly different shapes that could be added to DNA-based machines, with most of the advantages that DNA has (but presumably not low costs of synthesis).
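Returning to Winfree’s logic-gate result: the sketch below is a deliberately crude abstraction of my own (it ignores the actual chemistry and kinetics) of the basic idea behind DNA strand-displacement logic, in which a gate complex releases its output strand only once all of its required input strands have bound:

```python
# Toy model (my abstraction, not Winfree's actual chemistry) of DNA
# strand-displacement logic: a gate releases its output strand only
# once every one of its required input strands is present in solution.
from dataclasses import dataclass

@dataclass(frozen=True)
class Gate:
    inputs_needed: frozenset  # strands that must all bind to trigger the gate
    output: str               # strand released when the gate fires

def run_circuit(gates, initial_strands):
    """Fire gates repeatedly until no new strands are released."""
    strands = set(initial_strands)
    changed = True
    while changed:
        changed = False
        for gate in gates:
            if gate.inputs_needed <= strands and gate.output not in strands:
                strands.add(gate.output)  # gate fires, releasing its output
                changed = True
    return strands

# An AND gate: "out" appears only if both input strands "a" and "b" do.
and_gate = Gate(frozenset({"a", "b"}), "out")
for inputs in (set(), {"a"}, {"b"}, {"a", "b"}):
    print(sorted(inputs), "->", "out" in run_circuit([and_gate], inputs))
```

Gates like this can be cascaded (one gate’s output serving as another’s input), which is what makes them building blocks for larger DNA circuits.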

I went to an interesting talk Wednesday by the CTO of D-Wave. He indicated that their quantum computing hardware is working well enough that their biggest problems are understanding how to use it and explaining that to potential customers.
This implies that they are much further along than sources unconnected with D-Wave have led me to believe is plausible. D-Wave is being sufficiently secretive that I can’t put too much confidence in what they imply, but the degree of secrecy doesn’t seem unusual, and I don’t see any major reasons to doubt them other than the fact that they’re way ahead of what I gather many experts in the field think is possible. Steve Jurvetson’s investment in D-Wave several years ago is grounds for taking them fairly seriously.
The implications if this is real are concentrated in a few special applications (quantum computing sounds even more special purpose than I had previously realized), but for molecular modelling (and fields that depend on it such as drug discovery) it means some really important changes. Modelling that previously required enormous amounts of cpu power and expertise to produce imperfect approximations will apparently now require little more than the time and expertise needed to program a quantum computer (plus whatever exorbitant fees D-Wave charges).

Paul W.K. Rothemund’s cover article on DNA origami in the March 16 issue of Nature appears to represent an order of magnitude increase in the complexity of objects that can self-assemble to roughly atomic precision (whether it’s really atomic precision depends in part on the purposes you’re using it for – every atom is put in a predictable bond connecting it to neighbors, but there’s enough flexibility in the system that the distances between distant atoms generally aren’t what would be considered atomically precise).
It was interesting watching the delayed reaction in the stock price of Nanoscience Technologies Inc. (symbol NANS), which holds possibly relevant patents. Even though I’m a NANS stockholder, have been following the work in the field carefully, and was masochistic enough to read important parts of the relevant patents produced by Ned Seeman several years ago, I have little confidence in my ability to determine whether the Seeman patents cover Rothemund’s design. (If the patents were worded as broadly as many aggressive patents are these days, the answer would probably be yes, but they’re worded fairly responsibly to cover Seeman’s inventions fairly specifically. It’s clear that Seeman’s inventions at least had an important influence on Rothemund’s design.)
It’s pretty rare for a stock price to take days to start reacting to news, but this was an unusual case. Someone reading the Nature article would judge the probability of the technique being covered by patents owned by a publicly traded company too small to justify a nontrivial search. Hardly anyone was following the company (which I think is a one-person company). I put in bids on the 20th and 21st for some of the stock at prices that were cautious enough not to signal that I was reacting to potentially important news, and picked up a modest number of shares from people who seemed either not to know the news or to think it irrelevant. Then late on the 21st some heavy buying started. Now it looks like there’s massive uncertainty about what the news means.

Book Review: The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil
Kurzweil does a good job of arguing that extrapolating trends such as Moore’s Law works better than most alternative forecasting methods, and he does a good job of describing the implications of those trends. But he is a bit long-winded, and tries to hedge his methodology by pointing to specific research results which he seems to think buttress his conclusions. He neither convinces me that he is good at distinguishing hype from value when analyzing current projects, nor that doing so would help with the longer-term forecasting that constitutes the important aspect of the book.
Given the title, I was slightly surprised that he predicts that AIs will become powerful slightly more gradually than I recall him suggesting previously (which is a good deal more gradual than most Singularitarians expect). He offsets this by predicting more dramatic changes in the 22nd century than I imagined could be extrapolated from existing trends.
His discussion of the practical importance of reversible computing is clearer than anything else I’ve read on this subject.
When he gets specific, large parts of what he says seem almost right, but there are quite a few details that are misleading enough that I want to quibble with them.
For instance (on page 244, talking about the world circa 2030): “The bulk of the additional energy needed is likely to come from new nanoscale solar, wind, and geothermal technologies.” Yet he says little to justify this, and most of what I know suggests that wind and geothermal have little hope of satisfying more than 1 or 2 percent of new energy demand.
His reference on page 55 to “the devastating effect that illegal file sharing has had on the music-recording industry” seems to say something undesirable about his perspective.
His comments on economists’ thoughts about deflation are confused and irrelevant.
On page 92 he says “Is the problem that we are not running the evolutionary algorithms long enough? … This won’t work, however, because conventional genetic algorithms reach an asymptote in their level of performance, so running them for a longer period of time won’t help.” If “conventional” excludes genetic programming, then maybe his claim is plausible. But genetic programming originator John Koza claims his results keep improving when he uses more computing power.
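To make the asymptote claim concrete, here is a minimal conventional genetic algorithm of my own construction (a toy one-max problem, not anything from Kurzweil or Koza); on a problem of fixed size, its best fitness typically plateaus well before the generation budget is spent:

```python
# Minimal conventional genetic algorithm on a toy "one-max" problem
# (my own illustration): best fitness plateaus long before the run ends.
import random

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 64, 50, 200, 0.01

def fitness(genome):
    return sum(genome)            # one-max: count the 1 bits

def mutate(genome):
    return [bit ^ (random.random() < MUT_RATE) for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    if gen % 25 == 0:
        print(f"gen {gen:3d}: best fitness {fitness(pop[0])}/{GENOME_LEN}")
    parents = pop[: POP_SIZE // 2]   # truncation selection
    pop = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]
print(f"final best: {fitness(max(pop, key=fitness))}/{GENOME_LEN}")
```

Genetic programming evolves variable-size programs rather than fixed-length genomes, which presumably is part of why Koza sees continued gains from more computing power.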
His description of nanotech progress seems naive. (page 228) “Drexler’s dissertation … laid out the foundation and provided the road map still being followed today.” (page 234): “each aspect of Drexler’s conceptual designs has been validated”. I’ve been following this area pretty carefully, and I’m aware of some computer simulations which do a tiny fraction of what is needed, but if any lab research is being done that could be considered to follow Drexler’s road map, it’s a well kept secret. Kurzweil then offsets his lack of documentation for those claims by going overboard about documenting his accurate claim that “no serious flaw in Drexler’s nanoassembler concept has been described”.
Kurzweil argues that self-replicating nanobots will sometimes be desirable. I find this poorly thought out. His reasons for wanting them could be satisfied by nanobots that replicate under the control of a responsible AI.
I’m bothered by his complacent attitude toward the risks of AI. He sometimes hints that he is concerned, but his suggestions for dealing with the risks don’t indicate that he has given much thought to the subject. He has a footnote that mentions Yudkowsky’s Guidelines on Friendly AI. The context could lead readers to think they are comparable to the Foresight Guidelines on Molecular Nanotechnology. Alas, Yudkowsky’s guidelines depend on concepts which are hard enough to understand that few researchers are likely to comprehend them, and the few who have tried disagree about their importance.
Kurzweil’s thoughts on the risks that the simulation we may live in will be turned off are somewhat interesting, but less thoughtful than Robin Hanson’s essay on How To Live In A Simulation.
A couple of nice quotes from the book:
(page 210): “It’s mostly in your genes” is only true if you take the usual passive attitude toward health and aging.
(page 301): Sex has largely been separated from its biological function. … So why don’t we provide the same for … another activity that also provides both social intimacy and sensual pleasure – namely, eating?

Book Review: Nanofuture: What’s Next For Nanotechnology by J. Storrs Hall
This book provides some rather well informed insights into what molecular engineering will be able to do in a few decades. It isn’t as thoughtful as Drexler’s Engines of Creation, but it has many ideas that seem new to this reader who has been reading similar essays for many years, such as a solar energy collector that looks and feels like grass.
The book is somewhat eccentric in its choice of what to emphasize, devoting three pages to the history of the steam engine, but describing the efficiency of nanotech batteries in a footnote that is a bit too cryptic to be convincing.
The chapter on economics is better than I expected, but I’m still not satisfied. The prediction that interest rates will be much higher sounds correct for the period in which we transition to widespread use of general purpose assemblers, since investing capital in producing more machines will be very productive. But once the technology is widespread and mature, the value of additional manufacturing will decline rapidly to the point where it ceases to put upward pressure on interest rates.
The chapter on AI is disappointing, implying that the main risks of AI are to the human ego. For some better clues about the risks of AI, see Yudkowsky’s essay on Creating Friendly AI.