Really big data centers (June 2015)

There are a couple of challenges in building really big data centers, where by big I mean many solar masses.

This particular design solves all of those at close to the theoretical limits, and solves a handful of other sticky issues as well.

Basics

First item: energy supply. If you use entirely reversible computing, you don't need energy; everything just keeps bouncing along. But you can't have inputs and outputs, so what's the point?

Second item: heat dissipation. And a carryover, how do you get reversible computing to do useful work? Well, irreversible computing can be done as reversible computing plus bit clearing. All the energy requirements and heat dissipation are associated with bit clearing, and ultimately you only have to clear bits in order to make room for new outputs. That takes care of the first item. Landauer's principle says you can clear about 1e23 bits per joule (one watt for one second), divided by the temperature (in Kelvin) you do it at.
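
A quick sketch of that arithmetic, assuming the standard Landauer bound of k*T*ln(2) joules per cleared bit (the constant and formula are standard physics, not from the text):

  # Landauer limit: clearing one bit costs at least k*T*ln(2) joules,
  # so the number of bits you can clear per joule is 1/(k*T*ln(2)).
  import math

  k_B = 1.380649e-23  # Boltzmann constant, J/K

  def bits_cleared_per_joule(temperature_kelvin):
      """Maximum bits irreversibly cleared per joule at a given temperature."""
      return 1.0 / (k_B * temperature_kelvin * math.log(2))

  print(bits_cleared_per_joule(1.0))    # ~1.0e23 bits per joule at 1 Kelvin
  print(bits_cleared_per_joule(300.0))  # ~3.5e20 bits per joule at 300 Kelvin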

When you accelerate electrons or atoms, say by having them bounce off each other, they always radiate. They can absorb that radiation too, not necessarily during collisions. That's what blackbody radiation is. So atomic reversible computing won't be entropy free. The amount of radiation varies with the fourth power of the temperature; the constant is 5.67e-8 W/(m^2*K^4). This is expressed in terms of surface area instead of per collision ... I don't know how to convert. Wikipedia says the human body radiates 100W at 300K, so at 1/1000 that temperature (0.3K) a human body would radiate 1e-10W. If the temperature is reduced enough then blackbody radiation is not significant and reversible computing again looks plausible.
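
A quick sketch of that fourth-power scaling (the 100W human-body figure is from above; the law itself is the standard Stefan-Boltzmann relation):

  # Stefan-Boltzmann: radiated power per unit area is sigma * T^4, so scaling
  # the temperature by a factor f scales the radiated power by f^4.
  def scaled_power(power_watts, old_temp_k, new_temp_k):
      """Radiated power after a temperature change, holding area and emissivity fixed."""
      return power_watts * (new_temp_k / old_temp_k) ** 4

  print(scaled_power(100.0, 300.0, 0.3))  # ~1e-10 W: a human body cooled to 0.3K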

(Side note. If blackbody radiation is energy emitted during collisions, and absorbed at random times, that isn't time-reversible. I don't know if that means radiation is only absorbed during collisions, or radiation is also emitted when there are no collisions, or some other option I'm not imagining, but a system at maximum entropy has to be time reversible or it isn't really at maximum entropy. There's something here that I don't understand.)

You don't have to clear the bits in the same place as the computation. Suppose you have 1-bit variables x and y. You can replace (x,y) with (x, x XOR y) reversibly, because the operation is its own reverse: (x, x XOR x XOR y) == (x, y). So you can do this sequence of reversible transforms:

  1. (x, y)
  2. (x, x XOR y)
  3. (x XOR y XOR x, x XOR y) == (y, x XOR y)
  4. (y, y XOR x XOR y) == (y, x)
Voila, you exchanged x and y reversibly. If x was 0 before and y wasn't, now y is 0 and x isn't. You only have to be able to clear the first bit irreversibly; you never have to irreversibly clear the second bit.
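
The same exchange written as code (the familiar XOR swap; each assignment is one of the reversible steps above):

  def reversible_swap(x, y):
      """Exchange two values using only XOR steps, each of which is its own inverse."""
      y = x ^ y   # step 2: (x, x XOR y)
      x = x ^ y   # step 3: (y, x XOR y)
      y = x ^ y   # step 4: (y, x)
      return x, y

  assert reversible_swap(0, 1) == (1, 0)   # the 0 moved from x to y without clearing y
  assert reversible_swap(0b1010, 0b0110) == (0b0110, 0b1010)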

If you fill a storage unit (looks something like a rock) with a bunch of filled bits, you can physically throw it a long distance, clear it to all zeros (using energy and releasing heat), then throw it back. You could split the rock into lots of little grains of sand and put them really close to the computation units, which consume the zeros and replace them with garbage. Then you put the grains of sand back together into a rock and throw it again. Computation in one place, energy and heat dissipation in another.

Third item: gravitational collapse. See my dense matter page. You make a torus, rotating and also curling in on itself, so that each spot on the torus is in an elliptical orbit. There would be no central object (it is orbiting itself), so the ellipses would precess, meaning points go around the ring faster than they complete a full orbit. You can pack a pretty arbitrary amount of matter together, about 1/5th as densely packed as a solid, yet keep it all in microgravity. This scales to hundreds of solar masses before you have to start worrying a lot about relativity and black holes. A hundred solar masses would form a ring about 15 times the radius of the sun.

Sketch of a data center

Make a computing torus of several solar masses at near zero degrees Kelvin. Helium is a liquid below 1 degree Kelvin and hydrogen is a solid. Other elements can be used as spices; probably they can be used to store data or even do computation. Use that as the computation unit. You wouldn't need to worry about the hydrogen and helium escaping as a gas, it's too cold; likewise you don't have to worry about containing an atmosphere because there won't be any. Error correction isn't much of a problem because of the extremely low entropy of the system. Can its computations be as fast as the speed of light throughout all the mass? Or only the speed of sound? They'd all be reversible, so it would stay cold. But it would consume zeros and produce random bits. There would be a thin insulating/refrigerating layer around it, to keep it colder than the universe at large.

Surround the computational core with a large torus with a thin outer layer, like a mylar balloon, perhaps as big as the orbit of Pluto. When mass in the outer layer is on the portion of the orbit on the inside of the torus, pack it into large balls instead of a thin sheet, so it's easy to get past them to the outer thin sheet. Big empty space between the core and this outer torus. Toss rocks back and forth between the computation core and the outer torus. The outer torus is in charge of producing energy, radiating heat, and replacing the random bits with more zeros. The size and temperature of the outer torus determine how quickly zeros are produced and how much heat gets radiated.

Stationkeeping of the outer torus can be adjusted by choosing where to throw the rocks full of bits. It doesn't matter whether the outer torus has a relatively thin tube or fat tube, because the inner core isn't radiating anything (other than rocks). Fatter might be better, to shield the inner core from the blinding heat of the universe's 3 degree Kelvin background radiation.

How do you toss rocks without using much energy? Toss them only a little, then use gravity assist after that. Gravity assist on what? The balls of the outer shell on the inside of their torus. Probably have additional toruses in between for more opportunities for gravitational assist. You need equal assists accelerating out and decelerating in, so this doesn't need to affect the outer toruses any. Perhaps use skyhooks or a pinwheel for the initial toss.

How do you aim your tosses so well that you get gravitational assists all the way out by Pluto? You don't, at least not at first. After the initial toss, you split the rock so that the two or three pieces all get gravitational assists. You'll need splits of splits of splits by the time you get out by Pluto.

How rapidly would the core need to consume zeros? Depends how clever it is. It might be very clever indeed.

Data storage

All data centers work about the same way. Every day they produce new data and discard old data. New data is hotter (used more) than old data, but most hot data is actually old just because there's so much more old data. A large sort consists of sending all data everywhere, then half the data to the nearest half, then a quarter of the data to the nearest quarter, on down to 2^-n of the data to the nearest 2^-n of the storage units, for a total of n passes. Communication between nearer units is cheaper (linear with distance). Nearer units also tend to fail together.
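
Here's a toy sketch of that n-pass sort over in-memory lists (my illustration of the pattern, not a real distributed implementation; "nearness" is just index distance):

  def recursive_partition_sort(units):
      """units: one list of keys per storage unit; nearby units have nearby indexes."""
      everything = [k for u in units for k in u]
      if len(units) <= 1 or not everything:
          for u in units:
              u.sort()                              # last pass: sort locally within a unit
          return
      pivot = sorted(everything)[len(everything) // 2]  # median key of this neighborhood
      low = [k for k in everything if k < pivot]
      high = [k for k in everything if k >= pivot]
      half = len(units) // 2
      redistribute(units[:half], low)               # smaller keys to the nearest half of the units
      redistribute(units[half:], high)              # larger keys to the other half
      recursive_partition_sort(units[:half])        # each later pass moves data shorter distances
      recursive_partition_sort(units[half:])

  def redistribute(units, keys):
      """Spread keys evenly across one neighborhood of units."""
      for u in units:
          u.clear()
      for i, k in enumerate(keys):
          units[i % len(units)].append(k)

  units = [[5, 1], [7, 3], [2, 8], [6, 4]]
  recursive_partition_sort(units)
  print(units)   # [[1, 2], [3, 4], [5, 6], [7, 8]]: globally sorted across adjacent units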

Old data is erasure encoded across units that are unlikely to fail together. If there's only one torus, this means dividing the torus into say 16 sections where each is about equally wide and tall and deep, and storing pieces of each erasure encoding group in different sections. New data is necessarily localized (speed of light), so there is a period of time where it is less reliable than old data because it hasn't been adequately spread out yet. Most new data is almost immediately erased, so you can save work by not erasure encoding it in the first place.
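
A sketch of the placement rule (the 16 sections are from the text; the group size and the particular spreading rule are just illustrative):

  # Place each fragment of an erasure-coding group in a different section of the
  # torus, so no single localized failure destroys more than one fragment.
  NUM_SECTIONS = 16

  def place_fragments(group_id, fragments_per_group):
      """Return a distinct section index for each fragment of one group."""
      assert fragments_per_group <= NUM_SECTIONS
      start = (group_id * 7) % NUM_SECTIONS   # stagger different groups around the ring
      return [(start + i) % NUM_SECTIONS for i in range(fragments_per_group)]

  print(place_fragments(group_id=0, fragments_per_group=12))  # 12 fragments, 12 different sections
  print(place_fragments(group_id=1, fragments_per_group=12))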

Most data is very cold (not accessed at all each day); this trend grows as the data center grows. At this size it's likely only 1/10000 of the data is accessed each day. The hot old data can be duplicated around the torus, so it can be accessed faster, like Akamai-d webpages. There could be additional groups of very local chunks that are XORed, so local losses could be recovered quickly without waiting for distant chunks for the global erasure encoding.
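
A minimal sketch of one of those local XOR groups (illustrative only): lose a nearby chunk and rebuild it from the surviving local chunks plus the local parity, without waiting on the distant global group:

  from functools import reduce

  def xor_chunks(chunks):
      """Bytewise XOR of equal-length byte strings."""
      return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*chunks))

  local_chunks = [b"\x01\x02", b"\x10\x20", b"\xaa\x55"]
  parity = xor_chunks(local_chunks)             # stored near the data chunks

  # Chunk 1 is lost; XOR the survivors with the parity to get it back locally.
  recovered = xor_chunks([local_chunks[0], local_chunks[2], parity])
  assert recovered == local_chunks[1]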

The basic big data operations are: scan everything, filter, collect the bits you care about in one small place then churn on those awhile, write the results as new global data. Look things up in an index. Sort. Join by key. Data ingress and egress. That's about it. Most new data is old data, mixed with a little new ingressed data. The basic data structures are unstructured tables, B-trees, and distributed hash tables. A few small metadata tables are widely duplicated, and maintained by a central service. Log-structured file systems are a common pattern (they are built on sort and key lookup).
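
A toy illustration of the scan / filter / collect / join pattern (in-memory lists standing in for distributed tables; nothing here is specific to this design):

  def scan_filter(table, predicate):
      """Scan everything, keep only the rows you care about."""
      return [row for row in table if predicate(row)]

  def join_by_key(left, right, key):
      """Join two small filtered tables on a shared key via an index lookup."""
      index = {}
      for row in right:
          index.setdefault(row[key], []).append(row)
      return [(l, r) for l in left for r in index.get(l[key], [])]

  events = [{"user": 1, "kind": "click"}, {"user": 2, "kind": "view"}]
  users = [{"user": 1, "name": "a"}, {"user": 2, "name": "b"}]
  clicks = scan_filter(events, lambda row: row["kind"] == "click")
  print(join_by_key(clicks, users, "user"))   # churn on the small collected result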

Performance vs a Dyson Swarm

Restrict the designs to our solar system: one solar mass, 2.1% of the total mass is not hydrogen or helium, and only 0.2% of the total mass is outside our sun.

A Dyson swarm at the distance of Mars can capture all the sun's energy (3.8e26 watts). It would have 2.1% of the mass of the solar system (assume you mined all the heavy elements out of the sun, and put most of the hydrogen and helium in the gas planets back into the sun). At one bit per atom, it can store 2.5e55 bits. The shell would stay at about 300 degrees Kelvin, so it can clear 1.3e47 bits per second. Maximum network latency (at the speed of light) is 1600 seconds, and many orders of magnitude slower by bulk moving of rocks (which has higher throughput). The sun can power the Dyson swarm for another 9 billion years. 1/9 of the data would be overwritten each year.
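
A back-of-envelope check of those figures, assuming roughly one atomic mass unit per stored atom, the Landauer bound at 300 Kelvin, and a Mars orbit radius of about 2.28e11 meters (the radius is my added assumption):

  import math

  k_B = 1.380649e-23          # J/K
  AMU = 1.66054e-27           # kg
  SOLAR_MASS = 1.989e30       # kg
  SOLAR_LUMINOSITY = 3.8e26   # W
  MARS_ORBIT_RADIUS = 2.28e11 # m

  heavy_mass = 0.021 * SOLAR_MASS                    # the 2.1% that isn't hydrogen or helium
  bits_stored = heavy_mass / AMU                     # ~2.5e55 bits at one bit per atom
  bits_cleared_per_s = SOLAR_LUMINOSITY / (k_B * 300 * math.log(2))  # ~1.3e47 at 300 Kelvin
  latency_s = 2 * MARS_ORBIT_RADIUS / 3.0e8          # ~1500 s straight across the swarm

  print(f"{bits_stored:.1e} bits, {bits_cleared_per_s:.1e} bits/s, {latency_s:.0f} s latency")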

A central core of the sun's mass and 1/5th the sun's density would look like an inner tube 2.4 million miles in diameter, with a ring cross section of 0.6 million miles. It would be kept near 0.5 Kelvin. It would have 1.2e57 atoms (mostly hydrogen and helium) and store 1.2e57 bits. An outer shell at the radius of Pluto, at 8 degrees Kelvin, would use fusion to power clearing bits. It would run at 1e23 watts and clear 1.25e45 bits per second. Throwing rocks back and forth (even with gravitational assists) takes 200 years; this would require about 0.6% of the mass dedicated to rocks being thrown back and forth. Maximum network latency at lightspeed for a compact 2.5e55 subset of the bits is 1.2 seconds. Maximum latency for the whole 1.2e57 bits is 13 seconds. A fraction of 1/30000 of the bits would be overwritten each year. It could be powered for 34 trillion years.
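
The corresponding check for the dense core (same one-bit-per-atom assumption, Landauer bound at the outer shell's 8 degrees Kelvin):

  import math

  k_B = 1.380649e-23     # J/K
  AMU = 1.66054e-27      # kg
  SOLAR_MASS = 1.989e30  # kg
  SECONDS_PER_YEAR = 3.15e7

  bits_stored = SOLAR_MASS / AMU                      # ~1.2e57 bits at one bit per atom
  shell_power = 1e23                                  # W, from the text
  bits_cleared_per_s = shell_power / (k_B * 8 * math.log(2))   # ~1.3e45
  overwritten_per_year = bits_cleared_per_s * SECONDS_PER_YEAR / bits_stored

  print(f"{bits_stored:.1e} bits, {bits_cleared_per_s:.1e} bits/s cleared")
  print(f"about 1/{1/overwritten_per_year:.0f} of the bits overwritten per year")  # ~1/29000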

The Margolus-Levitin theorem limits how fast computation can proceed. If I'm reading it right, it is 6e33 operations per second per kilogram per degree Kelvin, even if the computations are reversible. A dense core of the whole solar system (2e30 kg) at 0.5 Kelvin could do 6e63 operations per second, so the dense core could do 4.8e18 operations per bit written. Computation is not expected to be the bottleneck.
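
Plugging in the numbers under that reading (the 6e33 figure and the 1.25e45 bit-clearing rate are both from the text):

  ops_per_s_per_kg_per_kelvin = 6e33   # the text's reading of the Margolus-Levitin bound
  core_mass_kg = 2e30
  core_temp_kelvin = 0.5
  ops_per_second = ops_per_s_per_kg_per_kelvin * core_mass_kg * core_temp_kelvin  # 6e63

  bits_cleared_per_second = 1.25e45    # the outer shell's clearing rate from above
  print(ops_per_second / bits_cleared_per_second)   # ~4.8e18 operations per bit written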

Comparing the dense core to the Dyson swarm:

Roughly, the dense core can run jobs faster than a Dyson swarm and on bigger datasets, but the final results from each job must be smaller. And the dense core can also keep doing it longer. The dense core can run 2.6e8x more equivalent jobs overall, but each job has to produce 6.6e6x smaller results.

Summary

You can pack computation in huge masses, close to as dense as a planet but still in microgravity. You can run it at arbitrarily low temperatures, and radiating heat isn't a problem. Probably could make it out of mostly hydrogen and helium. The data structures and algorithms stay about the same no matter how big you make it. Converting this scheme to quantum computation, or neutronium, or other exotic techniques and materials probably doesn't change the design much at all.

