Computer Rough Sizing
in
Science Fiction

(v0.9 – 13 November 2010)

One of the biggest problems in writing science fiction is that computer technology marches ever forward.

Remember how in 1995 we thrilled at Johnny Mnemonic carrying around 80 GB of data in his brain...when now, in 2010, we'd just say, “Look, instead of cutting your brain up, why not just carry three 32-GB SD flash cards in your underwear? It's safer for you, and you can conceal the data elsewhere, which is kind of impossible with a neural storage device in your head.”

This is why doing a little homework now can keep your story's technical details from breaking suspension of disbelief several decades down the line.

Other BBOW Pages of interest for this topic:

Computing Power throughout History
Video Bitrates
Digital Elevation Map Rough Sizing

Baseline Operating Assumptions

Star Maps

In order for your starship to navigate amongst the stars, you're going to need a star map. The sizes of some of the more popular comprehensive digital star catalogs available online from the U.S. Naval Observatory are:

USNO-B1.0 (1,042,618,261 Objects – 80 Bytes Per Record – 83.4 GB)
USNO-A2.0 (526,230,881 Objects – 12 Bytes Per Record – 6.314 GB)
USNO-SA2.0 (54,787,624 Objects – 12 Bytes Per Record – 657.45 MB)

The big size discrepancy between the USNO-A and USNO-B series catalogs comes from the B series storing significantly more information per star, allowing a more accurate star catalog.

So let's assume that Humanity has visited 35 star systems by the year 2500. At one USNO-B1.0-class catalog per system, you'd have a star catalog approximately 2.9 TB in size (35 × 83.4 GB).

Remember that even if you're using a jump drive that lets you skip every system in between the 35 visited ones, the star field will still look vastly different from each system, requiring a comprehensive survey of each system's star field to enable precise navigation.

Also, different types of craft will need different amounts of star field detail. For example, a short-range Emergency Escape Vehicle (EEV) capable of only sublight travel or very short-range FTL won't need a full billion-object catalog per system. It could probably get around with only 54 million objects per system (assuming the B-series' richer 80 bytes per record), for a total star-chart size of only 153.4 GB.
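If you want to fiddle with these assumptions yourself, here's a quick Python sketch of the arithmetic above. The record sizes and object counts are the USNO figures quoted earlier; the 35 systems and the 54-million-object EEV chart are just our working assumptions.

GB = 1e9  # decimal gigabytes, matching the catalog sizes quoted above

def chart_size_gb(objects_per_system, bytes_per_record, systems):
    """Total star chart size in GB for a fleet that has surveyed `systems` systems."""
    return objects_per_system * bytes_per_record * systems / GB

# Full navigational chart: one USNO-B1.0-class catalog per visited system.
full = chart_size_gb(1_042_618_261, 80, 35)   # ~2,919 GB, i.e. ~2.9 TB

# EEV chart: an SA2.0-sized object count at the B-series' 80-byte records.
eev = chart_size_gb(54_787_624, 80, 35)       # ~153 GB

print(f"Full chart: {full:,.0f} GB; EEV chart: {eev:,.0f} GB")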

Project Apollo got by with just 37 stars and three other objects for celestial navigation, using an optical telescope to realign the IMU if it lost its alignment.

They were:

Alpheratz (Alpha Andromedae)
Acamar (Theta Eridani)
Achernar (Alpha Eridani)
Acrux (Alpha Crucis)
Aldebaran (Alpha Tauri)
Alkaid (Eta Ursae Majoris)
Alphard (Alpha Hydrae)
Alphecca (Alpha Coronae Borealis)
Altair (Alpha Aquilae)
Antares (Alpha Scorpii)
Arcturus (Alpha Boötis)
Atria (Alpha Trianguli Australis)
Canopus (Alpha Carinae)
Capella (Alpha Aurigae)
Dabih (Beta Capricorni)
Deneb (Alpha Cygni)
Denebola (Beta Leonis)
Diphda (Beta Ceti)
Dnoces (Iota Ursae Majoris)
Enif (Epsilon Pegasi)
Fomalhaut (Alpha Piscis Austrini)
Gienah (Gamma Corvi)
Menkar (Alpha Ceti)
Menkent (Theta Centauri)
Mirfak (Alpha Persei)
Navi (Gamma Cassiopeiae)
Nunki (Sigma Sagittarii)
Peacock (Alpha Pavonis)
Polaris (Alpha Ursae Minoris)
Procyon (Alpha Canis Minoris)
Rasalhague (Alpha Ophiuchi)
Regor (Gamma Velorum)
Regulus (Alpha Leonis)
Rigel (Beta Orionis)
Sirius (Alpha Canis Majoris)
Spica (Alpha Virginis)
Vega (Alpha Lyrae)

The three other objects used for IMU alignment were:

The Sun
The Earth
The Moon

Mission Planning Assumptions

Raw versus Finished Data

Currently, the LHC produces about a petabyte per second of raw data when it is running. This is filtered down to about a gigabyte per second for transmittal to the storage archives; a typical year's use of the LHC generates 15–20 PB of filtered data.

In 2000, the Shuttle Radar Topography Mission generated 12 TB of raw data as it mapped 80% of the world's land mass at 30 meter resolution in 16-bit word format. This was then processed down to about 7.22 TB of finished data by my rough calculations.

Obviously, your typical SCIENCE! payload will fall somewhere between these two extremes. A good rule of thumb might be to multiply the finished scientific product's file size by about three; for video, use the uncompressed raw video's size.

Of course, you may have to bring back your raw scientific data for verification purposes, and so that others can take a look at it for their own experiments.
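In code, that rule of thumb looks like this – a minimal sketch, with the factor of three being my assumption rather than a measured constant:

RAW_TO_FINISHED = 3.0  # assumed ratio of raw to finished data

def raw_data_estimate(finished_size):
    """Estimate the raw data behind a finished product, in the same units."""
    return RAW_TO_FINISHED * finished_size

# Example: a 1.8 PB finished planetary map (see the next section) implies
# roughly 5.4 PB of raw data to crunch and possibly haul home.
print(raw_data_estimate(1.8), "PB")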

Planetary Mapping

So let's say you want to map ten Earth-sized planets down to 1 m² resolution with 32-bit words during each voyage. Each finished map at 1 m² resolution would be about 1.8 PB (Earth's surface area is roughly 510 million km², i.e. 5.1 × 10^14 one-meter samples at 4 bytes each).

If we go with our assumption of raw data being three times the size of the finished product, you would need a computer network with about 5.4 PB of high-speed memory/storage capacity to crunch the raw data into a finished product.

Compression can reduce file sizes significantly for data that you don't need to access in real time – a 25.9 MB DEM tile compresses to about 7.3 MB, a compression ratio of about 3.5:1!

So the final data “footprint” of a planetary 1 m² survey – raw plus finished data, 7.2 PB, compressed at 3.5:1 – would be about 2.05 PB. At ten planets, that's 20.5 PB of storage you need. You'll also need to figure in the fact that researchers on your ship will want access to uncompressed data to speed up their models, so double the storage space you need.

Finally, you will need independent storage systems to act as backups so that you don't suffer a loss of vital data if something bad happens. Two backups are a good middle ground between expense and protection of the data.

So you'd have three independent computer systems, each with a capacity of 41 PB, for the planetary mapping division alone.
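Here's the whole planetary mapping budget in one place, so you can swap in your own planet counts and compression ratios. Everything in it (the 3x raw multiplier, 3.5:1 compression, one uncompressed working copy, two backups) is an assumption from the discussion above, not a law of nature.

PB = 2**50                      # binary petabytes, matching the 1.8 PB figure

EARTH_SURFACE_M2 = 5.1e14       # ~510 million km^2 of surface to map
BYTES_PER_SAMPLE = 4            # 32-bit words at 1 m^2 resolution
PLANETS = 10
RAW_MULTIPLIER = 3.0            # assumed raw:finished ratio
COMPRESSION = 3.5               # assumed compression for archived data
BACKUP_SYSTEMS = 2              # independent backup systems

finished = EARTH_SURFACE_M2 * BYTES_PER_SAMPLE / PB    # ~1.8 PB per planet
raw = RAW_MULTIPLIER * finished                        # ~5.4 PB to crunch
footprint = (finished + raw) / COMPRESSION             # ~2.05 PB archived per planet
survey = footprint * PLANETS                           # ~20.5 PB for ten planets
per_system = survey * 2                                # doubled for uncompressed working data

print(f"{1 + BACKUP_SYSTEMS} systems of {per_system:.0f} PB each")  # -> 3 systems of ~41 PB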

Video Filming

For video, a likely future format is Ultra High Definition TV (7680 × 4320), which at 30 FPS in true color produces a raw bandwidth of about 24 Gbit/s – roughly 3 GB/sec. If you stayed in orbit around a planet for 15 days to complete a planetary mapping survey, you'd generate 1.3 million seconds of footage from each camera, or about 3.9 PB.

If we assumed we had to film ten planets, with five cameras pointing at the planet and the customary two backups, then roughly 580 PB of storage would be needed by the Video Division.
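And the video math, with the data rate built up from first principles (pixels × color depth × frame rate). The resolution and frame rate come from the UHDTV format; the 15-day orbit, five cameras, ten planets, and two backups are the assumptions above.

WIDTH, HEIGHT = 7680, 4320
FPS = 30
BYTES_PER_PIXEL = 3                     # 24-bit true color
SECONDS = 15 * 24 * 3600                # 15-day mapping orbit, ~1.3 million s
CAMERAS = 5
PLANETS = 10
COPIES = 3                              # primary plus the customary two backups

rate = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS     # ~3 GB/s (about 24 Gbit/s)
per_camera = rate * SECONDS                       # ~3.9 PB per planet per camera
total_pb = per_camera * CAMERAS * PLANETS * COPIES / 1e15

print(f"Raw rate: {rate/1e9:.2f} GB/s; video archive: {total_pb:,.0f} PB")
# -> about 2.99 GB/s and roughly 580 PB for the Video Division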