We are working on a new version of TARDIS that massively simplifies the software requirements (no database needed), using Web Stores. We are planning to release this at the beginning of April (but not the 1st!)

See http://tardis.edu.au/wiki/index.php/TARDIS_Web_Stores

In a nutshell:

TARDIS Web Stores takes the original federated approach and makes it far more powerful and flexible, and easier to set up in individual labs and institutions. Instead of the current requirement that data/metadata reside in a Fedora Digital Repository, TARDIS Web Stores indexes files stored on any simple web server (with an optional additional FTP server).

Aside from the greatly simplified storage setup, an added bonus of this approach is that data sets no longer reside in large archives: one can download individual files or entire data sets at once. Metadata will be storable and searchable at any level (experiment/dataset/datafile), so the flexibility of what metadata can be stored for display on the TARDIS site is virtually infinite. Shifting data from server to server, or changing the web address that points to the data, is no problem: all that needs to be done for data to show up in TARDIS is to register a link to an XML manifest residing next to the data itself. A program to scan files for metadata and produce a TARDIS-compatible manifest file for registration will also be distributed (see the sketch below). We believe this added functionality, coupled with the ease of making data known to TARDIS, will greatly increase the amount of data indexed once this next iteration is released.
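For the curious, here is a minimal Python sketch of what such a scanner might do. The <experiment>/<dataset>/<datafile> element names and attributes are purely illustrative assumptions, NOT the actual TARDIS Web Stores manifest schema; see the wiki page above for the real format.

# Illustrative sketch only: walk a data directory and emit an XML manifest
# describing each file by URL, size and checksum.  Element and attribute
# names are assumptions, not the real TARDIS Web Stores schema.
import hashlib
import os
import xml.etree.ElementTree as ET

def md5sum(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def build_manifest(data_dir, base_url, experiment_title):
    exp = ET.Element("experiment", title=experiment_title)
    ds = ET.SubElement(exp, "dataset", name=os.path.basename(data_dir.rstrip("/")))
    for root, _dirs, files in os.walk(data_dir):
        for name in sorted(files):
            path = os.path.join(root, name)
            rel = os.path.relpath(path, data_dir).replace(os.sep, "/")
            ET.SubElement(ds, "datafile",
                          url=base_url.rstrip("/") + "/" + rel,
                          size=str(os.path.getsize(path)),
                          md5=md5sum(path))
    return ET.ElementTree(exp)

# Write the manifest next to the data, then register its URL with TARDIS.
# Paths and URL below are placeholders.
tree = build_manifest("/data/insulin_native",
                      "http://myserver.example.edu/data/insulin_native",
                      "Insulin native data collection")
tree.write("/data/insulin_native/manifest.xml", encoding="utf-8")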

cheers
Ashley

On 12/03/2009, at 5:07 AM, James Holton wrote:

At ALS beamlines 8.3.1 and 12.3.1 we use a combination of DVD-R and LTO-4 tapes for long-term backup, and have the entire data collection history of each beamline backed up on DVD-R disks: about 50 TB for 8.3.1 (built in 2001) and 30 TB for 12.3.1 (built in 2004). We also make a DVD of the user's data automatically and in near-real time, using a ~$4k robot that inkjet-prints the user's name and a dataset summary onto each disk. Portable hard disk drives for "sneakernet" are also popular, but so is transferring the data over the internet, which can likewise be done in near-real time.

I started using LTO-4 tapes recently for two reasons: 1) the price per TB became competitive with DVD-R, and the tape drive is only ~$4k; 2) I used to keep two copies of each DVD, but found this was not really "redundant": if you write two DVDs one after the other on the same day, with the same writer, using media from the same batch, and you can't read one of those disks 4 years later, the chances of not being able to read the other disk are pretty high. So, a lesson I learned is to store data on two very different media types so you get "orthogonal" failure modes.

I can also tell you that it is a good idea to erase your LTO tapes 2-3 times before writing any data to them. I think this is because the primary source of error on these tapes is the roughness of the edge of the tape itself (which is used for alignment), and running it back and forth a few times probably wears/folds down any big bumps. Sounds strange, but I had some tapes I initially thought had "bad spots" on them; after erasing them and re-writing the data, the "bad spots" were gone, and they have remained gone each time I have checked those tapes over the last year. Subsequent tapes that I have erased 3x before use have never had "bad spots". Also, you need to write data to them at a minimum of 80 MB/s, or you can actually have problems reading back the tape. I do my writes in 2 GB chunks from the system RAM. ALWAYS test reading back the tape. Preferably more than once.
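A rough Python sketch of that write-in-big-chunks / read-back-and-verify discipline is below. The /dev/nst0 device name, the 'mt' rewind command, and the 256 KiB tape block size are assumptions about a typical Linux LTO setup, not a description of the actual scripts used at 8.3.1.

# Sketch: stage ~2 GB of data in RAM at a time, stream it to tape in fixed
# blocks, then rewind and re-read the whole thing to confirm the checksum.
import hashlib
import subprocess

TAPE = "/dev/nst0"             # non-rewinding tape device (assumed)
RAM_CHUNK = 2 * 1024 ** 3      # stage ~2 GB in RAM per read of the source
TAPE_BLOCK = 256 * 1024        # size of each write() to the tape device

def copy_to_tape(src_path):
    """Stream src_path to tape; return (sha1, total_bytes) for verification."""
    digest = hashlib.sha1()
    total = 0
    with open(src_path, "rb") as src, open(TAPE, "wb", buffering=0) as tape:
        while True:
            chunk = src.read(RAM_CHUNK)
            if not chunk:
                break
            digest.update(chunk)
            total += len(chunk)
            for i in range(0, len(chunk), TAPE_BLOCK):
                tape.write(chunk[i:i + TAPE_BLOCK])
    return digest.hexdigest(), total

def verify_from_tape(expected_sha1, expected_bytes):
    """ALWAYS read the tape back: rewind, re-read, compare checksum and size."""
    subprocess.check_call(["mt", "-f", TAPE, "rewind"])
    digest = hashlib.sha1()
    total = 0
    with open(TAPE, "rb", buffering=0) as tape:
        while True:
            block = tape.read(TAPE_BLOCK)
            if not block:
                break
            digest.update(block)
            total += len(block)
    return digest.hexdigest() == expected_sha1 and total == expected_bytes

sha1, nbytes = copy_to_tape("/data/collection.tar")    # path is a placeholder
assert verify_from_tape(sha1, nbytes), "read-back mismatch: do not trust this tape"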

DVD-R media should also be verified, preferably in a low-quality DVD drive. This is because writers tend to have much higher-quality mechanisms than the average reader, and I have seen many DVDs that read back just fine in the drive that wrote them, but throw all kinds of media errors when you take them home to a dusty old DVD reader.
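Putting that into practice can be as simple as checksumming the disc in a second (ideally cheap) drive and comparing against the ISO image that was burned. A sketch, with the device name and image filename as placeholders:

# Sketch: verify a burned DVD-R in a *different*, preferably low-quality drive
# by comparing its checksum against the original ISO image.
import hashlib
import os

def sha1_of(path, nbytes, chunk=1 << 20):
    """SHA-1 of the first nbytes of a file or block device."""
    h = hashlib.sha1()
    remaining = nbytes
    with open(path, "rb") as f:
        while remaining > 0:
            block = f.read(min(chunk, remaining))
            if not block:
                break
            h.update(block)
            remaining -= len(block)
    return h.hexdigest()

iso = "backup.iso"          # the image that was burned (placeholder)
disc = "/dev/sr0"           # the cheap second drive, not the burner
size = os.path.getsize(iso)
# Hash only the first `size` bytes of the disc, since DVD-R media are often
# padded out to a whole ECC block beyond the end of the image.
ok = sha1_of(disc, size) == sha1_of(iso, size)
print("OK" if ok else "MISMATCH: re-burn this disc and/or distrust that drive")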



As for getting the PDB to do image backup for us, I don't think that will be easy.

The average data collection rate at 8.3.1 is 2 GB/hour, or ~10 TB/year. So I imagine storing all of the data from the ~100 MX beamlines around the world would be a ~1 PB/year proposition. Since an average of 25 to 50 data sets are collected for every one that is published, archiving only the data behind published structures would cut the storage demand on the PDB to ~30 TB/year. Why only 1 in 25-50, you ask? That is a very good question, and it will probably never be answered unless those unsolved data sets can be made available to methods developers.

I just now Froogled for media prices and got this:

$33/TB      LTO-4
$60/TB      DVD-R
$100/TB     hard disks
$400/TB     Blu-ray
$3000/TB    solid-state drives (such as USB thumb drives)
$3M/TB      clay tablets

So the PDB would only need to find an "extra" ~$1k/year to buy the media for one data set per published structure, or ~$30k/year for all of the data. Unfortunately, the media is not nearly as expensive as access to it. An LTO tape library with ~50 TB storage capacity is ~$20k on eBay, but this is EMPTY! You have to fill it with tapes, and then write software to make the data sets available on the web. Tape libraries in the multiple-petabyte range are available, but their prices are not advertised. Clearly this represents a non-trivial investment in resources and effort for the PDB. The central problem is that the per-GB prices of storage do not scale well to petabyte-class systems. However, there is now Stimulus Package money available in the US for large equipment investments like this. Perhaps someone at Rutgers could submit a proposal? I, for one, am very willing to write them a letter of support.
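For anyone who wants to re-run the arithmetic, the figures above reduce to a few lines of Python. The inputs simply restate the estimates already given (10 TB/year per beamline, ~100 beamlines, roughly 1 published set per ~33 collected, $33/TB for LTO-4); nothing new is added.

# Back-of-the-envelope restatement of the estimates above (no new data).
per_beamline_tb_per_yr = 10     # ~10 TB/year at ALS 8.3.1
n_beamlines = 100               # rough count of MX beamlines worldwide
published_fraction = 1 / 33.0   # ~1 data set published per 25-50 collected
lto4_usd_per_tb = 33            # Froogle price for LTO-4, March 2009

all_data_tb = per_beamline_tb_per_yr * n_beamlines   # ~1000 TB = ~1 PB/year
published_tb = all_data_tb * published_fraction      # ~30 TB/year

print("all data:       ~%.0f TB/yr, media ~$%.0fk/yr"
      % (all_data_tb, all_data_tb * lto4_usd_per_tb / 1000.0))
print("published only: ~%.0f TB/yr, media ~$%.0fk/yr"
      % (published_tb, published_tb * lto4_usd_per_tb / 1000.0))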

Another approach is to try to spread the storage out across the world and create a central registry for finding it. The TARDIS initiative in Australia (Androulakis et al., Acta Cryst. D, 2008) seems to be an important step in that direction, but I haven't been able to test it since I don't have a Fedora Repository Server. I do, however, have a web server, and I think a repository of URLs is probably better than nothing.

-James Holton
MAD Scientist


David Aragao wrote:
Dear All,

I wonder how people currently do their long-term backups. I see DATs/DLTs slowly being phased out at the beamlines, and most people bring their data home on external HDs.

Is anyone using Blu-ray or double-layer DVDs for long-term backups? If so, what kind of hardware? Do you use HDs for long-term storage? If so, do you keep a second copy, and how do you store them?

I will try to compile the answers and post a summary back to the list.

Thanks,
David




Associate Professor Ashley M Buckle
NHMRC Senior Research Fellow
The Department of Biochemistry and Molecular Biology
School of Biomedical Sciences, Faculty of Medicine &
Victorian Bioinformatics Consortium (VBC)
Monash University, Clayton, Vic 3800
Australia

http://www.med.monash.edu.au/biochem/staff/abuckle.html
iChat/AIM: blindcaptaincat
skype: ashley.buckle
Tel: (613) 9902 0269 (office)
Tel: (613) 9905 1653 (lab)

Fax : (613) 9905 4699



