On 22 Sep 2010, at 20:22, Sands Alden Fish wrote:

> (2) We currently don't have a centralized server with enough test data
> to run many of these memory or scalability tests on our own. I think
> this is something we could look into improving upon (especially if
> anyone has test data to donate to the cause).
There is a lot of public domain data available online. I spent some time collecting some of this in a variety of formats (text, images, movies, sound, datasets) and then wrote something that uses a word list (e.g. /usr/share/dict on most Linux systems) to create random metadata for them. After all, it doesn't matter that many bitstreams will be identical.

That is how we populated our test environment here, so we could replicate the problems we were seeing on the live system.

Best regards,

-- 
Tom De Mulder <[email protected]> - Cambridge University Computing Service
+44 1223 3 31843 - New Museums Site, Pembroke Street, Cambridge CB2 3QH

_______________________________________________
DSpace-tech mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/dspace-tech
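[For anyone wanting to reproduce the approach above: the word-list trick can be sketched roughly as follows. This is a minimal sketch, not Tom's actual script; the Dublin Core-style field names and record shape are assumptions for illustration.]

```python
import random

def load_words(path="/usr/share/dict/words"):
    # Word list shipped on most Linux systems; the exact path varies by distro.
    with open(path) as f:
        return [w.strip() for w in f if w.strip().isalpha()]

def random_metadata(words, rng=None):
    # Build one random metadata record from a word list.
    # Field names here are illustrative Dublin Core-style keys, an assumption.
    rng = rng or random.Random()
    title = " ".join(rng.choice(words) for _ in range(4)).title()
    author = f"{rng.choice(words).title()}, {rng.choice(words).title()}"
    abstract = " ".join(rng.choice(words) for _ in range(30))
    return {
        "dc.title": title,
        "dc.contributor.author": author,
        "dc.description.abstract": abstract,
    }

if __name__ == "__main__":
    words = load_words()
    # Generate a handful of fake records to attach to the collected bitstreams.
    for _ in range(5):
        print(random_metadata(words))
```

Each generated record can then be attached to one of the collected public-domain files during ingest; since the point is only to exercise metadata indexing and scale, duplicate bitstreams are harmless.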

