Re: Large Data Set In Mod_Perl

2003-06-13 Thread Perrin Harkins
On Fri, 2003-06-13 at 12:02, Patrick Mulvany wrote:
> However If I ever heard of a case for use of a fixed width ascii file using
> spacing records this is it.

Why make your life difficult?  Just use a dbm file.

- Perrin
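
For readers skimming the archive, here is a minimal sketch of the dbm approach, assuming the data has already been written out to a Berkeley DB file via DB_File and keyed on a combined planet/date string; the file path and key format are invented for illustration:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Fcntl;
    use DB_File;

    # Tie a read-only hash to the dbm file built offline (path is illustrative).
    my %planets;
    tie %planets, 'DB_File', '/data/planets.db', O_RDONLY, 0644, $DB_HASH
        or die "Cannot open dbm file: $!";

    # One cheap lookup per request: key is "planet|date".
    my $record = $planets{'mercury|1900-01-01'};
    print defined $record ? "$record\n" : "no such row\n";

Each Apache child ties its own handle to the same file, so memory use stays small no matter how many rows the file holds.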

Re: Large Data Set In Mod_Perl

2003-06-13 Thread Patrick Mulvany
On Wed, May 28, 2003 at 10:07:39PM -0400, Dale Lancaster wrote:
> For the perl hash, I would key the hash on the combo of planet and date,
> something like:
>
> my %Planets = (
>     jupiter=> {
>         "1900-01-01"=> ( "5h 39m 18s", "+22o 4.0'", 28.922,
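
Filled out a little, Dale's layout might look like the sketch below; the per-date record is written as a hash reference here because the bare parentheses in the quoted snippet would flatten into the enclosing list, and the field names are only illustrative:

    use strict;
    use warnings;

    # Nested hash: planet => { date => { field => value } }.
    my %Planets = (
        jupiter => {
            "1900-01-01" => {
                ra       => "5h 39m 18s",
                dec      => "+22o 4.0'",
                distance => 28.922,
            },
        },
    );

    # Lookup the distance for one planet on one date.
    my $distance = $Planets{jupiter}{"1900-01-01"}{distance};
    print "$distance\n";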

Re: Large Data Set In Mod_Perl

2003-05-30 Thread Ranga Nathan
Perrin Harkins wrote:
> simran wrote:
> > I need to be able to say:
> >
> > * Lookup the _distance_ for the planet _mercury_ on the date _1900-01-01_
>
> On the face of it, a relational database is best for that kind of query.
> However, if you won't get any fancier than that, you can get by with MLDBM
> or something similar.
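
For reference, MLDBM layers a serializer (Storable here) over an ordinary DBM file so each value can be a nested structure rather than a flat string; a minimal sketch, with the file path, key format, and field values invented for illustration:

    use strict;
    use warnings;
    use Fcntl;
    use MLDBM qw(DB_File Storable);   # DBM backend and serializer

    my %planets;
    tie %planets, 'MLDBM', '/data/planets.mldbm', O_RDWR | O_CREAT, 0644
        or die "Cannot tie MLDBM file: $!";

    # Store one whole record per planet/date key (placeholder values).
    $planets{'mercury|1900-01-01'} = {
        ra       => "0h 0m 0s",
        dec      => "+0o 0.0'",
        distance => 0.0,
    };

    # Read it back later as a hash reference.
    my $row = $planets{'mercury|1900-01-01'};
    print "$row->{distance}\n";

One caveat from the MLDBM documentation: nested values cannot be modified in place; read the record out, change it, and assign the whole structure back.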

RE: Large Data Set In Mod_Perl

2003-05-30 Thread Perrin Harkins
On Thu, 2003-05-29 at 13:10, Marc M. Adkins wrote:
> My original comment was regarding threads, not processes.  I run on Windows
> and see only two Apache processes, yet I have a number of Perl interpreters
> running in their own ithreads.  My understanding of Perl ithreads is that
> while the synt

RE: Large Data Set In Mod_Perl

2003-05-30 Thread Marc M. Adkins
> On Thu, 2003-05-29 at 12:59, Marc M. Adkins wrote:
> > That's news to me (not being facetious).  I was under the impression that
> > cloning Perl 5.8 ithreads cloned everything, that there was no sharing of
> > read-only data.
>
> We're not talking about ithreads here, just processes.  The da

FW: Large Data Set In Mod_Perl

2003-05-30 Thread Marc M. Adkins
> On Thu, 2003-05-29 at 11:59, Marc M. Adkins wrote:
> > > > perhaps something such as copying the whole 800,000 rows to
> > > > memory (as a hash?) on apache startup?
> > >
> > > That would be the fastest by far, but it will use a boatload of RAM.
> > > It's pretty easy to try, so test it and see if you can spare the RAM it
> > > requires.

RE: Large Data Set In Mod_Perl

2003-05-30 Thread Perrin Harkins
On Thu, 2003-05-29 at 11:59, Marc M. Adkins wrote:
> > > perhaps something such as copying the whole 800,000 rows to
> > > memory (as a hash?) on apache startup?
> >
> > That would be the fastest by far, but it will use a boatload of RAM.
> > It's pretty easy to try, so test it and see if you can spare the RAM it
> > requires.

RE: Large Data Set In Mod_Perl

2003-05-30 Thread Marc M. Adkins
> > perhaps something such as copying the whole 800,000 rows to
> > memory (as a hash?) on apache startup?
>
> That would be the fastest by far, but it will use a boatload of RAM.
> It's pretty easy to try, so test it and see if you can spare the RAM it
> requires.

Always one of my favorite soluti
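
A sketch of that preload idea for mod_perl: pull the whole data set into a package hash from startup.pl, before Apache forks, so the children share the parent's read-only pages via copy-on-write.  The package name, file path, and record layout are invented for illustration:

    # startup.pl -- loaded with "PerlRequire /path/to/startup.pl" in httpd.conf
    package My::PlanetData;
    use strict;
    use warnings;

    our %Planets;   # filled once in the parent, read by every child

    open my $fh, '<', '/data/planets.txt' or die "Cannot open data file: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($planet, $date, $distance) = split /\t/, $line;
        $Planets{"$planet|$date"} = $distance;
    }
    close $fh;

    1;

A handler would then read $My::PlanetData::Planets{"mercury|1900-01-01"} directly.  How much of the 800,000-entry hash actually stays shared depends on the OS and on Perl's internal bookkeeping, which is exactly why the advice above is to try it and watch the RAM.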

Re: Large Data Set In Mod_Perl

2003-05-29 Thread Ged Haywood
Hi there,

On Wed, 28 May 2003, Perrin Harkins wrote:
> simran wrote:
[snip]
> > * Lookup the _distance_ for the planet _mercury_ on the date _1900-01-01_
[snip]
> you can get by with MLDBM or something similar.

You might also want to investigate using a compiled C Btree library which could be t
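
One readily available library along those lines is Berkeley DB, whose Btree format is reachable from Perl through DB_File; a rough sketch, with the file path and key format invented for illustration (the sorted keys also allow cheap range scans):

    use strict;
    use warnings;
    use Fcntl;
    use DB_File;

    my %planets;
    my $db = tie %planets, 'DB_File', '/data/planets.btree', O_RDONLY, 0644, $DB_BTREE
        or die "Cannot open Btree file: $!";

    # Exact lookup works like any tied hash.
    my $record = $planets{'mercury|1900-01-01'};

    # Walk forward from the first key at or after 'mercury|1900-01-01'.
    my ($key, $value) = ('mercury|1900-01-01', '');
    for (my $status = $db->seq($key, $value, R_CURSOR);
         $status == 0 && $key =~ /^mercury\|/;
         $status = $db->seq($key, $value, R_NEXT)) {
        print "$key => $value\n";
    }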

Re: Large Data Set In Mod_Perl

2003-05-29 Thread Dale Lancaster
g DB_file, it would probably be somewhere between the Perl hash approach and using the standard SQL database interface.

dale

- Original Message -
From: "simran" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, May 28, 2003 9:29 PM
Subject: Large Data Set In

Re: Large Data Set In Mod_Perl

2003-05-29 Thread Perrin Harkins
simran wrote:
> I need to be able to say:
>
> * Lookup the _distance_ for the planet _mercury_ on the date _1900-01-01_

On the face of it, a relational database is best for that kind of query.  However, if you won't get any fancier than that, you can get by with MLDBM or something similar.  Currently

Large Data Set In Mod_Perl

2003-05-29 Thread simran
Hi All,

For one of the websites I have developed (/am developing), I have a dataset that I must refer to for some of the dynamic pages.  The data is planetary data that is pretty much in spreadsheet format, aka, I have just under 800,000 "rows" of data.  I don't do any complex searches or functio