> Hi list,

Howdy

> 
> I have a bilingual dictionary in a file with the following format:
> 
> word1:sense1 sense2 ... senseN
> word2:sense1 sense2 ... senseN
> ...
> 
> I'm writing a simple script that looks up a word and returns 
> all the possible translations. What's the fastest (or most 
> efficient) way to do that as the dictionary gets bigger? A 
> while loop reading line by line and checking each word 
> against the current one, or loading the dictionary into a 
> hash of arrays before the search?
> 

Probably about the same for a single lookup; you'd have to run both and time them with the Benchmark module to know for sure.
Your best bet is to avoid flat-file databases if your data will get very big.
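Here's a rough sketch of both strategies side by side, timed with Benchmark's cmpthese. The filename, word list, and lookup word are all made up for illustration; the point is that the line-by-line scan pays the file-read cost on every lookup, while the hash of arrays pays it once up front.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Build a small sample dictionary file so the sketch runs as-is;
# the filename and contents are invented for illustration.
my $dict_file = 'dict_sample.txt';
open my $out, '>', $dict_file or die "Can't write $dict_file: $!";
printf $out "word%d:sense1 sense2 sense3\n", $_ for 1 .. 1000;
close $out;

my $target = 'word900';

# Strategy 1: scan the file line by line on every lookup.
sub scan_lookup {
    my ($word) = @_;
    open my $fh, '<', $dict_file or die "Can't open $dict_file: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($w, $senses) = split /:/, $line, 2;
        return [ split ' ', $senses ] if $w eq $word;
    }
    return;
}

# Strategy 2: load everything into a hash of arrays once;
# each word maps to a reference to its list of senses, and
# every later lookup is a constant-time hash access.
my %dict;
open my $fh, '<', $dict_file or die "Can't open $dict_file: $!";
while (my $line = <$fh>) {
    chomp $line;
    my ($w, $senses) = split /:/, $line, 2;
    $dict{$w} = [ split ' ', $senses ];
}
close $fh;

my $from_scan = scan_lookup($target);
my $from_hash = $dict{$target};
print "scan: @$from_scan\nhash: @$from_hash\n";

# Run each strategy a fixed number of times and compare rates.
cmpthese(500, {
    scan => sub { scan_lookup($target) },
    hash => sub { my $s = $dict{$target} },
});

unlink $dict_file;
```

The hash wins by a huge margin on repeated lookups, but if the script only ever does one lookup per run, loading the whole file first buys you nothing over a scan that stops at the first match.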

I'd recommend using MySQL (or whatever) and the DBI module to interact with it.
Not only will it make managing the data easier, but your program will also be 
easier to write and maintain, since you won't have to worry about file size or format.
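Something along these lines, for instance. This sketch uses DBD::SQLite with an in-memory database so it runs without a server; for MySQL you'd swap the DSN for something like "DBI:mysql:database=dict" plus a user and password. The table name, column names, and sample data are all invented.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# In-memory SQLite keeps the sketch self-contained; for MySQL,
# change the DSN and pass your username and password.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, AutoCommit => 1 });

# One row per (word, sense) pair keeps lookups and updates simple.
$dbh->do('CREATE TABLE senses (word TEXT, sense TEXT)');

my $ins = $dbh->prepare('INSERT INTO senses (word, sense) VALUES (?, ?)');
$ins->execute('word1', $_) for qw(sense1 sense2 sense3);

# Fetch every translation for a word as one array reference.
my $senses = $dbh->selectcol_arrayref(
    'SELECT sense FROM senses WHERE word = ?', undef, 'word1');
print "word1: @$senses\n";

$dbh->disconnect;
```

With an index on the word column, lookups stay fast no matter how big the dictionary grows, and the database handles the file format for you.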

HTH

DMuey

> Thanks in advance.
> 
> --
> Víctor.
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 
> 
