Re: [sqlite] Index creation on huge table will never finish.

2007-03-22 Thread Chris Jones
…but similarly to my last response, it probably makes sense for me to add it to my schema as a sanity check if nothing else. Thanks, Chris

Derrell.Lipman wrote:
> Chris Jones <[EMAIL PROTECTED]> writes:
>> Derrell.Lipman wrote:
>>> So to guarantee that the *strings* …

Re: [sqlite] Index creation on huge table will never finish.

2007-03-22 Thread Chris Jones
Thanks everyone for your feedback. I ended up doing a presort on the data, and then adding the data in order. At first I was a little concerned about how I was going to implement an external sort on a data set that huge, and realized that the unix "sort" command can handle large files, and in …
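The preview cuts off here, but the approach described is clear: sort the strings externally first, then insert them in sorted order so the index's B-tree is filled append-only instead of by random insertion. A minimal sketch of that load pattern, using Python's stdlib sqlite3 (the table and column names follow the schema in the original post; the index name and the toy data are assumptions):

```python
import sqlite3

# Sketch of "presort, then insert in order": with sorted input, each new
# key lands at the right edge of the index B-tree, avoiding the random
# page access that made the original bulk index build crawl.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rawfen (fen VARCHAR(80))")
conn.execute("CREATE UNIQUE INDEX idx_fen ON rawfen (fen)")  # name assumed

strings = ["e4 e5", "d4 d5", "c4 c6", "d4 d5"]  # toy stand-in for the 112M rows
for s in sorted(set(strings)):  # presort (and deduplicate) before loading
    conn.execute("INSERT INTO rawfen (fen) VALUES (?)", (s,))
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM rawfen").fetchone()[0])  # 3
```

In practice the presort would be done outside the database (e.g. with the unix sort command, as the post says), and the inserts wrapped in large transactions.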

Re: [sqlite] Index creation on huge table will never finish.

2007-03-21 Thread Chris Jones
I don't think that solves my problem. Sure, it guarantees that the IDs are unique, but not the strings. My whole goal is to be able to create a unique identifier for each string, in such a way that I don't have the same string listed twice with different identifiers. In your solution, there …
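The requirement stated above, one stable identifier per distinct string, is the classic string-interning pattern. A hedged sketch of how SQLite can enforce it (the table name, column names, and helper function are illustrative, not from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# INTEGER PRIMARY KEY gives an auto-assigned rowid; UNIQUE on the string
# column guarantees the same string can never get two identifiers.
conn.execute("CREATE TABLE strings (id INTEGER PRIMARY KEY, s TEXT UNIQUE)")

def get_id(conn, s):
    # INSERT OR IGNORE is a no-op when the string already exists,
    # so repeated calls always map back to the original id.
    conn.execute("INSERT OR IGNORE INTO strings (s) VALUES (?)", (s,))
    return conn.execute("SELECT id FROM strings WHERE s = ?", (s,)).fetchone()[0]

a = get_id(conn, "hello")
b = get_id(conn, "hello")
print(a == b)  # True: same string, same identifier
```

At 112 million rows the UNIQUE constraint still implies a large index, so this pattern complements, rather than replaces, the sorted bulk load discussed later in the thread.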

[sqlite] Index creation on huge table will never finish.

2007-03-21 Thread Chris Jones
Hi all, I have a very simple schema. I need to assign a unique identifier to a large collection of strings, each at most 80 bytes, though typically shorter. The problem is I have 112 million of them. My schema looks as follows: CREATE TABLE rawfen ( fen VARCHAR(80) ); CREATE INDEX …
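The preview truncates the CREATE INDEX statement. Given the stated goal of one identifier per distinct string, a plausible reading is a unique index on fen; the index name and UNIQUE qualifier below are assumptions, not from the original message:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rawfen (fen VARCHAR(80))")
# Hypothetical completion of the truncated statement:
conn.execute("CREATE UNIQUE INDEX idx_rawfen_fen ON rawfen (fen)")

# A unique index rejects the second copy of a string outright:
conn.execute("INSERT INTO rawfen VALUES ('e4 e5 Nf3')")
try:
    conn.execute("INSERT INTO rawfen VALUES ('e4 e5 Nf3')")
except sqlite3.IntegrityError:
    print("duplicate rejected")
```

Building such an index over 112 million pre-existing rows in random order is exactly the operation the thread's later replies address with an external presort.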