, but similarly to my last response, it probably makes
sense for me to add it to my schema as a sanity check if nothing else.
Thanks,
Chris
Derrell.Lipman wrote:
> Chris Jones <[EMAIL PROTECTED]> writes:
>> Derrell.Lipman wrote:
>
> So to guarantee that the *strings*
Thanks everyone for your feedback.
I ended up doing a presort on the data, and then adding the data in order.
At first I was a little concerned about how I would implement an
external sort on a data set that huge, but then realized that the Unix "sort"
command can handle large files, and in
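The presort-then-insert approach described above can be sketched in miniature. This is an illustration only: the real data set was sorted externally with the Unix "sort" command, and the `id` column and table layout here are assumptions, not the poster's exact schema.

```python
# Sketch of the presort approach: deduplicate and sort first, then
# insert in order so the UNIQUE index is built from ascending keys.
import sqlite3

strings = ["b", "a", "c", "a"]        # stand-in for the 112M FEN strings
unique_sorted = sorted(set(strings))  # "sort -u" does this on disk for huge files

conn = sqlite3.connect(":memory:")
# id is an assumed surrogate-key column; the original schema had only fen.
conn.execute("CREATE TABLE rawfen (id INTEGER PRIMARY KEY, fen VARCHAR(80) UNIQUE)")
conn.executemany("INSERT INTO rawfen (fen) VALUES (?)",
                 [(s,) for s in unique_sorted])
conn.commit()
print(conn.execute("SELECT id, fen FROM rawfen ORDER BY id").fetchall())
# -> [(1, 'a'), (2, 'b'), (3, 'c')]
```

Inserting in sorted order means each new key lands at the right edge of the index B-tree, which avoids the random-page churn of unsorted bulk inserts.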
I don't think that solves my problem. Sure, it guarantees that the IDs are
unique, but not the strings.
My whole goal is to be able to create a unique identifier for each string,
in such a way that I don't have the same string listed twice with different
identifiers.
In your solution, there
Hi all,
I have a very simple schema. I need to assign a unique identifier to a
large collection of strings, each at most 80 bytes, though typically
shorter.
The problem is I have 112 million of them.
My schema looks as follows:
CREATE TABLE rawfen ( fen VARCHAR(80) );
CREATE INDEX