Michael Scharf wrote:

Hi,

I have been playing with sqlite, but it seems to have a real
performance problem, when there are millions of rows in the
database (and the database is not in the disk cache of the OS).
Actually, I found SQLite scales a bit better for huge numbers of rows after I created indices for the things I needed, but not that much better than metakit...

What I essentially need is:

- a huge index (a few million keys) with md5 keys (preferably a hash table)
- the index points to some data structure with meta info (meta_data)
  (about 10 attributes with 300-500 bytes per row)
- most of those data structures point to a ~3 KB blob field.
- some of those data structures point to a big file-like
  data structure (a blob with varying sizes, 20 KB-10 MB)
- some data structure to maintain tree- and graph-like
  structures
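The layout described above can be sketched as a plain in-memory data model, just to pin down the relationships (names and classes here are illustrative, not any metakit or SQLite API):

```python
import hashlib

# Hypothetical in-memory model of the layout described above: an
# md5-keyed hash table mapping each key to a small metadata record,
# which in turn references an optional blob and tree/graph links.

class MetaData:
    """Roughly 10 attributes, 300-500 bytes per row (illustrative)."""
    def __init__(self, title, size, blob=None, children=None):
        self.title = title               # one of the ~10 attributes
        self.size = size                 # size of the original content
        self.blob = blob                 # ~3 KB .. 10 MB of binary data, or None
        self.children = children or []   # md5 keys of related records

index = {}  # md5 hex digest -> MetaData (a hash table, as requested)

def put(content, title, blob=None):
    key = hashlib.md5(content).hexdigest()
    index[key] = MetaData(title, len(content), blob)
    return key

k = put(b"some content", "first record", blob=b"\x00binary\x00data")
```

The point of the sketch is only that the md5 key is the sole handle: metadata, blobs, and graph edges all hang off it.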

Questions:
- Can metakit deal with such data (if 50 GB is too big, I could
  move the blob-like data into a flat file)?
As far as I know from talks with JCW last summer, metakit doesn't play in the gigabytes-of-data league. So moving the large blobs out would probably help.
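The "move the blobs into flat files" idea could look like the following sketch: large payloads live on disk in files named by their md5 key, and the database row keeps only the key plus the small metadata. The directory layout and helper names are assumptions for illustration, not a metakit feature:

```python
import hashlib
import os
import tempfile

# Blobs live outside the database, addressed by md5 key; the database
# only stores the 32-character hex key in a metadata row.
blob_dir = tempfile.mkdtemp()

def store_blob(data: bytes) -> str:
    """Write a blob to disk; return the md5 key the database would hold."""
    key = hashlib.md5(data).hexdigest()
    # Fan out into subdirectories so no single directory ends up
    # holding millions of files.
    subdir = os.path.join(blob_dir, key[:2])
    os.makedirs(subdir, exist_ok=True)
    with open(os.path.join(subdir, key), "wb") as f:
        f.write(data)
    return key

def load_blob(key: str) -> bytes:
    with open(os.path.join(blob_dir, key[:2], key), "rb") as f:
        return f.read()

key = store_blob(b"\x00" * 20000)  # a 20 KB binary blob
```

A nice side effect of content addressing: identical blobs dedupe for free, since they hash to the same file name.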

- Can I quickly sort on some columns of the meta_data?
- How would you model a tree/graph-like data structure in
  metakit?
Do you know E4graph? It is designed on top of metakit for exactly those kinds of structures.
http://www.e4graph.com/e4graph/index.html

- Can I store binary data with metakit (null characters in
  strings)?
At least the Tcl binding can, so I assume the other bindings can do it as well.
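The reason embedded NULs matter is that C-style string storage stops at the first \x00 byte, so binary-safe storage has to carry an explicit length instead. A minimal length-prefixed codec illustrates the idea (this is the general technique, not metakit's on-disk format):

```python
import struct

def pack_record(data: bytes) -> bytes:
    # 4-byte little-endian length prefix, then the raw bytes;
    # NUL bytes in the payload are harmless because the length,
    # not a terminator, delimits the record.
    return struct.pack("<I", len(data)) + data

def unpack_record(buf: bytes) -> bytes:
    (n,) = struct.unpack_from("<I", buf)
    return buf[4:4 + n]

payload = b"abc\x00def\x00"
roundtrip = unpack_record(pack_record(payload))
```

Any binding that exposes a counted byte-string type (like Tcl's byte arrays) gets this behavior automatically.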

Michael Schlenker


_______________________________________________
metakit mailing list - [EMAIL PROTECTED]
http://www.equi4.com/mailman/listinfo/metakit