Thanks for all this Eryksun (and Mark!), but... I don't understand why you
brought gdbm in. Is it something underlying shelve, a better approach, or
something else? That last part really puts me in a pickle, and I don't
understand why.
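
My current (quite possibly wrong) understanding is that shelve sits on top
of whatever dbm-style backend is available (dbm.gnu, which wraps gdbm, is
one of them), and runs each value through pickle on the way in and out. A
minimal sketch of how I've been picturing it:

    import shelve

    # shelve behaves like a dict backed by a dbm database on disk;
    # each value is pickled when stored and unpickled when read back
    with shelve.open('games') as db:
        db['game_1'] = {'turns': 31, 'chutes': [48, 62], 'ladders': [1]}

    with shelve.open('games') as db:
        print(db['game_1'])   # {'turns': 31, 'chutes': [48, 62], 'ladders': [1]}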

Separately, I'm also curious about how to process large data sets. For
example, I was trying to play 100 million games of chutes & ladders, and I
believe I crashed my machine: the results for each game (4 ints and 2 short
lists of ints) get gathered into one list, so it becomes a pretty big list.
I need to do stats and other analyses on it in the end (okay, I don't
really NEED to play 100 million games of chutes & ladders, but as long as I
have...). I suppose I could break it into manageable chunks (maybe 1
million games each), but that would make some of the stats either clunky or
wrong (I haven't really attacked that part yet).
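
One idea I've been toying with (no idea yet whether it's the right
approach): instead of holding all 100 million results in one list, stream
each result to disk as it's produced and keep running totals for whatever
stats allow it. Something like this, where play_game is just a stand-in
for my real simulation:

    import json
    import random

    def play_game():
        # stand-in for the real chutes & ladders simulation: returns the
        # per-game record (4 ints and 2 short lists of ints in my case)
        return {'turns': random.randint(15, 80), 'chutes': [], 'ladders': []}

    n_games = 100_000_000
    total_turns = 0

    with open('results.jsonl', 'w') as out:
        for _ in range(n_games):
            game = play_game()
            out.write(json.dumps(game) + '\n')   # one JSON record per line
            total_turns += game['turns']         # running stat, O(1) memory

    print('mean turns per game:', total_turns / n_games)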

And since I'm not REALLY ready to ask this question, I'll tack it on at
the end... I'm also beginning to think about how to speed it up. I imagine
my two options are going to be coding some sections in a faster language
(e.g., C), or maybe introducing multi-threading, since I'm working on a
multicore machine (a Core i7) and I'm doing a lot of iterations of the
same thing with no important ordering... it seems like a good candidate.
Now, I'm probably pretty far from that piece in my learning process, but
this is moving along pretty well, so I'm open to suggestions about how to
proceed. I've started switching up my code a fair bit to try to make it
more OOP, though I'm still rough on that part.
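
From what I've read so far, CPython's GIL means plain threads won't speed
up CPU-bound work like this, so multiprocessing seems like the thing to
try first. A rough sketch of what I'm picturing (again, play_game is a
stand-in for my real simulation):

    from multiprocessing import Pool
    import random

    def play_game(seed):
        # stand-in for the real simulation; the seed keeps each game's
        # randomness independent across worker processes
        rng = random.Random(seed)
        return rng.randint(15, 80)   # e.g., turns taken in one game

    if __name__ == '__main__':
        with Pool() as pool:   # defaults to one worker per CPU core
            turns = pool.map(play_game, range(1_000_000), chunksize=10_000)
        print('mean turns per game:', sum(turns) / len(turns))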

K