Reporting back results...

As I mentioned earlier, I decided to try SQLAlchemy on top of Postgres, and I finally found some time today to try it out. After some googling and playing around, it looks like it is going to work. I have cobbled together hunks of demo code that do all the key things I need.

I also put together a little test program that creates a million records and then runs a few of my small, count-limited queries (where the criteria match effectively the entire table, but the results are limited to just a few rows).
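A count-limited query of that shape might look like the sketch below. The table and column names are my own assumptions, not the original test code, and SQLite in-memory stands in for Postgres so the snippet is self-contained (swap in something like `create_engine("postgresql://...")` for the real thing):

```python
# Sketch of a count-limited query: criteria that match nearly the whole
# table, capped with LIMIT. Table/column names are hypothetical.
from sqlalchemy import create_engine, Column, Integer, String, select
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Record(Base):
    __tablename__ = "records"
    id = Column(Integer, primary_key=True)
    name = Column(String)

# SQLite in-memory for a self-contained demo; the tests described
# here ran against Postgres.
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Record(name=f"row{i}") for i in range(100)])
    session.commit()

    # The WHERE clause matches effectively every row,
    # but LIMIT caps the result set to a handful.
    stmt = select(Record).where(Record.id > 0).limit(5)
    rows = session.execute(stmt).scalars().all()
    print(len(rows))  # 5
```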

It is fast. Queries are pretty much constant time (that is, independent of table size), taking well under a millisecond on a cheap x86 box. While building my table of fake data I found that committing every 4k records is fast and doesn't use terribly much memory, and the inserts also average well under a millisecond each. Watching top while the table builds, CPU time is split roughly 60:40, with my Python program taking the bigger share. There are certainly ways to improve my code, but even with this crude program I am keeping Postgres fairly busy, all from the comfort of Python. Not bad.
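The batched-commit loop can be sketched roughly as below. The `items` table, the scaled-down row count, and the use of in-memory SQLite in place of Postgres are all assumptions for the sake of a runnable example; the batch size of 4k is from the test described above:

```python
# Minimal sketch of the batched-insert loop: commit every 4k records so
# the pending set stays small and memory use stays bounded.
from sqlalchemy import create_engine, Column, Integer, String, func, select
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"          # hypothetical table name
    id = Column(Integer, primary_key=True)
    payload = Column(String)

engine = create_engine("sqlite://")  # Postgres in the real test
Base.metadata.create_all(engine)

BATCH = 4000
N = 10_000                           # the real run used 1,000,000
with Session(engine) as session:
    for i in range(N):
        session.add(Item(payload=f"fake-{i}"))
        if (i + 1) % BATCH == 0:
            session.commit()         # flushes and frees the pending batch
    session.commit()                 # commit any trailing partial batch

    total = session.execute(select(func.count()).select_from(Item)).scalar()
    print(total)  # 10000
```

Committing per batch rather than per row is what keeps the average insert cost well under a millisecond; committing every row would pay transaction overhead a million times over.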

At the moment I am waiting on a build of a table that is bigger than RAM; then I'll see what the penalty is for hopping around in it with my queries. But so far SQLAlchemy looks good, and it is already much faster than MongoDB was.

Thanks for all the help,

-kb, the Kent who wishes his weekends were longer, but he has hopes the coming Boston snowstorm will give him some bonus time.

_______________________________________________
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss