Something I have done is to store a node's full path in the table. This
might not be the best design from a normalization perspective, but in my
case I am storing file paths, so one might argue the full path is the key.
The advantage is that it becomes very easy and fast to query for all the
nodes below or at a certain level using a LIKE operator.
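A minimal sketch of the idea using the standard-library sqlite3 module; the table and path values are made up for illustration. Each row stores its full path ("materialized path"), so one LIKE query returns an entire subtree with no recursion:

```python
import sqlite3

# In-memory table where each node stores its full path as the key idea.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, path TEXT NOT NULL)")
conn.executemany(
    "INSERT INTO node (path) VALUES (?)",
    [("/shows",), ("/shows/ep01",), ("/shows/ep01/shot010",),
     ("/shows/ep02",), ("/other",)],
)

# One LIKE query fetches the node and everything below it.
subtree = [row[0] for row in conn.execute(
    "SELECT path FROM node WHERE path = ? OR path LIKE ? ORDER BY path",
    ("/shows/ep01", "/shows/ep01/%"),
)]
print(subtree)  # ['/shows/ep01', '/shows/ep01/shot010']
```

The trade-off, as noted above, is denormalization: moving a subtree means rewriting the path of every descendant.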
On 09/05/2010 04:41 PM, Alec Munro wrote:
Hi list,
I've got tree-type data representing something of a method-call
history. When this is loaded, in an instance with about 60 entries
total, up to 2 levels deep, amounting to about 14k of data, it takes
about 9 seconds (up from ~0). The laptop HDD I am testing on is pretty
terrible, but there's probably also some kind of optimization I could do.
When I retrieve the data, I convert it to JSON, with a call like this
(where json_test_case_method is a recursive function I defined):
"methods":[json_test_case_method(method) for method in
test_case.methods]}
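For context, a recursive serializer like the `json_test_case_method` mentioned above might look roughly like this sketch; the field names (`id`, `name`, `children`) are assumptions, since the original function isn't shown:

```python
# Hypothetical recursive serializer: turns a tree of method objects into
# nested JSON-able dicts. Field names are assumed, not from the original post.
def json_test_case_method(method):
    return {
        "id": method.id,
        "name": method.name,
        "children": [json_test_case_method(c) for c in method.children],
    }

# Tiny stand-in object so the sketch is self-contained and runnable.
class FakeMethod:
    def __init__(self, id, name, children=()):
        self.id, self.name, self.children = id, name, list(children)

tree = FakeMethod(1, "setUp", [FakeMethod(2, "step_one")])
result = json_test_case_method(tree)
print(result)
```

Note that each `.children` access on a lazily loaded ORM relationship triggers its own SELECT, which is one plausible source of the slowdown described above.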
The hierarchical relationship is defined thusly:
TestCaseMethod.children = relation(TestCaseMethod, cascade="all",
backref=backref("parent",
remote_side=[TestCaseMethod.id]),
order_by=TestCaseMethod.id)
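One common fix for slow tree loads with a relationship like the one above is eager loading, so the children come back in a single query instead of one query per node. A hedged sketch using modern SQLAlchemy (1.4+) naming, `relationship` and `joinedload`, rather than the older `relation` used in the original post; the model here is a guessed minimal version:

```python
# Sketch: eagerly loading two levels of a self-referential relationship.
# Assumes SQLAlchemy 1.4+; the model is a minimal stand-in, not the original.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (Session, backref, declarative_base, joinedload,
                            relationship)

Base = declarative_base()

class TestCaseMethod(Base):
    __tablename__ = "test_case_method"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    parent_id = Column(Integer, ForeignKey("test_case_method.id"))
    children = relationship(
        "TestCaseMethod",
        cascade="all",
        backref=backref("parent", remote_side=[id]),
        order_by=id,
    )

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    root = TestCaseMethod(name="root")
    root.children = [TestCaseMethod(name="a"), TestCaseMethod(name="b")]
    session.add(root)
    session.commit()

with Session(engine) as session:
    # Chained joinedload pulls two levels of children in one SELECT,
    # avoiding the N+1 queries a recursive serializer would otherwise issue.
    root = (
        session.query(TestCaseMethod)
        .options(joinedload(TestCaseMethod.children)
                 .joinedload(TestCaseMethod.children))
        .filter_by(name="root")
        .one()
    )
    names = [c.name for c in root.children]
print(names)  # ['a', 'b']
```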
I've done zero SQLAlchemy optimization in my career thus far, and very
little ORM optimization (a bit with Hibernate several years ago), so
any advice is appreciated. :)
Thanks,
Alec
--
David Gardner
Pipeline Tools Programmer
Jim Henson Creature Shop
dgard...@creatureshop.com
--
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.