> Warning, Python code follows :) I have a database which is built a row
> at a time, and may be updated with random insertions. It can get large,
> so I want to use a blocked view. I gather that blocking is good for
> updates but may slow traversals of the whole database, which I do often.
> So, is it reasonable to do something like this:
Why gather? Just time the difference:
Adding 100,000 folders to your database (using in-memory storage on Windows 2000, 2 GHz):

    0.731 seconds to iterate through the blocked view
    0.531 seconds to iterate through the non-blocked view

Same thing for 1,000 folders:

    0.021 seconds to iterate through the blocked view
    0.010 seconds to iterate through the non-blocked view
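Timings like these are easy to reproduce yourself. Here is a minimal, generic timing helper (pure Python; the Metakit view mentioned in the docstring is just the kind of iterable you would pass in, not something this sketch creates):

```python
import time

def time_iter(view):
    """Return the seconds taken to iterate once over `view`.

    `view` can be any iterable -- for example a Metakit view from
    st.getas(...) or its .blocked() wrapper.
    """
    start = time.time()
    for _ in view:
        pass
    return time.time() - start

# Example with a plain range as a stand-in for a database view:
elapsed = time_iter(range(100000))
print("iterated 100,000 rows in %.3f seconds" % elapsed)
```

Run it once over the blocked view and once over the non-blocked view and compare the two numbers on your own data and hardware.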
I don't think this will really be an issue. However, you can use the following trick:
    vw = st.getas(view...)
    bvw = vw.blocked()
    if len(vw) == 1:
        # only one block, so skip the wrapper and
        # iterate through vw[0]._B directly:
        for row in vw[0]._B:
            ...
But remember: you must always add items through bvw (the blocked view), never through the raw view.
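To make the single-block trick concrete, here is a toy pure-Python model of a blocked container. The BlockedList class and its fixed block size are made up for illustration; this is a conceptual sketch of blocking, not Metakit's implementation:

```python
class BlockedList:
    """Toy model of a blocked view: rows live in fixed-size blocks."""

    def __init__(self, block_size=1000):
        self.block_size = block_size
        self.blocks = [[]]          # list of blocks; starts with one

    def append(self, row):
        # Always append through the blocked wrapper, never into a
        # block directly -- mirrors "always add items through bvw".
        if len(self.blocks[-1]) >= self.block_size:
            self.blocks.append([])
        self.blocks[-1].append(row)

    def __len__(self):
        return sum(len(b) for b in self.blocks)

    def __iter__(self):
        # Normal traversal pays a small indirection cost per block.
        for block in self.blocks:
            for row in block:
                yield row

bl = BlockedList(block_size=3)
for i in range(3):
    bl.append(i)

# The trick: with only one block, skip the wrapper and iterate
# the block itself (analogous to iterating vw[0]._B directly).
if len(bl.blocks) == 1:
    rows = list(bl.blocks[0])
else:
    rows = list(bl)
print(rows)   # -> [0, 1, 2]
```

The per-block indirection in `__iter__` is the traversal overhead the timings above are measuring; the single-block check simply bypasses it when there is nothing to skip over.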
You can also block subviews, which might be important since you might have a million files:
    vw = st.getas("stuff[_B[folder:S,files[_B[name:S]]]]").blocked()

And when you get a files subview, use:

    files = vw[0].files.blocked()
Brian
_____________________________________________
Metakit mailing list - [EMAIL PROTECTED]
http://www.equi4.com/mailman/listinfo/metakit
