I would highly recommend looking into the cProfile and pstats modules and profiling the code that is running slowly. If you're using the protocol buffers client, it could be related to the fact that the pure-Python protocol buffers implementation is well known to be extraordinarily slow. Profile until proven guilty, though.
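Something along these lines is usually enough to see where the time goes. This is just a sketch; write_snapshot() is a hypothetical stand-in for whatever function wraps your bucket.new()/store() calls:

```python
import cProfile
import io
import pstats

def write_snapshot():
    # Hypothetical placeholder for your real code, e.g.:
    #   entry = bucket.new(ts, data=data)
    #   entry.store()
    sum(i * i for i in range(10000))  # stand-in workload

profiler = cProfile.Profile()
profiler.enable()
write_snapshot()
profiler.disable()

# Print the ten most expensive calls by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

If the protocol buffer serialization/deserialization calls dominate the cumulative time column, that would confirm the suspicion above.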
Tom Burdick

On Mon, Feb 14, 2011 at 7:09 AM, Mike Stoddart <[email protected]> wrote:
> I added some code to my system to test writing data into Riak. I'm
> using the Python client library with protocol buffers. I'm writing a
> snapshot of my current data, which is one json object containing on
> average 60 individual json sub-objects. Each sub-object contains about
> 22 values.
>
> # Archived entry. ts is a formatted timestamp.
> entry = self._bucket.new(ts, data=data)
> entry.store()
>
> # Now write the current entry.
> entry = self._bucket.new("current", data=data)
> entry.store()
>
> I'm writing the same data twice; the archived copy and the current
> copy, which I can easily retrieve later. Performance is lower than
> expected; top is showing a constant cpu usage of 10-12%.
>
> I haven't decided to use Riak; this is to help me decide. But for now
> are there any optimisations I can do here? A similar test with
> MongoDB shows a steady cpu usage of 1%. The cpu usages are for my
> client, not Riak's own processes. The only difference in my test app
> is the code that writes the data to the database. Otherwise all other
> code is 100% the same between these two test apps.
>
> Any suggestions appreciated.
> Thanks
> Mike
>
> _______________________________________________
> riak-users mailing list
> [email protected]
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
