Hard to say why your code performs that way; it may not be creating as many objects. Strings, for example, may not be re-created, just referenced. Are you creating new objects for every column returned?

Bringing 600,000 to 10M columns back at once is always going to take time. I think any Python database client would take a while to create objects for 600,000 rows. Do you have an example of pulling 600,000 rows through MySQL into Python to compare against?
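If not, something like this rough MySQLdb timing would give a baseline (a sketch only; the connection details, table, and query are placeholders):

import time
import MySQLdb  # the C-based MySQL-python driver

def time_mysql_fetch(count):
    # Placeholder connection and query; fetchall() is where the driver
    # builds the Python result objects, analogous to fastbinary.
    conn = MySQLdb.connect(host="localhost", user="test", passwd="test", db="test")
    cursor = conn.cursor()
    start = time.time()
    cursor.execute("SELECT name, value FROM some_table LIMIT %s", (count,))
    rows = cursor.fetchall()
    print "Fetched %s rows in %s" % (len(rows), time.time() - start)
    conn.close()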

Is it possible to break up the get_slice into chunks of 10,000 or 100,000 columns? IMHO you will get more consistent performance if you bound the requests: you then have an idea of the upper bound on latency for each request, and a more consistent memory footprint.
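Something along these lines would do it (a rough sketch against the 0.6-era Thrift API, where the keyspace is passed on each call; the ttypes import path and names are illustrative):

from cassandra import ttypes  # Thrift-generated bindings; path may vary

def iter_columns(client, keyspace, key, column_family, chunk_size=10000):
    # Page through one wide row chunk_size columns at a time rather than
    # asking for everything in a single get_slice.
    parent = ttypes.ColumnParent(column_family=column_family)
    start = ""
    while True:
        predicate = ttypes.SlicePredicate(slice_range=ttypes.SliceRange(
            start=start, finish="", reversed=False, count=chunk_size))
        chunk = client.get_slice(keyspace, key, parent, predicate,
                                 ttypes.ConsistencyLevel.QUORUM)
        if start and chunk:
            chunk = chunk[1:]  # the start column comes back again; skip it
        if not chunk:
            return
        for csc in chunk:
            yield csc
        start = chunk[-1].column.name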

For example, in the rough test below, creating 100,000 objects takes 0.75 secs but 600,000 takes 13.

As an example of reprocessing the results, I called go2 with the output of go below.

import time

def go2(buffer):
    # Re-pack the decoded Thrift objects into plain dicts to time a
    # second pass over the results.
    start = time.time()
    buffer2 = [
        {"name": csc.column.name, "value": csc.column.value}
        for csc in buffer
    ]
    print "Done2 in %s" % (time.time() - start)

{977} > python decode_test.py 100000
Done in 0.75460100174
Done2 in 0.314303874969

{978} > python decode_test.py 600000
Done in 13.2945489883
Done2 in 7.32861185074

My general advice is to pull back less data in a single request. 

Aaron

On 20 Oct 2010, at 11:30 AM, Wayne <wav...@gmail.com> wrote:

I am not sure how many bytes, but we do convert the Cassandra object that is returned in 3s into a dictionary in ~1s, and then again into a custom Python object in ~1.5s. Expectations are based on this timing: if we can convert what Thrift returns into a completely new Python object in ~1s, why does Thrift need 3s to give it to us?

To us it is like the MySQL client we use in Python: it is really C wrapped in Python, and it adds almost zero overhead to the time it takes MySQL to return the data. That is the expectation we have and the performance we are looking to get to: disk I/O + 20%.

We are returning one big row; this is not our normal use case but a requirement for us to use Cassandra. We need to get all data for a specific value, as this is a secondary index. It is like getting all users in the state of CA: CA is the key and there is a column for every user id. We are testing with 600,000 columns, but this will grow to 10+ million in the future.
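For illustration, such an index row could be written one column per user id, something like this sketch against the 0.6-era Thrift API (the keyspace, column family, and consistency level are placeholders):

import time
from cassandra import ttypes  # Thrift-generated bindings; path may vary

def index_user(client, state, user_id):
    # One wide row per state: the state code is the key and each user id
    # becomes a column name (empty value).
    path = ttypes.ColumnPath(column_family="UsersByState", column=str(user_id))
    client.insert("Keyspace1", state, path, "", int(time.time() * 1e6),
                  ttypes.ConsistencyLevel.QUORUM)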

We cannot test 0.7 as we are only using 0.6.6. We are trying to evaluate Cassandra, and stability is one concern, so 0.7 is definitely not for us at this point.

Thanks.


On Tue, Oct 19, 2010 at 4:27 PM, Aaron Morton <aa...@thelastpickle.com> wrote:

Just wondering how many bytes you are returning to the client, to get an idea of how slow it is.

The call to fastbinary is decoding the wire format and creating the Python objects. When you ask for 600,000 columns you are creating a lot of Python objects. Each column will be a ColumnOrSuperColumn wrapping a Column, which has probably 2 strings: roughly 2.4 million Python objects in all.

Here's my rough test script.

import time
from cassandra import ttypes  # Thrift-generated bindings; path may vary

def go(count):
    # Build count ColumnOrSuperColumn objects to approximate what
    # fastbinary.decode_binary allocates for a big slice.
    start = time.time()
    buffer = [
        ttypes.ColumnOrSuperColumn(column=ttypes.Column(
            # name, value, timestamp (the trailing 0 is the 0.7-era ttl)
            "column_name_%s" % i, "row_size of something something", 0, 0))
        for i in range(count)
    ]
    print "Done in %s" % (time.time() - start)
    return buffer  # so go2 above can reprocess the results

On my machine that takes 13 seconds for 600,000 and 0.04 for 10,000. The fastbinary module runs a lot faster because it's all in C. It's not a great test, but I think it gives an idea of what you are asking for.

I think there is an element of Python being slower than other languages. But IMHO you are asking for a lot of data. Can you ask for less data?

Out of interest, are you able to try the Avro client? It's still experimental (0.7 only) but may give you something to compare against.

Aaron

On 20 Oct 2010, at 07:23 AM, Wayne <wav...@gmail.com> wrote:


It is an entire row, which is 600,000 columns. We pass a limit of 10 million to make sure we get it all. Our issue is that Thrift itself seems to add more overhead/latency to a read than Cassandra itself takes to do the read. If cfstats for the slowest node reports 2.25s, it is not acceptable that the data comes back to the client in 5.5s. After working with Jonathan we have optimized Cassandra itself to return the quorum read in 2.7s, but we still have 3s getting lost in the Thrift call (fastbinary.decode_binary).

We have seen this pattern hold for millisecond-scale reads of a few columns as well, but it is easier to look at things in seconds. If Cassandra can get the data off the disks in 2.25s, we expect to have the data in a Python object in under 3s. That is a totally realistic expectation from our experience. All latency needs to be pushed down to disk random-read latency, as that should always be what takes the longest; everything else is passing through memory.



On Tue, Oct 19, 2010 at 2:06 PM, aaron morton <aa...@thelastpickle.com> wrote:

Wayne,
I'm calling Cassandra from Python and have not seen too many 3 second reads.

Your last email with log messages in it looks like you are asking for 10,000,000 columns. How much data is this request actually transferring to the client? The column names suggest only a few.

DEBUG [pool-1-thread-64] 2010-10-18 19:25:28,867 StorageProxy.java (line 471) strongread reading data for SliceFromReadCommand(table='table', key='key1', column_parent='QueryPath(columnFamilyName='fact', superColumnName='null', columnName='null')', start='503a', finish='503a7c', reversed=false, count=10000000) from 698@/x.x.x.6

Aaron



On 20 Oct 2010, at 06:18, Jonathan Ellis wrote:

> I would expect C++ or Java to be substantially faster than Python.
> However, I note that Hector (and I believe Pelops) don't yet use the
> newest, fastest Thrift library.
>
> On Tue, Oct 19, 2010 at 8:21 AM, Wayne <wav...@gmail.com> wrote:
>> The changes seem to do the trick. We are down to about 1/2 of the original
>> quorum read performance. I did not see any more errors.
>>
>> More than 3 seconds on the client side is still not acceptable to us. We
>> need the data in Python, but would we be better off going through Java or
>> something else to increase performance? All three seconds are taken up in
>> Thrift itself (fastbinary.decode_binary(self, iprot.trans, (self.__class__,
>> self.thrift_spec))) so I am not sure what other options we have.
>>
>> Thanks for your help.
>>
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of Riptano, the source for professional Cassandra support
> http://riptano.com


