Re: Performance Question

2016-07-06 Thread Dan Burkert
On Wed, Jul 6, 2016 at 7:05 AM, Benjamin Kim  wrote:

> Over the weekend, the row count is up to <500M. I will give it another few
> days to get to 1B rows. I still get consistent times of ~15s for row counts
> despite the amount of data growing.
>
> On another note, I got a solicitation email from SnappyData to evaluate
> their product. They claim to be the “Spark Data Store” with tight
> integration with Spark executors. It claims to be an OLTP and OLAP system
> that is an in-memory data store first, spilling to disk. After going to
> several Spark events, it would seem that this is the new “hot” area for
> vendors. They all (MemSQL, Redis, Aerospike, Datastax, etc.) claim to be
> the best "Spark Data Store”. I’m wondering if Kudu will become this too?
> With the performance I’ve seen so far, it would seem that it can be a
> contender. All that is needed is a hardened Spark connector package, I
> would think. The next evaluation I will be conducting is to see if
> SnappyData’s claims are valid by doing my own tests.
>

It's hard to compare Kudu against any other data store without a lot of
analysis and thorough benchmarking, but it is certainly a goal of Kudu to
be a great platform for ingesting and analyzing data through Spark. Up
to this point, most of the Spark work has been community-driven, but more
thorough integration testing of the Spark connector is going to be a focus
going forward.

- Dan




Re: Performance Question

2016-07-06 Thread Dan Burkert
On Mon, Jul 4, 2016 at 2:46 AM, 袁康(梓悠)  wrote:

> How can I delete data in a Kudu table with Spark (without deleting the
> table itself)?
>

We do not currently have a way to delete a Kudu table through the Spark
connector, but you should be able to instantiate a Kudu client and delete
the table that way. We have discussed making one of the Spark write modes
do a truncate operation, but nothing has been implemented.
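
For anyone who wants to try it, here is a minimal Scala sketch of doing both
through the Java client. The master address, table name, and key column are
made up for illustration, and note that pre-1.0 releases shipped the client
under the org.kududb package rather than org.apache.kudu:

  import org.apache.kudu.client.{KuduClient, SessionConfiguration}

  // Connect directly to the cluster (hypothetical master address).
  val client = new KuduClient.KuduClientBuilder("kudu-master:7051").build()
  try {
    // Delete individual rows by full primary key.
    val table = client.openTable("my_table")
    val session = client.newSession()
    session.setFlushMode(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND)
    val delete = table.newDelete()
    delete.getRow.addString("id", "key-to-remove")  // must supply the whole key
    session.apply(delete)
    session.flush()

    // Or drop the table entirely.
    // client.deleteTable("my_table")
  } finally {
    client.shutdown()
  }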

 - Dan


> --
> From: Todd Lipcon
> Sent: Saturday, July 2, 2016, 02:44
> To: user
> Subject: Re: Performance Question
>
> On Thu, Jun 30, 2016 at 5:39 PM, Benjamin Kim  wrote:
> Hi Todd,
>
> I changed the key to be what you suggested, and I can’t tell the
> difference since it was already fast. But, I did get more numbers.
>
> Yea, you won't see a substantial difference until you're inserting
> billions of rows, etc, and the keys and/or bloom filters no longer fit in
> cache.
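
(For anyone following along: a key change like that happens at table-creation
time, since Kudu primary keys are fixed once a table exists. Below is a rough
Scala sketch with the Java client; the schema, the 16 hash buckets, and the
master address are all made up for illustration. Hash-partitioning the leading
key column spreads inserts across tablets rather than concentrating them in
one hot key range.)

  import org.apache.kudu.{ColumnSchema, Schema, Type}
  import org.apache.kudu.client.{CreateTableOptions, KuduClient}
  import scala.collection.JavaConverters._

  val client = new KuduClient.KuduClientBuilder("kudu-master:7051").build()

  // Composite primary key (user_id, event_time), hash-partitioned on user_id.
  val columns = Seq(
    new ColumnSchema.ColumnSchemaBuilder("user_id", Type.STRING).key(true).build(),
    new ColumnSchema.ColumnSchemaBuilder("event_time", Type.INT64).key(true).build(),
    new ColumnSchema.ColumnSchemaBuilder("value", Type.DOUBLE).build()
  ).asJava

  val options = new CreateTableOptions()
    .addHashPartitions(Seq("user_id").asJava, 16)  // bucket count is illustrative
    .setNumReplicas(3)
  client.createTable("events", new Schema(columns), options)
  client.shutdown()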
>
>
> 104M rows in Kudu table
> - read: 8s
> - count: 16s
> - aggregate: 9s
>
> The time to read took much longer, from 0.2s to 8s; counts were the same at
> 16s; and aggregate queries took longer, from 6s to 9s.
>
> I’m still impressed.
>
> We aim to please ;-) If you have any interest in writing up these
> experiments as a blog post, it would be cool to post them for others to
> learn from.
>
> -Todd
>
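
As a rough sketch of the kind of Spark job behind read/count/aggregate timings
like the ones quoted above, something along these lines should work with the
kudu-spark connector. The format class, option names, table, and column here
are placeholders; they have shifted across connector versions, so check the
version you are running:

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.SQLContext

  val sc = new SparkContext(new SparkConf().setAppName("kudu-timings"))
  val sqlContext = new SQLContext(sc)

  // Load a Kudu table as a DataFrame (hypothetical master and table).
  val df = sqlContext.read
    .format("org.apache.kudu.spark.kudu")
    .option("kudu.master", "kudu-master:7051")
    .option("kudu.table", "events")
    .load()

  df.limit(10).show()                     // "read": materialize a few rows
  println(s"rows: ${df.count()}")         // "count": full-table row count
  df.groupBy("user_id").count().show(10)  // "aggregate": simple group-by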

Re: Performance Question

2016-07-06 Thread Benjamin Kim
Over the weekend, the row count is up to <500M. I will give it another few days 
to get to 1B rows. I still get consistent times of ~15s for row counts
despite the amount of data growing.

On another note, I got a solicitation email from SnappyData to evaluate their 
product. They claim to be the “Spark Data Store” with tight integration with 
Spark executors. It claims to be an OLTP and OLAP system that is an
in-memory data store first, spilling to disk. After going to several Spark events,
it would seem that this is the new “hot” area for vendors. They all (MemSQL, 
Redis, Aerospike, Datastax, etc.) claim to be the best "Spark Data Store”. I’m 
wondering if Kudu will become this too? With the performance I’ve seen so far, 
it would seem that it can be a contender. All that is needed is a hardened 
Spark connector package, I would think. The next evaluation I will be 
conducting is to see if SnappyData’s claims are valid by doing my own tests.

Cheers,
Ben


> On Jun 15, 2016, at 12:47 AM, Todd Lipcon  wrote:
> 
> Hi Benjamin,
> 
> What workload are you using for benchmarks? Using spark or something more 
> custom? rdd or data frame or SQL, etc? Maybe you can share the schema and 
> some queries
> 
> Todd
> 
> Todd
> 
> On Jun 15, 2016 8:10 AM, "Benjamin Kim"  wrote:
> Hi Todd,
> 
> Now that Kudu 0.9.0 is out, I have done some tests. Already, I am impressed.
> Compared to HBase, read and write performance are better. Write performance
> shows the greatest improvement (> 4x), while reads are > 1.5x faster. Granted,
> these are only preliminary tests. Do you know of a way to do some truly
> conclusive tests? I want to see if I can match your results on my 50-node cluster.
> 
> Thanks,
> Ben
> 
>> On May 30, 2016, at 10:33 AM, Todd Lipcon  wrote:
>> 
>> On Sat, May 28, 2016 at 7:12 AM, Benjamin Kim  wrote:
>> Todd,
>> 
>> It sounds like Kudu can possibly top or match those numbers put out by
>> Aerospike. Do you have any performance statistics published, or any
>> instructions on how to measure them myself as a good way to test? In addition,
>> this will be a test using Spark, so should I wait for Kudu version 0.9.0,
>> where support will be built in?
>> 
>> We don't have a lot of benchmarks published yet, especially on the write 
>> side. I've found that thorough cross-system benchmarks are very difficult to 
>> do fairly and accurately, and users often end up misguided if they pay 
>> too much attention to them :) So, given a finite number of developers 
>> working on Kudu, I think we've tended to spend more time on the project 
>> itself and less time focusing on "competition". I'm sure there are use cases 
>> where Kudu will beat out Aerospike, and probably use cases where Aerospike 
>> will beat Kudu as well.
>> 
>> From my perspective, it would be great if you can share some details of your 
>> workload, especially if there are some areas you're finding Kudu lacking. 
>> Maybe we can spot some easy code changes we could make to improve 
>> performance, or suggest a tuning variable you could change.
>> 
>> -Todd
>> 
>> 
>>> On May 27, 2016, at 9:19 PM, Todd Lipcon  wrote:
>>> 
>>> On Fri, May 27, 2016 at 8:20 PM, Benjamin Kim  wrote:
>>> Hi Mike,
>>> 
>>> First of all, thanks for the link. It looks like an interesting read. I 
>>> checked that Aerospike is currently at version 3.8.2.3, and in the article, 
>>> they are evaluating version 3.5.4. The main thing that impressed me was 
>>> their claim that they can beat Cassandra and HBase by 8x for writing and 
>>> 25x for reading. Their big claim to fame is that Aerospike can write 1M 
>>> records per second with only 50 nodes. I wanted to see if this is real.
>>> 
>>> 1M records per second on 50 nodes is pretty doable by Kudu as well, 
>>> depending on the size of your records and the insertion order. I've been 
>>> playing with a ~70 node cluster recently and seen 1M+ writes/second 
>>> sustained, and bursting above 4M. These are 1KB rows with 11 columns, and 
>>> with pretty old HDD-only nodes. I think newer flash-based nodes could do 
>>> better.
>>>  
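
(A quick back-of-the-envelope sanity check on those figures: 1M rows/second at
~1KB per row is about 1 GB/second of ingest cluster-wide, or roughly 15
MB/second per node across ~70 nodes; even with 3x replication that is under 50
MB/second of writes per node, comfortably within HDD sequential bandwidth.)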
>>> 
>>> To answer your questions, we have a DMP with user profiles with many 
>>> attributes. We create segmentation information off of these attributes to 
>>> classify them. Then, we can target advertising appropriately for our sales 
>>> department. Much of the data processing is for applying models on all, or at 
>>> least most, of every profile’s attributes to find similarities (nearest 
>>> neighbor/clustering) over a large number of rows when batch processing or a 
>>> small subset of rows for quick online scoring. So, our use case is a 
>>> typical advanced analytics scenario. We have tried HBase, but it doesn’t 
>>> work well