Re: Spark on Kudu

2016-05-28 Thread Jean-Daniel Cryans
It will be in 0.9.0.

J-D

On Sat, May 28, 2016 at 8:31 AM, Benjamin Kim wrote:

> Hi Chris,
>
> Will all this effort be rolled into 0.9.0 and be ready for use?
>
> Thanks,
> Ben
>
>
> On May 18, 2016, at 9:01 AM, Chris George wrote:
>
> There is some code in review that needs some more refinement.
> It will allow upsert/insert from a DataFrame using the datasource API. It
> will also allow the creation and deletion of tables from a DataFrame:
> http://gerrit.cloudera.org:8080/#/c/2992/
>
> Example usages will look something like:
> http://gerrit.cloudera.org:8080/#/c/2992/5/docs/developing.adoc
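>
> Roughly something like this (a sketch only; the format name and option
> keys below are my guesses and may change before the patch lands):
>
>   // Read a Kudu table into a DataFrame through Spark's generic DataSource API.
>   val df = sqlContext.read
>     .format("kudu")
>     .option("kudu.master", "kudu-master:7051")
>     .option("kudu.table", "my_table")
>     .load()
>
>   // Insert/upsert the contents of a DataFrame back into a Kudu table.
>   df.write
>     .format("kudu")
>     .option("kudu.master", "kudu-master:7051")
>     .option("kudu.table", "my_table")
>     .mode("append")
>     .save()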
>
> -Chris George
>
>
> On 5/18/16, 9:45 AM, "Benjamin Kim" wrote:
>
> Can someone tell me what the state is of this Spark work?
>
> Also, does anyone have any sample code on how to update/insert data in
> Kudu using DataFrames?
>
> Thanks,
> Ben
>
>
> On Apr 13, 2016, at 8:22 AM, Chris George wrote:
>
> SparkSQL cannot support these types of statements, but we may be able to
> implement similar functionality through the API.
> -Chris
>
> On 4/12/16, 5:19 PM, "Benjamin Kim" wrote:
>
> It would be nice to adhere to the SQL:2003 standard for an “upsert” if it
> were to be implemented.
>
> MERGE INTO table_name USING table_reference ON (condition)
>  WHEN MATCHED THEN
>  UPDATE SET column1 = value1 [, column2 = value2 ...]
>  WHEN NOT MATCHED THEN
>  INSERT (column1 [, column2 ...]) VALUES (value1 [, value2 ...])
>
> Cheers,
> Ben
>
> On Apr 11, 2016, at 12:21 PM, Chris George wrote:
>
> I have a WIP kuduRDD that I made a few months ago. I pushed it into gerrit
> if you want to take a look: http://gerrit.cloudera.org:8080/#/c/2754/
> It does predicate pushdown, which the existing input-format-based RDD
> does not.
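>
> To give a flavor of what the pushdown buys you, here is a sketch using the
> Java client's scanner (predicate API from the 0.8-era client; treat the
> names as approximate):
>
>   import org.kududb.client._
>
>   // The predicate is shipped to the tablet servers, so only matching rows
>   // ever cross the wire back to Spark.
>   val client = new KuduClient.KuduClientBuilder("kudu-master:7051").build()
>   val table = client.openTable("metrics")
>   val pred = KuduPredicate.newComparisonPredicate(
>     table.getSchema.getColumn("host"),
>     KuduPredicate.ComparisonOp.EQUAL,
>     "foo.example.com")
>   val scanner = client.newScannerBuilder(table)
>     .addPredicate(pred)
>     .build()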
>
> Within the next two weeks I’m planning to implement a datasource for Spark
> that will have predicate pushdown and insert/update functionality (I need to
> look more at the Cassandra and HBase datasources for the best way to do
> this). I agree that server-side upsert would be helpful.
> Having a datasource would give us useful DataFrames and also make Spark
> SQL usable for Kudu.
>
> My reasoning for having a Spark datasource and not using Impala: 1. We
> have had trouble getting Impala to run fast with high concurrency when
> compared to Spark. 2. We interact with datasources that do not integrate
> with Impala. 3. We have custom SQL query planners for extended SQL
> functionality.
>
> -Chris George
>
>
> On 4/11/16, 12:22 PM, "Jean-Daniel Cryans" wrote:
>
> You guys make a convincing point, although on the upsert side we'll need
> more support from the servers. Right now all you can do is an INSERT and
> then, if you get a dup key, do an UPDATE. I guess we could at least add an
> API on the client side that would manage it, but it wouldn't be atomic.
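>
> A sketch of what that client-side helper could look like with the Java
> API (not atomic, as noted; the exact error-inspection methods may differ
> across client versions, so double-check against yours):
>
>   val session = client.newSession()  // default flush mode is synchronous
>   val insert = table.newInsert()
>   insert.getRow.addLong("id", 42L)
>   insert.getRow.addString("name", "ben")
>   val resp = session.apply(insert)
>
>   // On a duplicate key, retry as an UPDATE. Another writer can sneak in
>   // between the two operations, which is exactly why this isn't atomic.
>   if (resp.hasRowError && resp.getRowError.getErrorStatus.isAlreadyPresent) {
>     val update = table.newUpdate()
>     update.getRow.addLong("id", 42L)
>     update.getRow.addString("name", "ben")
>     session.apply(update)
>   }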
>
> J-D
>
> On Mon, Apr 11, 2016 at 9:34 AM, Mark Hamstra wrote:
>
>> It's pretty simple, actually.  I need to support versioned datasets in a
>> Spark SQL environment.  Instead of a hack on top of a Parquet data store,
>> I'm hoping (among other reasons) to be able to use Kudu's write and
>> timestamp-based read operations to support not only appending data, but
>> also updating existing data, and even some schema migration.  The most
>> typical use case is a dataset that is updated periodically (e.g., weekly or
>> monthly) in which the preliminary data in the previous window (week or
>> month) is updated with values that are expected to remain unchanged from
>> then on, and a new set of preliminary values for the current window needs to
>> be added/appended.
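>>
>> (For the read side of that pattern, what I have in mind is Kudu's snapshot
>> scan pinned to a timestamp; a sketch with the Java client, method names to
>> be double-checked against the current API:
>>
>>   val versionTs: Long = ...  // microsecond timestamp of the version to read
>>   val scanner = client.newScannerBuilder(table)
>>     .readMode(AsyncKuduScanner.ReadMode.READ_AT_SNAPSHOT)
>>     .snapshotTimestampMicros(versionTs)
>>     .build()
>>
>> so each version of the dataset is just a timestamp to scan at.)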
>>
>> Using Kudu's Java API and developing additional functionality on top of
>> what Kudu has to offer isn't too much to ask, but the ease of integration
>> with Spark SQL will gate how quickly we would move to using Kudu and how
>> seriously we'd look at alternatives before making that decision.
>>
>> On Mon, Apr 11, 2016 at 8:14 AM, Jean-Daniel Cryans wrote:
>>
>>> Mark,
>>>
>>> Thanks for taking some time to reply in this thread, glad it caught the
>>> attention of other folks!
>>>
>>> On Sun, Apr 10, 2016 at 12:33 PM, Mark Hamstra wrote:
>>>
 Do they care about being able to insert into Kudu with SparkSQL?


 I care about inserting into Kudu with Spark SQL.  I'm currently delaying a
 refactoring of some Spark SQL-oriented insert functionality while trying to
 evaluate what to expect from Kudu.  Whether Kudu does a good job supporting
 inserts with Spark SQL will be a key consideration as to whether we adopt
 Kudu.

>>>
>>> I'd like to know more about why SparkSQL insert support is necessary for you.
>>> Is it just that you currently do it that way into some database or Parquet,
>>> so with minimal refactoring you'd be able 

Re: Performance Question

2016-05-28 Thread Benjamin Kim
Todd,

It sounds like Kudu can possibly top or match those numbers put out by 
Aerospike. Do you have any performance statistics published, or any instructions 
on how to measure them myself as a good way to test? In addition, this will be a 
test using Spark, so should I wait for Kudu version 0.9.0, where support will be 
built in?

Thanks,
Ben


> On May 27, 2016, at 9:19 PM, Todd Lipcon wrote:
> 
> On Fri, May 27, 2016 at 8:20 PM, Benjamin Kim wrote:
> Hi Mike,
> 
> First of all, thanks for the link. It looks like an interesting read. I 
> checked that Aerospike is currently at version 3.8.2.3, and in the article, 
> they are evaluating version 3.5.4. The main thing that impressed me was their 
> claim that they can beat Cassandra and HBase by 8x for writing and 25x for 
> reading. Their big claim to fame is that Aerospike can write 1M records per 
> second with only 50 nodes. I wanted to see if this is real.
> 
> 1M records per second on 50 nodes is pretty doable by Kudu as well, depending 
> on the size of your records and the insertion order. I've been playing with a 
> ~70 node cluster recently and seen 1M+ writes/second sustained, and bursting 
> above 4M. These are 1KB rows with 11 columns, and with pretty old HDD-only 
> nodes. I think newer flash-based nodes could do better.
>  
> 
> To answer your questions, we have a DMP with user profiles with many 
> attributes. We create segmentation information off of these attributes to 
> classify them. Then, we can target advertising appropriately for our sales 
> department. Much of the data processing is for applying models on all, or at 
> least most, of every profile’s attributes to find similarities (nearest 
> neighbor/clustering) over a large number of rows when batch processing, or on 
> a small subset of rows for quick online scoring. So, our use case is a typical 
> advanced analytics scenario. We have tried HBase, but it doesn’t work well 
> for these types of analytics.
> 
> I read in the Aerospike release notes that they have made many improvements 
> to batch and scan operations.
> 
> I wonder what your thoughts are for using Kudu for this.
> 
> Sounds like a good Kudu use case to me. I've heard great things about 
> Aerospike for the low latency random access portion, but I've also heard that 
> it's _very_ expensive, and not particularly suited to the columnar scan 
> workload. Lastly, I think the Apache license of Kudu is much more appealing 
> than the AGPL3 used by Aerospike. But, that's not really a direct answer to 
> the performance question :)
>  
> 
> Thanks,
> Ben
> 
> 
>> On May 27, 2016, at 6:21 PM, Mike Percy wrote:
>> 
>> Have you considered whether you have a scan-heavy or a random-access-heavy 
>> workload? Have you considered whether you always access/update a whole row 
>> vs. only a partial row? Kudu is a column store, so it has some awesome 
>> performance characteristics when you are doing a lot of scanning of just a 
>> couple of columns.
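>>
>> The scan win comes from projection: ask the scanner for two columns and the
>> tablet servers only read those columns' data. A sketch with the Java client
>> (method names approximate; client and table assumed already opened):
>>
>>   import scala.collection.JavaConverters._
>>
>>   // Only user_id and segment are read and returned; a row store would
>>   // have to read whole rows regardless of the projection.
>>   val scanner = client.newScannerBuilder(table)
>>     .setProjectedColumnNames(Seq("user_id", "segment").asJava)
>>     .build()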
>> 
>> I don't know the answer to your question but if your concern is performance 
>> then I would be interested in seeing comparisons from a perf perspective on 
>> certain workloads.
>> 
>> Finally, a year ago Aerospike did quite poorly in a Jepsen test: 
>> https://aphyr.com/posts/324-jepsen-aerospike 
>> 
>> I wonder if they have addressed any of those issues.
>> 
>> Mike
>> 
>> On Friday, May 27, 2016, Benjamin Kim wrote:
>> I am just curious. How will Kudu compare with Aerospike 
>> (http://www.aerospike.com)? I went to a Spark 
>> Roadshow and found out about this piece of software. It appears to fit our 
>> use case perfectly since we are an ad-tech company trying to leverage our 
>> user profile data. Plus, it already has a Spark connector and has a 
>> SQL-like client. The tables can be accessed using Spark SQL DataFrames and, 
>> also, made into SQL tables for direct use with the Spark SQL ODBC/JDBC 
>> Thriftserver. I see from the work done here 
>> (http://gerrit.cloudera.org:8080/#/c/2992/) that the Spark integration is 
>> well underway and, from the looks of it lately, almost complete. I would 
>> prefer to use Kudu since we are already a Cloudera shop, and Kudu is easy to 
>> deploy and configure using Cloudera Manager. I also hope that some of 
>> Aerospike’s speed optimization techniques can make it into Kudu in the 
>> future, if they have not already been thought of or included.
>> 
>> Just some thoughts…
>> 
>> Cheers,
>> Ben
>> 
>> 
>> -- 
>> Mike Percy
>> Software Engineer, Cloudera
>> 
>> 
> 
> -- 
> Todd Lipcon
> Software Engineer, Cloudera