Re: Spark on Kudu

2016-05-18 Thread Chris George
There is some code in review that needs some more refinement.
It will allow upsert/insert from a DataFrame using the datasource API. It will
also allow the creation and deletion of tables from a DataFrame.
http://gerrit.cloudera.org:8080/#/c/2992/

Example usages will look something like:
http://gerrit.cloudera.org:8080/#/c/2992/5/docs/developing.adoc

-Chris George
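
For a concrete picture of what that might look like, here is a hedged sketch of writing and reading a DataFrame through such a datasource. The format name, option keys, master address, and table name are assumptions for illustration only; the linked developing.adoc in the review is the authoritative example.

import org.apache.spark.sql.{DataFrame, SQLContext, SaveMode}

// Hypothetical sketch only: the datasource package name and option keys below
// are assumptions, not the interface from the patch under review.
object KuduDataFrameSketch {
  val KuduFormat = "org.kududb.spark.kudu"   // assumed datasource package
  val Master     = "kudu-master:7051"        // illustrative master address

  // Append (insert) the rows of a DataFrame into an existing Kudu table.
  def writeToKudu(df: DataFrame, tableName: String): Unit =
    df.write
      .format(KuduFormat)
      .option("kudu.master", Master)         // assumed option key
      .option("kudu.table", tableName)       // assumed option key
      .mode(SaveMode.Append)
      .save()

  // Load a Kudu table back as a DataFrame.
  def readFromKudu(sqlContext: SQLContext, tableName: String): DataFrame =
    sqlContext.read
      .format(KuduFormat)
      .option("kudu.master", Master)
      .option("kudu.table", tableName)
      .load()
}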


On 5/18/16, 9:45 AM, "Benjamin Kim" <bbuil...@gmail.com> wrote:

Can someone tell me what the state is of this Spark work?

Also, does anyone have any sample code on how to update/insert data in Kudu 
using DataFrames?

Thanks,
Ben


On Apr 13, 2016, at 8:22 AM, Chris George <christopher.geo...@rms.com> wrote:

SparkSQL cannot support these types of statements, but we may be able to
implement similar functionality through the API.
-Chris

On 4/12/16, 5:19 PM, "Benjamin Kim" <bbuil...@gmail.com> wrote:

It would be nice to adhere to the SQL:2003 standard for an “upsert” if it were 
to be implemented.

MERGE INTO table_name USING table_reference ON (condition)
 WHEN MATCHED THEN
 UPDATE SET column1 = value1 [, column2 = value2 ...]
 WHEN NOT MATCHED THEN
 INSERT (column1 [, column2 ...]) VALUES (value1 [, value2 …])

Cheers,
Ben

On Apr 11, 2016, at 12:21 PM, Chris George <christopher.geo...@rms.com> wrote:

I have a WIP kuduRDD that I made a few months ago. I pushed it into Gerrit if
you want to take a look: http://gerrit.cloudera.org:8080/#/c/2754/
It does predicate pushdown, which the existing InputFormat-based RDD does not.
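
To make the pushdown point concrete, here is a hedged sketch of evaluating a predicate on the Kudu side with the Java client. The org.kududb package name is from the pre-Apache releases current at the time, and the table and column names are made up.

import org.kududb.client.{KuduClient, KuduPredicate}

// Hedged sketch: the predicate is handed to the scanner and evaluated by the
// tablet servers, so only matching rows come back to the client.
// Table and column names are illustrative.
val client  = new KuduClient.KuduClientBuilder("kudu-master:7051").build()
val table   = client.openTable("metrics")
val hostCol = table.getSchema.getColumn("host")

val scanner = client.newScannerBuilder(table)
  .addPredicate(KuduPredicate.newComparisonPredicate(
    hostCol, KuduPredicate.ComparisonOp.EQUAL, "web01"))
  .build()

while (scanner.hasMoreRows) {
  val results = scanner.nextRows()
  while (results.hasNext) {
    println(results.next().rowToString())
  }
}
client.shutdown()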

Within the next two weeks I’m planning to implement a datasource for Spark that
will have predicate pushdown and insertion/update functionality (I need to look
more at the Cassandra and HBase datasources for the best way to do this). I agree
that server-side upsert would be helpful.
Having a datasource would give us useful DataFrames and also make Spark SQL
usable for Kudu.
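
As a rough illustration of the "Spark SQL usable for Kudu" point, a hedged sketch of registering a Kudu-backed DataFrame and querying it (Spark 1.x API; the format name, option keys, table and column names are assumptions):

import org.apache.spark.sql.SQLContext

// Hedged sketch: register a Kudu-backed DataFrame as a temporary table and
// query it with Spark SQL. All names and option keys are illustrative.
def querySketch(sqlContext: SQLContext): Unit = {
  val kuduDF = sqlContext.read
    .format("org.kududb.spark.kudu")            // assumed datasource name
    .option("kudu.master", "kudu-master:7051")
    .option("kudu.table", "metrics")
    .load()

  kuduDF.registerTempTable("metrics")
  sqlContext.sql("SELECT host, COUNT(*) AS n FROM metrics GROUP BY host").show()
}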

My reasoning for having a Spark datasource and not using Impala is:
1. We have had trouble getting Impala to run fast with high concurrency when compared to Spark.
2. We interact with datasources which do not integrate with Impala.
3. We have custom SQL query planners for extended SQL functionality.

-Chris George


On 4/11/16, 12:22 PM, "Jean-Daniel Cryans" <jdcry...@apache.org> wrote:

You guys make a convincing point, although on the upsert side we'll need more
support from the servers. Right now all you can do is an INSERT and then, if you
get a duplicate key, an UPDATE. I guess we could at least add an API on the client
side that would manage it, but it wouldn't be atomic.

J-D
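
A hedged sketch of that non-atomic client-side fallback, written against the Java client of that era (the package was later renamed org.apache.kudu); table and column names are illustrative, and it assumes the session's default AUTO_FLUSH_SYNC mode so apply() returns a response:

import org.kududb.client.{KuduClient, KuduSession, KuduTable}

// Hedged sketch of the non-atomic fallback: try an INSERT and, if the row
// already exists, retry the same values as an UPDATE. Another writer can still
// slip in between the two operations, which is why it isn't atomic.
def insertOrUpdate(client: KuduClient, table: KuduTable, session: KuduSession,
                   id: Long, value: String): Unit = {
  val insert = table.newInsert()
  insert.getRow.addLong("id", id)
  insert.getRow.addString("value", value)

  val response = session.apply(insert)
  if (response.hasRowError) {
    // Assumes the error is a duplicate key; a real implementation would inspect
    // the row error before deciding to retry as an UPDATE.
    val update = table.newUpdate()
    update.getRow.addLong("id", id)
    update.getRow.addString("value", value)
    session.apply(update)
  }
}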

On Mon, Apr 11, 2016 at 9:34 AM, Mark Hamstra <m...@clearstorydata.com> wrote:
It's pretty simple, actually.  I need to support versioned datasets in a Spark 
SQL environment.  Instead of a hack on top of a Parquet data store, I'm hoping 
(among other reasons) to be able to use Kudu's write and timestamp-based read 
operations to support not only appending data, but also updating existing data, 
and even some schema migration.  The most typical use case is a dataset that is 
updated periodically (e.g., weekly or monthly) in which the preliminary
data in the previous window (week or month) is updated with values that are 
expected to remain unchanged from then on, and a new set of preliminary values 
for the current window needs to be added/appended.

Using Kudu's Java API and developing additional functionality on top of what 
Kudu has to offer isn't too much to ask, but the ease of integration with Spark 
SQL will gate how quickly we would move to using Kudu and how seriously we'd 
look at alternatives before making that decision.

On Mon, Apr 11, 2016 at 8:14 AM, Jean-Daniel Cryans <jdcry...@apache.org> wrote:
Mark,

Thanks for taking some time to reply in this thread, glad it caught the 
attention of other folks!

On Sun, Apr 10, 2016 at 12:33 PM, Mark Hamstra <m...@clearstorydata.com> wrote:
Do they care about being able to insert into Kudu with SparkSQL

I care about inserting into Kudu with Spark SQL.  I'm currently delaying a 
refactoring of some Spark SQL-oriented insert functionality while trying to 
evaluate what to expect from Kudu.  Whether Kudu does a good job supporting 
inserts with Spark SQL will be a key consideration as to whether we adopt Kudu.

I'd like to know more about why SparkSQL inserts are necessary for you. Is it
just that you currently do it that way into some database or Parquet, so with
minimal refactoring you'd be able to use Kudu? Would rewriting those SQL lines
in Scala and directly using the Java API's KuduSession be too much work?
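
For reference, the "rewrite in Scala against KuduSession" option might look roughly like this hedged sketch (Spark 1.x DataFrame API and the Java client package of that era; the master address, table, and columns are illustrative):

import org.apache.spark.sql.DataFrame
import org.kududb.client.KuduClient

// Hedged sketch: write a DataFrame's rows with the Java client directly,
// one Kudu client/session per partition. All names are illustrative.
def writeWithKuduSession(df: DataFrame): Unit = {
  df.foreachPartition { rows =>
    val client  = new KuduClient.KuduClientBuilder("kudu-master:7051").build()
    val table   = client.openTable("my_table")
    val session = client.newSession()
    rows.foreach { row =>
      val insert = table.newInsert()
      insert.getRow.addLong("id", row.getLong(0))
      insert.getRow.addString("value", row.getString(1))
      session.apply(insert)
    }
    session.close()    // flushes any buffered operations
    client.shutdown()
  }
}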

Additionally, what do you expect to gain from using Kudu VS your current 
solution? If it's not completely clear, I'd love to help you think through it.


On Sun, Apr 10, 2016 at 12:23 PM, Jea

Re: Sparse Data

2016-05-12 Thread Chris George
I've used Kudu with an EAV model for sparse data, and that worked extremely well
for us with billions of rows and the correct partitioning.
-Chris
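
For concreteness, a hedged sketch of what such an EAV (entity-attribute-value) table definition might look like with the Java client. Column names, types, replica count, and partitioning are illustrative only; the package was org.kududb in releases of that era.

import org.kududb.ColumnSchema.ColumnSchemaBuilder
import org.kududb.{Schema, Type}
import org.kududb.client.{CreateTableOptions, KuduClient}
import scala.collection.JavaConverters._

// Hedged sketch of an entity-attribute-value layout: one narrow row per
// (entity, attribute) pair instead of one wide column per attribute.
val columns = List(
  new ColumnSchemaBuilder("entity", Type.STRING).key(true).build(),
  new ColumnSchemaBuilder("attribute", Type.STRING).key(true).build(),
  new ColumnSchemaBuilder("value", Type.STRING).nullable(true).build()
).asJava

val options = new CreateTableOptions()
  .setNumReplicas(3)
  .addHashPartitions(List("entity").asJava, 16)  // spread entities across tablets

val client = new KuduClient.KuduClientBuilder("kudu-master:7051").build()
client.createTable("sparse_eav", new Schema(columns), options)
client.shutdown()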

On 5/12/16, 3:21 PM, "Dan Burkert" <d...@cloudera.com> wrote:

Hi Ben,

Kudu doesn't support sparse datasets with many columns very well.  Kudu's data 
model looks much more like the relational, structured data model of a 
traditional SQL database than HBase's data model.  Kudu doesn't yet have a map 
column type (or any nested column types), but we do have BINARY typed columns 
if you can handle your own serialization. Oftentimes, however, it's better to 
restructure the data so that it can fit Kudu's structure better.  If you can 
give more information about your usage patterns (especially details of the
queries you wish to optimize for) I can perhaps give better info.

- Dan
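
To illustrate the BINARY-column option Dan mentions, a hedged sketch in which the application serializes a nested value itself and stores the bytes. The encoding, table, and column names are illustrative, not a recommended format.

import java.nio.charset.StandardCharsets
import org.kududb.client.{KuduSession, KuduTable}

// Hedged sketch: the application serializes a nested value (here a simple
// string map) and stores the bytes in a BINARY column; Kudu just sees bytes.
def encodeAttrs(attrs: Map[String, String]): Array[Byte] =
  attrs.map { case (k, v) => s"$k=$v" }.mkString("\n").getBytes(StandardCharsets.UTF_8)

def writeSparseRow(table: KuduTable, session: KuduSession,
                   entity: String, attrs: Map[String, String]): Unit = {
  val insert = table.newInsert()
  insert.getRow.addString("entity", entity)                  // primary key column
  insert.getRow.addBinary("attributes", encodeAttrs(attrs))  // BINARY column
  session.apply(insert)
}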

On Thu, May 12, 2016 at 2:08 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
Can Kudu handle the use case where sparse data is involved? In many of our 
processes, we deal with data that can have any number of columns and many 
previously unknown column names depending on what attributes are brought in at 
the time. Currently, we use HBase to handle this. Since Kudu is based on HBase, 
can it do the same? Or, do we have to use a map data type column for this?

Thanks,
Ben




Re: best practices to remove/retire data

2016-05-12 Thread Chris George
How hard would a predicate-based delete be?
I.e., a ScanDelete or something.
-Chris George

On 5/12/16, 9:24 AM, "Jean-Daniel Cryans" <jdcry...@apache.org> wrote:

Hi,

Right now this use case is more difficult than it needs to be. In your previous
thread, "Partition and Split rows", we talked about non-covering range
partitions, and this is something that would help your use case a lot. Basically,
you could create partitions that cover full days, and every day you could delete
the old partitions while creating the next day's. Deleting a partition is
really quick and efficient compared to manually deleting individual rows.
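
Once that support exists, the daily retention pass might look roughly like the hedged sketch below. The addRangePartition/dropRangePartition alter-table calls shown here landed only in later Kudu releases (and are shown with the renamed org.apache.kudu package), and the table, column, and timestamp encoding are illustrative.

import org.apache.kudu.client.{AlterTableOptions, KuduClient}

// Hedged sketch of day-based retention with range partitions: each day, add the
// partition for tomorrow and drop the partition that has aged out.
val client = new KuduClient.KuduClientBuilder("kudu-master:7051").build()
val schema = client.openTable("events").getSchema

// Partition bound at midnight of the given epoch day, with event_time stored
// as microseconds since the Unix epoch.
def dayBound(epochDay: Long) = {
  val row = schema.newPartialRow()
  row.addLong("event_time", epochDay * 86400L * 1000000L)
  row
}

val tomorrow      = java.time.LocalDate.now().toEpochDay + 1
val oldestKeptDay = tomorrow - 4   // keep roughly the last 3 days

client.alterTable("events",
  new AlterTableOptions()
    .addRangePartition(dayBound(tomorrow), dayBound(tomorrow + 1))
    .dropRangePartition(dayBound(oldestKeptDay - 1), dayBound(oldestKeptDay)))
client.shutdown()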

Until this is available I'd do this with multiple tables, but it's a mess to
handle as you described.

Hope this helps,

J-D

On Thu, May 12, 2016 at 8:16 AM, Sand Stone <sand.m.st...@gmail.com> wrote:
Hi. Presumably I need to write a program to delete the unwanted rows, say, 
remove all data older than 3 days, while the table is still ingesting new data.

How well will this perform for large tables, both deletion- and ingestion-wise?
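
For reference, the row-by-row version of such a retirement pass with the Java client would look roughly like this hedged sketch; every matching row has to be scanned back and deleted individually, which is what makes it expensive on large tables. The table, key column, and timestamp encoding are illustrative.

import org.kududb.client.{KuduClient, KuduPredicate, SessionConfiguration}

// Hedged sketch: scan for rows older than the cutoff and issue one DELETE per
// matching primary key. All names are illustrative.
val client  = new KuduClient.KuduClientBuilder("kudu-master:7051").build()
val table   = client.openTable("events")
val session = client.newSession()
session.setFlushMode(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND)

val cutoffMs = System.currentTimeMillis() - 3L * 24 * 3600 * 1000
val tsCol    = table.getSchema.getColumn("event_time")

val scanner = client.newScannerBuilder(table)
  .addPredicate(KuduPredicate.newComparisonPredicate(
    tsCol, KuduPredicate.ComparisonOp.LESS, cutoffMs))
  .setProjectedColumnNames(java.util.Arrays.asList("id"))  // only the key is needed
  .build()

while (scanner.hasMoreRows) {
  val rows = scanner.nextRows()
  while (rows.hasNext) {
    val delete = table.newDelete()
    delete.getRow.addLong("id", rows.next().getLong("id"))
    session.apply(delete)
  }
}
session.close()   // flushes any buffered deletes
client.shutdown()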

Or, for this specific case where I retire data by day, I could create a new
table per day. However, the users would then have to be aware of the table naming
scheme somehow. If the retention policy is changed, all the client-side code might
have to change (sure, we can have one level of indirection to minimize the pain).

Thanks.



Re: why boolean type mapping is missing in Spark datasource

2016-04-25 Thread Chris George
Neglected, probably. I'll add it in.

On Mon, Apr 25, 2016 at 7:30 PM, Darren Hoo <darren@gmail.com> wrote:


I was looking at Chris George's improved Spark DataSource implementation.
Kudu supports the boolean type, but the boolean type mapping is missing here.
Is it on purpose or just neglected?
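
For reference, the kind of Kudu-to-Spark SQL type mapping being discussed might look like the hedged sketch below, with the BOOL case included; the actual mapping in the patch under review may differ.

import org.apache.spark.sql.types._
import org.kududb.Type

// Hedged sketch of a Kudu-to-Spark SQL type mapping, including the BOOL case
// reported missing. The real datasource's mapping may differ.
def kuduTypeToSparkType(t: Type): DataType = t match {
  case Type.BOOL      => BooleanType
  case Type.INT8      => ByteType
  case Type.INT16     => ShortType
  case Type.INT32     => IntegerType
  case Type.INT64     => LongType
  case Type.FLOAT     => FloatType
  case Type.DOUBLE    => DoubleType
  case Type.STRING    => StringType
  case Type.BINARY    => BinaryType
  case Type.TIMESTAMP => TimestampType
  case other => throw new IllegalArgumentException(s"Unhandled Kudu type: $other")
}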


On Tue, Apr 26, 2016 at 1:54 AM, Todd Lipcon  wrote:

> Hey Kudu-ers,
>
> For the last month and a half, I've been posting weekly summaries of
> community development activity on the Kudu blog. In case you aren't on
> twitter or slack you might not have seen the posts, so I'm going to start
> emailing them to the list as well.
>
> Here's this week's update:
> http://getkudu.io/2016/04/25/weekly-update.html
>
> Feel free to reply to this mail if you have any questions or would like to
> get involved in development.
>
> -Todd
>


Re: Spark on Kudu

2016-04-13 Thread Chris George
SparkSQL cannot support these types of statements, but we may be able to
implement similar functionality through the API.
-Chris

On 4/12/16, 5:19 PM, "Benjamin Kim" <bbuil...@gmail.com> wrote:

It would be nice to adhere to the SQL:2003 standard for an “upsert” if it were 
to be implemented.

MERGE INTO table_name USING table_reference ON (condition)
 WHEN MATCHED THEN
 UPDATE SET column1 = value1 [, column2 = value2 ...]
 WHEN NOT MATCHED THEN
 INSERT (column1 [, column2 ...]) VALUES (value1 [, value2 …])

Cheers,
Ben

On Apr 11, 2016, at 12:21 PM, Chris George <christopher.geo...@rms.com> wrote:

I have a WIP kuduRDD that I made a few months ago. I pushed it into Gerrit if
you want to take a look: http://gerrit.cloudera.org:8080/#/c/2754/
It does predicate pushdown, which the existing InputFormat-based RDD does not.

Within the next two weeks I’m planning to implement a datasource for Spark that
will have predicate pushdown and insertion/update functionality (I need to look
more at the Cassandra and HBase datasources for the best way to do this). I agree
that server-side upsert would be helpful.
Having a datasource would give us useful DataFrames and also make Spark SQL
usable for Kudu.

My reasoning for having a Spark datasource and not using Impala is:
1. We have had trouble getting Impala to run fast with high concurrency when compared to Spark.
2. We interact with datasources which do not integrate with Impala.
3. We have custom SQL query planners for extended SQL functionality.

-Chris George


On 4/11/16, 12:22 PM, "Jean-Daniel Cryans" <jdcry...@apache.org> wrote:

You guys make a convincing point, although on the upsert side we'll need more
support from the servers. Right now all you can do is an INSERT and then, if you
get a duplicate key, an UPDATE. I guess we could at least add an API on the client
side that would manage it, but it wouldn't be atomic.

J-D

On Mon, Apr 11, 2016 at 9:34 AM, Mark Hamstra <m...@clearstorydata.com> wrote:
It's pretty simple, actually.  I need to support versioned datasets in a Spark 
SQL environment.  Instead of a hack on top of a Parquet data store, I'm hoping 
(among other reasons) to be able to use Kudu's write and timestamp-based read 
operations to support not only appending data, but also updating existing data, 
and even some schema migration.  The most typical use case is a dataset that is 
updated periodically (e.g., weekly or monthly) in which the preliminary
data in the previous window (week or month) is updated with values that are 
expected to remain unchanged from then on, and a new set of preliminary values 
for the current window needs to be added/appended.

Using Kudu's Java API and developing additional functionality on top of what 
Kudu has to offer isn't too much to ask, but the ease of integration with Spark 
SQL will gate how quickly we would move to using Kudu and how seriously we'd 
look at alternatives before making that decision.

On Mon, Apr 11, 2016 at 8:14 AM, Jean-Daniel Cryans <jdcry...@apache.org> wrote:
Mark,

Thanks for taking some time to reply in this thread, glad it caught the 
attention of other folks!

On Sun, Apr 10, 2016 at 12:33 PM, Mark Hamstra <m...@clearstorydata.com> wrote:
Do they care about being able to insert into Kudu with SparkSQL

I care about inserting into Kudu with Spark SQL.  I'm currently delaying a 
refactoring of some Spark SQL-oriented insert functionality while trying to 
evaluate what to expect from Kudu.  Whether Kudu does a good job supporting 
inserts with Spark SQL will be a key consideration as to whether we adopt Kudu.

I'd like to know more about why SparkSQL inserts are necessary for you. Is it
just that you currently do it that way into some database or Parquet, so with
minimal refactoring you'd be able to use Kudu? Would rewriting those SQL lines
in Scala and directly using the Java API's KuduSession be too much work?

Additionally, what do you expect to gain from using Kudu VS your current 
solution? If it's not completely clear, I'd love to help you think through it.


On Sun, Apr 10, 2016 at 12:23 PM, Jean-Daniel Cryans <jdcry...@apache.org> wrote:
Yup, starting to get a good idea.

What are your DS folks looking for in terms of functionality related to Spark? 
A SparkSQL integration that's as fully featured as Impala's? Do they care about
being able to insert into Kudu with SparkSQL, or just being able to query real fast?
Anything more specific to Spark that I'm missing?

FWIW the plan is to get to 1.0 in late Summer/early Fall. At Cloudera all our 
resources are committed to making things happen in time, and a more fully 
featured Spark integration isn't in our plans during that period. I'm really 
hoping someone in the community will help with Spark, the same way we got a big 
con

Re: Spark on Kudu

2016-04-11 Thread Chris George
I have a WIP kuduRDD that I made a few months ago. I pushed it into Gerrit if
you want to take a look: http://gerrit.cloudera.org:8080/#/c/2754/
It does predicate pushdown, which the existing InputFormat-based RDD does not.

Within the next two weeks I’m planning to implement a datasource for Spark that
will have predicate pushdown and insertion/update functionality (I need to look
more at the Cassandra and HBase datasources for the best way to do this). I agree
that server-side upsert would be helpful.
Having a datasource would give us useful DataFrames and also make Spark SQL
usable for Kudu.

My reasoning for having a Spark datasource and not using Impala is:
1. We have had trouble getting Impala to run fast with high concurrency when compared to Spark.
2. We interact with datasources which do not integrate with Impala.
3. We have custom SQL query planners for extended SQL functionality.

-Chris George


On 4/11/16, 12:22 PM, "Jean-Daniel Cryans" <jdcry...@apache.org> wrote:

You guys make a convincing point, although on the upsert side we'll need more
support from the servers. Right now all you can do is an INSERT and then, if you
get a duplicate key, an UPDATE. I guess we could at least add an API on the client
side that would manage it, but it wouldn't be atomic.

J-D

On Mon, Apr 11, 2016 at 9:34 AM, Mark Hamstra <m...@clearstorydata.com> wrote:
It's pretty simple, actually.  I need to support versioned datasets in a Spark 
SQL environment.  Instead of a hack on top of a Parquet data store, I'm hoping 
(among other reasons) to be able to use Kudu's write and timestamp-based read 
operations to support not only appending data, but also updating existing data, 
and even some schema migration.  The most typical use case is a dataset that is 
updated periodically (e.g., weekly or monthly) in which the preliminary
data in the previous window (week or month) is updated with values that are 
expected to remain unchanged from then on, and a new set of preliminary values 
for the current window needs to be added/appended.

Using Kudu's Java API and developing additional functionality on top of what 
Kudu has to offer isn't too much to ask, but the ease of integration with Spark 
SQL will gate how quickly we would move to using Kudu and how seriously we'd 
look at alternatives before making that decision.

On Mon, Apr 11, 2016 at 8:14 AM, Jean-Daniel Cryans <jdcry...@apache.org> wrote:
Mark,

Thanks for taking some time to reply in this thread, glad it caught the 
attention of other folks!

On Sun, Apr 10, 2016 at 12:33 PM, Mark Hamstra <m...@clearstorydata.com> wrote:
Do they care about being able to insert into Kudu with SparkSQL

I care about inserting into Kudu with Spark SQL.  I'm currently delaying a 
refactoring of some Spark SQL-oriented insert functionality while trying to 
evaluate what to expect from Kudu.  Whether Kudu does a good job supporting 
inserts with Spark SQL will be a key consideration as to whether we adopt Kudu.

I'd like to know more about why SparkSQL inserts are necessary for you. Is it
just that you currently do it that way into some database or Parquet, so with
minimal refactoring you'd be able to use Kudu? Would rewriting those SQL lines
in Scala and directly using the Java API's KuduSession be too much work?

Additionally, what do you expect to gain from using Kudu VS your current 
solution? If it's not completely clear, I'd love to help you think through it.


On Sun, Apr 10, 2016 at 12:23 PM, Jean-Daniel Cryans <jdcry...@apache.org> wrote:
Yup, starting to get a good idea.

What are your DS folks looking for in terms of functionality related to Spark? 
A SparkSQL integration that's as fully featured as Impala's? Do they care about
being able to insert into Kudu with SparkSQL, or just being able to query real fast?
Anything more specific to Spark that I'm missing?

FWIW the plan is to get to 1.0 in late Summer/early Fall. At Cloudera all our 
resources are committed to making things happen in time, and a more fully 
featured Spark integration isn't in our plans during that period. I'm really 
hoping someone in the community will help with Spark, the same way we got a big 
contribution for the Flume sink.

J-D

On Sun, Apr 10, 2016 at 11:29 AM, Benjamin Kim <bbuil...@gmail.com> wrote:
Yes, we took Kudu for a test run using 0.6 and 0.7 versions. But, since it’s 
not “production-ready”, upper management doesn’t want to fully deploy it yet. 
They just want to keep an eye on it though. Kudu was so much simpler and easier 
to use in every aspect compared to HBase. Impala was great for the report 
writers and analysts to experiment with for the short time it was up. But, once 
again, the only blocker was the lack of Spark support for our Data 
Developers/Scientists. So, production-level data population