There is code in review that still needs some refinement.
It will allow upsert/insert from a DataFrame using the datasource API. It will
also allow the creation and deletion of tables from a DataFrame:
http://gerrit.cloudera.org:8080/#/c/2992/
Example usages will look something like:
http://gerrit.cloudera.org:8080/#/c/2992/5/docs/developing.adoc
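To make that concrete, here is a rough sketch of what the DataFrame round trip
through the datasource might look like. The format string, option keys, and
table name below are illustrative assumptions, not necessarily the final API in
the review:

import org.apache.spark.sql.{DataFrame, SQLContext, SaveMode}

// Hypothetical: write a DataFrame into an existing Kudu table;
// SaveMode.Append is assumed to map to insert/upsert.
def writeToKudu(df: DataFrame): Unit = {
  df.write
    .format("org.kududb.spark.kudu")            // assumed package name
    .option("kudu.master", "kudu-master:7051")  // assumed option key
    .option("kudu.table", "my_table")           // assumed option key
    .mode(SaveMode.Append)
    .save()
}

// Hypothetical: read the same table back as a DataFrame.
def readFromKudu(sqlContext: SQLContext): DataFrame =
  sqlContext.read
    .format("org.kududb.spark.kudu")
    .option("kudu.master", "kudu-master:7051")
    .option("kudu.table", "my_table")
    .load()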
-Chris George
On 5/18/16, 9:45 AM, "Benjamin Kim"
<bbuil...@gmail.com> wrote:
Can someone tell me what the state is of this Spark work?
Also, does anyone have any sample code on how to update/insert data in Kudu
using DataFrames?
Thanks,
Ben
On Apr 13, 2016, at 8:22 AM, Chris George
<christopher.geo...@rms.com> wrote:
Spark SQL cannot support these types of statements, but we may be able to
implement similar functionality through the API.
-Chris
On 4/12/16, 5:19 PM, "Benjamin Kim"
<bbuil...@gmail.com> wrote:
It would be nice to adhere to the SQL:2003 standard for an “upsert” if it were
to be implemented.
MERGE INTO table_name USING table_reference ON (condition)
WHEN MATCHED THEN
UPDATE SET column1 = value1 [, column2 = value2 ...]
WHEN NOT MATCHED THEN
INSERT (column1 [, column2 ...]) VALUES (value1 [, value2 ...])
Cheers,
Ben
On Apr 11, 2016, at 12:21 PM, Chris George
<christopher.geo...@rms.com> wrote:
I have a WIP kuduRDD that I made a few months ago. I pushed it into gerrit if
you want to take a look. http://gerrit.cloudera.org:8080/#/c/2754/
It does predicate pushdown, which the existing InputFormat-based RDD does not.
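For a sense of what pushdown buys at the client level, here is a sketch against
the Kudu Java client (package names are from the org.kududb era; table and
column names are made up): the predicate travels with the scan and is evaluated
by the tablet servers, so rows that fail it never reach Spark.

import org.kududb.client.{KuduClient, KuduPredicate}
import org.kududb.client.KuduPredicate.ComparisonOp

val client = new KuduClient.KuduClientBuilder("kudu-master:7051").build()
val table  = client.openTable("my_table")
val col    = table.getSchema.getColumn("value")

// The predicate is applied server-side inside the scan; filtered-out
// rows are never shipped over the network.
val scanner = client.newScannerBuilder(table)
  .addPredicate(KuduPredicate.newComparisonPredicate(col, ComparisonOp.GREATER, 100L))
  .build()

while (scanner.hasMoreRows) {
  val batch = scanner.nextRows()
  while (batch.hasNext) println(batch.next())
}
client.shutdown()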
Within the next two weeks I’m planning to implement a datasource for Spark that
will have predicate pushdown and insertion/update functionality (I need to look
more at the Cassandra and HBase datasources for the best way to do this). I
agree that server-side upsert would be helpful.
Having a datasource would give us useful DataFrames and also make Spark SQL
usable for Kudu.
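Hypothetically, "Spark SQL usable for Kudu" would look roughly like this once
the datasource exists (option keys again assumed, Spark 1.x style, sqlContext
in scope):

// Register a Kudu-backed table and query it with plain Spark SQL.
sqlContext.read
  .format("org.kududb.spark.kudu")
  .option("kudu.master", "kudu-master:7051")
  .option("kudu.table", "my_table")
  .load()
  .registerTempTable("my_table")

// The WHERE clause is what predicate pushdown would hand to Kudu.
sqlContext.sql("SELECT key, value FROM my_table WHERE value > 100").show()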
My reasoning for having a Spark datasource and not using Impala is:
1. We have had trouble getting Impala to run fast with high concurrency when compared to Spark.
2. We interact with datasources which do not integrate with Impala.
3. We have custom SQL query planners for extended SQL functionality.
-Chris George
On 4/11/16, 12:22 PM, "Jean-Daniel Cryans"
<jdcry...@apache.org> wrote:
You guys make a convincing point, although on the upsert side we'll need more
support from the servers. Right now all you can do is an INSERT and then, if you
get a duplicate-key error, do an UPDATE. I guess we could at least add an API on
the client side that would manage it, but it wouldn't be atomic.
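In code, that non-atomic client-side fallback would look something like this
sketch against KuduSession (column names are made up, and the exact RowError
accessors vary by client version):

import org.kududb.client.{KuduSession, KuduTable}

// Try an INSERT; on a duplicate-key row error, retry as an UPDATE.
// Another writer can slip in between the two operations -- that gap
// is exactly the missing atomicity.
def insertOrUpdate(table: KuduTable, session: KuduSession,
                   key: Long, value: String): Unit = {
  val insert = table.newInsert()
  insert.getRow.addLong("key", key)
  insert.getRow.addString("value", value)

  val resp = session.apply(insert) // synchronous flush mode assumed
  if (resp.hasRowError && resp.getRowError.getErrorStatus.isAlreadyPresent) {
    val update = table.newUpdate()
    update.getRow.addLong("key", key)
    update.getRow.addString("value", value)
    session.apply(update)
  }
}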
J-D
On Mon, Apr 11, 2016 at 9:34 AM, Mark Hamstra
<m...@clearstorydata.com> wrote:
It's pretty simple, actually. I need to support versioned datasets in a Spark
SQL environment. Instead of a hack on top of a Parquet data store, I'm hoping
(among other reasons) to be able to use Kudu's write and timestamp-based read
operations to support not only appending data, but also updating existing data,
and even some schema migration. The most typical use case is a dataset that is
updated periodically (e.g., weekly or monthly), in which the preliminary
data in the previous window (week or month) is updated with values that are
expected to remain unchanged from then on, and a new set of preliminary values
for the current window needs to be added/appended.
Using Kudu's Java API and developing additional functionality on top of what
Kudu has to offer isn't too much to ask, but the ease of integration with Spark
SQL will gate how quickly we would move to using Kudu and how seriously we'd
look at alternatives before making that decision.
On Mon, Apr 11, 2016 at 8:14 AM, Jean-Daniel Cryans
<jdcry...@apache.org> wrote:
Mark,
Thanks for taking some time to reply in this thread, glad it caught the
attention of other folks!
On Sun, Apr 10, 2016 at 12:33 PM, Mark Hamstra
<m...@clearstorydata.com> wrote:
"Do they care about being able to insert into Kudu with SparkSQL?"
I care about insert into Kudu with Spark SQL. I'm currently delaying a
refactoring of some Spark SQL-oriented insert functionality while trying to
evaluate what to expect from Kudu. Whether Kudu does a good job supporting
inserts with Spark SQL will be a key consideration as to whether we adopt Kudu.
I'd like to know more about why SparkSQL inserts are necessary for you. Is it
just that you currently do it that way into some database or Parquet, so with
minimal refactoring you'd be able to use Kudu? Would re-writing those SQL lines
into Scala and directly using the Java API's KuduSession be too much work?
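For scale, each such SQL line becomes roughly this much Scala against
KuduSession (a sketch, assuming an already-open KuduClient and KuduTable, with
made-up column names):

// "INSERT INTO t (key, value) VALUES (1, 'hello')" rewritten directly
// against the Java client.
val session = client.newSession()
val insert  = table.newInsert()
insert.getRow.addLong("key", 1L)
insert.getRow.addString("value", "hello")
session.apply(insert)
session.close()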
Additionally, what do