Sorry, but Nabble and ML suck

2015-10-31 Thread Martin Senne
Having written a post last Tuesday, I'm still not able to see it on Nabble. And yes, my subscription to u...@apache.spark.org was successful (rechecked a minute ago). What's more, I have no indication (and no confirmation) of whether my post was accepted, rejected, or anything else. This is very l4m3 and so

Re: Sorry, but Nabble and ML suck

2015-10-31 Thread Martin Senne
> Apache archives (which is precisely one of the motivations for proposing to migrate to Discourse), but there is a more readable archive on another unofficial site <http://search-hadoop.com/m/q3RTtzu5vu1tD3w52=Discourse+A+proposed+alternative+to+the+Spark+User+list>.

Re: Sorry, but Nabble and ML suck

2015-10-31 Thread Martin Senne
Ted, thanks. Should I repost? On 31.10.2015 17:41, "Ted Yu" <yuzhih...@gmail.com> wrote: > From the result of http://search-hadoop.com/?q=spark+Martin+Senne , Martin's post from Tuesday didn't go through. > FYI > On Sat, Oct 31, 2015 at 9:34 AM, Nicholas Cham

Why does predicate pushdown not work on HiveContext (specifically HiveThriftServer2)?

2015-10-31 Thread Martin Senne
Hi all, # Program Sketch 1. I create a HiveContext `hiveContext`. 2. With that context, I create a DataFrame `df` from a JDBC relational table. 3. I register the DataFrame `df` via df.registerTempTable("TESTTABLE"). 4. I start a HiveThriftServer2 via HiveThriftServer2.startWithContext(hiveContext). The
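
A minimal sketch of the setup described in this post (not the original code from the thread; JDBC URL, credentials, and the source table name are placeholders) could look roughly like this on the Spark 1.5-era API:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext
    import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

    object ThriftServerSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("pushdown-sketch"))
        val hiveContext = new HiveContext(sc)

        // DataFrame backed by a relational table, read via the JDBC data source
        // (url, dbtable, user and password are placeholders, not from the post)
        val df = hiveContext.read.format("jdbc").options(Map(
          "url"      -> "jdbc:postgresql://host:5432/db",
          "dbtable"  -> "SOURCETABLE",
          "user"     -> "user",
          "password" -> "secret"
        )).load()

        // Expose the DataFrame to Thrift/JDBC clients under the name TESTTABLE
        df.registerTempTable("TESTTABLE")
        HiveThriftServer2.startWithContext(hiveContext)
      }
    }

Queries sent against TESTTABLE through the Thrift server are then planned by the same hiveContext that holds the temp table registration.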

Why is no predicate pushdown performed when using Hive (HiveThriftServer2)?

2015-10-28 Thread Martin Senne
Hi all, # Program Sketch 1. I create a HiveContext `hiveContext`. 2. With that context, I create a DataFrame `df` from a JDBC relational table. 3. I register the DataFrame `df` via df.registerTempTable("TESTTABLE"). 4. I start a HiveThriftServer2 via
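
One way to check whether a predicate actually reaches the JDBC source (a sketch; the column name `x` is only an example, not taken from the post) is to look at the physical plan:

    // Filter on the registered temp table and inspect the plan
    val filtered = hiveContext.sql("SELECT * FROM TESTTABLE WHERE x = 2")
    filtered.explain(true)

If the predicate is pushed down, it appears on the JDBC relation scan node of the physical plan; a separate Filter operator sitting above the scan means Spark is evaluating the predicate itself after fetching the rows.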

When will window ....

2015-08-10 Thread Martin Senne
When will window functions be integrated into Spark (without HiveContext)? Sent with AquaMail for Android http://www.aqua-mail.com On 10 August 2015 23:04:22, Michael Armbrust mich...@databricks.com wrote: You will need to use a HiveContext for window functions to work. On Mon, Aug 10,
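
For reference, a small sketch of a window function on a HiveContext (Spark 1.4/1.5-era API, where the function was still called rowNumber; the data and column names are made up for illustration):

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions._
    import org.apache.spark.sql.hive.HiveContext

    val hiveContext = new HiveContext(sc)   // sc: an existing SparkContext
    import hiveContext.implicits._

    val df = Seq(("a", 1), ("a", 3), ("b", 2)).toDF("key", "value")
    val w  = Window.partitionBy("key").orderBy($"value".desc)

    // number the rows within each key group, highest value first
    df.withColumn("rn", rowNumber().over(w)).show()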

Re: Spark SQL DataFrame: Nullable column and filtering

2015-08-01 Thread Martin Senne
DataFrame.printSchema. (or at least I did not find a way to do so) Cheers, Martin 2015-07-31 22:51 GMT+02:00 Martin Senne martin.se...@googlemail.com: Dear Michael, dear all, a minimal example is listed below. After some further analysis I could figure out that the problem is related
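
Related to the printSchema remark above: nullability can be read both from the printSchema output and programmatically from the schema (a generic sketch; `joined` stands for whatever DataFrame is under discussion):

    joined.printSchema()   // each column line ends in "(nullable = true/false)"
    joined.schema.fields.foreach(f => println(s"${f.name} nullable=${f.nullable}"))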

Re: Spark SQL DataFrame: Nullable column and filtering

2015-07-31 Thread Martin Senne
)
//  |-- y: integer (nullable = true)
//
// +---+---+---+---+
// |  x|  a|  x|  y|
// +---+---+---+---+
// |  2|bob|  2|  5|
// +---+---+---+---+
} } 2015-07-31 1:56 GMT+02:00 Martin Senne martin.se...@googlemail.com: Dear Michael, dear all, distinguishing those records that have a match in mapping from those

Re: Spark SQL DataFrame: Nullable column and filtering

2015-07-30 Thread Martin Senne
anything to do with nullable, which is only a hint to the system so that we can avoid null checking when we know that there are no null values. If you provide the full code I can try and see if this is a bug. On Thu, Jul 30, 2015 at 11:53 AM, Martin Senne martin.se...@googlemail.com wrote: Dear

Re: Spark SQL DataFrame: Nullable column and filtering

2015-07-30 Thread Martin Senne
Dear Michael, dear all, motivation: object OtherEntities { case class Record( x: Int, a: String) case class Mapping( x: Int, y: Int ) val records = Seq( Record(1, "hello"), Record(2, "bob")) val mappings = Seq( Mapping(2, 5) ) } Now I want to perform a *left outer join* on records and
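
A minimal sketch of the left outer join and the null-based filtering this thread is about (written from the OtherEntities definitions above; the DataFrame and variable names are mine, and this only illustrates the intended behaviour, not the issue being debugged):

    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)   // sc: an existing SparkContext

    val recordsDf  = sqlContext.createDataFrame(OtherEntities.records)
    val mappingsDf = sqlContext.createDataFrame(OtherEntities.mappings)

    // left outer join on x: records without a mapping get null in the mapping columns
    val joined = recordsDf.join(mappingsDf, recordsDf("x") === mappingsDf("x"), "left_outer")

    val matched   = joined.filter(mappingsDf("y").isNotNull)   // e.g. Record(2, "bob")
    val unmatched = joined.filter(mappingsDf("y").isNull)      // e.g. Record(1, "hello")

    matched.show()
    unmatched.show()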