Master options Cluster/Client discrepancies.

2016-03-28 Thread satyajit vegesna
Hi All, I have written a Spark program on my dev box, IDE: IntelliJ, Scala version: 2.11.7, Spark version: 1.6.1. It runs fine from the IDE, by providing proper input and output paths including the master. But when I try to deploy the code in my cluster made of below, Spark
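The thread title points at a common cause of such cluster/client discrepancies: a master hard-coded while testing in the IDE silently overrides whatever spark-submit passes. A minimal sketch (the app/object names are illustrative, not from the original message):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MyApp {
  def main(args: Array[String]): Unit = {
    // Leave the master out of the code: a setMaster("local[*]") left over
    // from IDE runs takes precedence over the --master and --deploy-mode
    // flags passed to spark-submit on the cluster.
    val conf = new SparkConf().setAppName("MyApp")
    val sc = new SparkContext(conf)
    // For IDE runs, supply the master externally instead, e.g. via
    // -Dspark.master=local[*] in the run configuration.
    sc.stop()
  }
}
```

Submitted with `spark-submit --master yarn --deploy-mode cluster ...`, the same jar then behaves consistently in both client and cluster modes.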

Re: SPARK-13843 Next steps

2016-03-28 Thread Sean Owen
I tend to agree. If it's going to present a significant technical hurdle and the software is clearly marked non-ASF, e.g. via a different artifact, there's a decent argument the namespace should stay. The artifact has to change, though, and that is what David was referring to in his other message. On Mon,

Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-28 Thread Kostas Sakellis
Also, +1 on dropping jdk7 in Spark 2.0. Kostas On Mon, Mar 28, 2016 at 2:01 PM, Marcelo Vanzin wrote: > Finally got some internal feedback on this, and we're ok with > requiring people to deploy jdk8 for 2.0, so +1 too. > > On Mon, Mar 28, 2016 at 1:15 PM, Luciano Resende

Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-28 Thread Marcelo Vanzin
Finally got some internal feedback on this, and we're ok with requiring people to deploy jdk8 for 2.0, so +1 too. On Mon, Mar 28, 2016 at 1:15 PM, Luciano Resende wrote: > +1, I also checked with few projects inside IBM that consume Spark and they > seem to be ok with the

Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-28 Thread Luciano Resende
+1, I also checked with a few projects inside IBM that consume Spark and they seem to be ok with the direction of dropping JDK 7. On Mon, Mar 28, 2016 at 11:24 AM, Michael Gummelt wrote: > +1 from Mesosphere > > On Mon, Mar 28, 2016 at 5:12 AM, Steve Loughran

Re: OOM and "spark.buffer.pageSize"

2016-03-28 Thread Steve Johnston
Yes I have. That’s the best source of information at the moment. Thanks. -- View this message in context: http://apache-spark-developers-list.1001551.n3.nabble.com/OOM-and-spark-buffer-pageSize-tp16890p16892.html Sent from the Apache Spark Developers List mailing list archive at Nabble.com.

Re: OOM and "spark.buffer.pageSize"

2016-03-28 Thread Ted Yu
I guess you have looked at MemoryManager#pageSizeBytes, where the "spark.buffer.pageSize" config can override the default page size. FYI On Mon, Mar 28, 2016 at 12:07 PM, Steve Johnston < sjohns...@algebraixdata.com> wrote: > I'm attempting to address an OOM issue. I saw referenced in >
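For context, the logic Ted points at looks roughly like this in Spark 1.6's MemoryManager (a paraphrased sketch, not the exact source; `conf`, `maxTungstenMemory`, and `ByteArrayMethods` are fields/helpers of the surrounding class): an explicit "spark.buffer.pageSize" wins over a default derived from memory per core.

```scala
// Paraphrased from MemoryManager#pageSizeBytes (Spark 1.6); treat as a sketch.
val minPageSize = 1L * 1024 * 1024   // 1MB
val maxPageSize = 64L * minPageSize  // 64MB
val cores =
  if (conf.contains("spark.executor.cores")) conf.getInt("spark.executor.cores", 1)
  else Runtime.getRuntime.availableProcessors()
val safetyFactor = 16
// Default: memory per core divided by a safety factor, rounded up to a
// power of two and clamped to [1MB, 64MB].
val size = ByteArrayMethods.nextPowerOf2(maxTungstenMemory / cores / safetyFactor)
val default = math.min(maxPageSize, math.max(minPageSize, size))
conf.getSizeAsBytes("spark.buffer.pageSize", default)
```

So lowering the page size explicitly (e.g. `--conf spark.buffer.pageSize=2m`) is one knob for the "Unable to acquire ... bytes of memory" OOM, at the cost of more page bookkeeping.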

OOM and "spark.buffer.pageSize"

2016-03-28 Thread Steve Johnston
I'm attempting to address an OOM issue ("java.lang.OutOfMemoryError: Unable to acquire bytes of memory"). I saw referenced the configuration setting

Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-28 Thread Michael Gummelt
+1 from Mesosphere On Mon, Mar 28, 2016 at 5:12 AM, Steve Loughran wrote: > > > On 25 Mar 2016, at 01:59, Mridul Muralidharan wrote: > > > > Removing compatibility (with jdk, etc) can be done with a major release- > given that 7 has been EOLed a while

Re: SPARK-13843 Next steps

2016-03-28 Thread Marcelo Vanzin
On Mon, Mar 28, 2016 at 8:33 AM, Cody Koeninger wrote: > There are compatibility problems with the java namespace changing > (e.g. access to private[spark]) I think it would be fine to keep the package names for backwards compatibility, but I think if these external projects

Re: SPARK-13843 and future of streaming backends

2016-03-28 Thread Cody Koeninger
Are you talking about group/identifier name, or contained classes? Because there are plenty of org.apache.* classes distributed via maven with non-apache group / identifiers. On Fri, Mar 25, 2016 at 6:54 PM, David Nalley wrote: > >> As far as group / artifact name

Re: SPARK-13843 Next steps

2016-03-28 Thread Cody Koeninger
I really think the only thing that should have to change is the maven group and identifier, not the java namespace. There are compatibility problems with the java namespace changing (e.g. access to private[spark]), and I don't think that someone who takes the time to change their build file to
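The private[spark] point is concrete: Scala package-qualified visibility is tied to the java package name, not to the maven coordinates. An illustrative sketch (the object/class names are made up for illustration):

```scala
package org.apache.spark {
  // Visible only to code whose package is under org.apache.spark.
  private[spark] object Internal {
    def helper(): String = "ok"
  }
}

package org.apache.spark.streaming.kafka {
  object Connector {
    // Compiles: Connector lives under org.apache.spark, regardless of
    // which maven group/artifact ships it.
    val x: String = org.apache.spark.Internal.helper()
  }
}

// If Connector were moved to e.g. package org.spark_project.kafka, the same
// call would no longer compile, even with identical maven coordinates.
```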

Re: [discuss] ending support for Java 7 in Spark 2.0

2016-03-28 Thread Steve Loughran
> On 25 Mar 2016, at 01:59, Mridul Muralidharan wrote: > > Removing compatibility (with jdk, etc) can be done with a major release- > given that 7 has been EOLed a while back and is now unsupported, we have to > decide if we drop support for it in 2.0 or 3.0 (2+ years from

Re: Any plans to migrate Transformer API to Spark SQL (closer to DataFrames)?

2016-03-28 Thread Michał Zieliński
Hi Maciej, Absolutely. We had to copy HasInputCol/s, HasOutputCol/s (along with a couple of others like HasProbabilityCol) to our repo. Which for most use-cases is good enough, but for some (e.g. operating on any Transformer that accepts either our or Spark's HasInputCol) makes the code clunky.
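The workaround Michał describes exists because the shared param traits in org.apache.spark.ml.param.shared are private[ml], so downstream projects re-declare them. A minimal copy might look like this (a sketch modeled on spark.ml's generated sharedParams, not the exact code):

```scala
import org.apache.spark.ml.param.{Param, Params}

// Re-declared locally because Spark's own HasInputCol is private[ml].
trait HasInputCol extends Params {
  final val inputCol: Param[String] =
    new Param[String](this, "inputCol", "input column name")
  final def getInputCol: String = $(inputCol)
}
```

The clunkiness follows directly: a Transformer mixing in this local trait is not assignable where Spark's own HasInputCol is expected, which is exactly the interoperability gap being discussed.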

Re: Any plans to migrate Transformer API to Spark SQL (closer to DataFrames)?

2016-03-28 Thread Jacek Laskowski
Hi, I've never developed any custom Transformer (or UnaryTransformer in particular), but I'd be for it if that's the case. Jacek 28.03.2016 6:54 AM "Maciej Szymkiewicz" wrote: > Hi Jacek, > > In this context, don't you think it would be useful, if at least some > traits