Hi Kostas

Regarding your *second* point: I believe that requiring user apps to
explicitly declare their dependencies is the clearest API approach when it
comes to the classpath and classloading.
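For context, the strict isolation you describe could be sketched roughly as
a child-first classloader that delegates to the parent (Spark's loader) only
for a whitelist of API packages. This is purely an illustrative sketch of the
idea, not Spark's actual mechanism (and the existing *userClassPathFirst
option behaves differently); all names here are hypothetical:

```java
import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical "strict classpath isolation" loader: classes come from the
// user's own jars unless their package is explicitly whitelisted, in which
// case they are delegated to the parent (e.g. Spark's public API classes).
public class IsolatingClassLoader extends URLClassLoader {
    private final String[] sharedPrefixes;

    public IsolatingClassLoader(URL[] userJars, ClassLoader parent,
                                String[] sharedPrefixes) {
        super(userJars, parent);
        this.sharedPrefixes = sharedPrefixes;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                if (isShared(name)) {
                    // Whitelisted (e.g. public API) classes: parent wins.
                    c = getParent().loadClass(name);
                } else {
                    try {
                        // Everything else: user's own jars first.
                        c = findClass(name);
                    } catch (ClassNotFoundException e) {
                        // JDK classes such as java.* must still resolve.
                        c = getParent().loadClass(name);
                    }
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }

    private boolean isShared(String name) {
        for (String prefix : sharedPrefixes) {
            if (name.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }
}
```

With something like this, nothing leaks accidentally: a class outside the
whitelist either comes from the jars the app declared, or fails to load.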

However, what about the following API: *SparkContext.addJar(String
pathToJar)*? *Is this going to change or be affected in some way?*
Currently I use Spark 1.5.2 in a Java application, and I have built a
utility class that finds the correct path of a dependency
(myPathOfTheJarDependency = something like SparkUtils.getJarFullPathFromClass
(EsSparkSQL.class, "^elasticsearch-hadoop-2.2.0-beta1.*\\.jar$");), which
is not beautiful, but I can live with it.

Then I use *javaSparkContext.addJar(myPathOfTheJarDependency)*; after I
have initialized the javaSparkContext. That way I do not require my Spark
cluster to have my application's dependencies on its classpath, and I
explicitly define the dependencies at runtime, each time I initiate a
SparkContext.
I would be happy, and I believe many other users would be too, if I could
continue with the same or a similar approach to dependencies.
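For reference, a helper along the lines I described could look roughly like
the sketch below. The class and method names are my own illustration (only
addJar itself is Spark API): it resolves the jar a class was loaded from and
sanity-checks the file name against an expected pattern.

```java
import java.security.CodeSource;
import java.util.regex.Pattern;

// Illustrative sketch of a jar-locating utility (names are hypothetical).
public final class JarLocator {

    // Extract the file-name component of a location path.
    static String fileNameOf(String path) {
        return path.substring(path.lastIndexOf('/') + 1);
    }

    public static String getJarFullPathFromClass(Class<?> clazz,
                                                 String fileNameRegex) {
        CodeSource source = clazz.getProtectionDomain().getCodeSource();
        if (source == null || source.getLocation() == null) {
            throw new IllegalStateException(
                    "Cannot locate the jar containing " + clazz.getName());
        }
        String path = source.getLocation().getPath();
        if (!Pattern.matches(fileNameRegex, fileNameOf(path))) {
            throw new IllegalStateException(
                    "Unexpected jar name: " + fileNameOf(path));
        }
        return path;
    }
}
```

The result would then be passed to javaSparkContext.addJar(...) exactly as
described above, after the context has been initialized.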


Regards

2015-12-08 23:40 GMT+02:00 Kostas Sakellis <kos...@cloudera.com>:

> I'd also like to make it a requirement that Spark 2.0 have a stable
> dataframe and dataset API - we should not leave these APIs experimental in
> the 2.0 release. We already know of at least one breaking change we need to
> make to dataframes, now's the time to make any other changes we need to
> stabilize these APIs. Anything we can do to make us feel more comfortable
> about the dataset and dataframe APIs before the 2.0 release?
>
> I've also been thinking that in Spark 2.0, we might want to consider
> strict classpath isolation for user applications. Hadoop 3 is moving in
> this direction. We could, for instance, run all user applications in their
> own classloader that only inherits very specific classes from Spark (ie.
> public APIs). This will require user apps to explicitly declare their
> dependencies as there won't be any accidental class leaking anymore. We do
> something like this for *userClasspathFirst option but it is not as strict
> as what I described. This is a breaking change but I think it will help
> with eliminating weird classpath incompatibility issues between user
> applications and Spark system dependencies.
>
> Thoughts?
>
> Kostas
>
>
> On Fri, Dec 4, 2015 at 3:28 AM, Sean Owen <so...@cloudera.com> wrote:
>
>> To be clear-er, I don't think it's clear yet whether a 1.7 release
>> should exist or not. I could see both making sense. It's also not
>> really necessary to decide now, well before a 1.6 is even out in the
>> field. Deleting the version lost information, and I would not have
>> done that given my reply. Reynold maybe I can take this up with you
>> offline.
>>
>> On Thu, Dec 3, 2015 at 6:03 PM, Mark Hamstra <m...@clearstorydata.com>
>> wrote:
>> > Reynold's post from Nov. 25:
>> >
>> >> I don't think we should drop support for Scala 2.10, or make it harder
>> in
>> >> terms of operations for people to upgrade.
>> >>
>> >> If there are further objections, I'm going to bump remove the 1.7
>> version
>> >> and retarget things to 2.0 on JIRA.
>> >
>> >
>> > On Thu, Dec 3, 2015 at 12:47 AM, Sean Owen <so...@cloudera.com> wrote:
>> >>
>> >> Reynold, did you (or someone else) delete version 1.7.0 in JIRA? I
>> >> think that's premature. If there's a 1.7.0 then we've lost info about
>> >> what it would contain. It's trivial at any later point to merge the
>> >> versions. And, since things change and there's not a pressing need to
>> >> decide one way or the other, it seems fine to at least collect this
>> >> info like we have things like "1.4.3" that may never be released. I'd
>> >> like to add it back?
>> >>
>> >> On Thu, Nov 26, 2015 at 9:45 AM, Sean Owen <so...@cloudera.com> wrote:
>> >> > Maintaining both a 1.7 and 2.0 is too much work for the project,
>> which
>> >> > is over-stretched now. This means that after 1.6 it's just small
>> >> > maintenance releases in 1.x and no substantial features or evolution.
>> >> > This means that the "in progress" APIs in 1.x that will stay that
>> way,
>> >> > unless one updates to 2.x. It's not unreasonable, but means the
>> update
>> >> > to the 2.x line isn't going to be that optional for users.
>> >> >
>> >> > Scala 2.10 is already EOL right? Supporting it in 2.x means
>> supporting
>> >> > it for a couple years, note. 2.10 is still used today, but that's the
>> >> > point of the current stable 1.x release in general: if you want to
>> >> > stick to current dependencies, stick to the current release. Although
>> >> > I think that's the right way to think about support across major
>> >> > versions in general, I can see that 2.x is more of a required update
>> >> > for those following the project's fixes and releases. Hence may
>> indeed
>> >> > be important to just keep supporting 2.10.
>> >> >
>> >> > I can't see supporting 2.12 at the same time (right?). Is that a
>> >> > concern? it will be long since GA by the time 2.x is first released.
>> >> >
>> >> > There's another fairly coherent worldview where development continues
>> >> > in 1.7 and focuses on finishing the loose ends and lots of bug
>> fixing.
>> >> > 2.0 is delayed somewhat into next year, and by that time supporting
>> >> > 2.11+2.12 and Java 8 looks more feasible and more in tune with
>> >> > currently deployed versions.
>> >> >
>> >> > I can't say I have a strong view but I personally hadn't imagined 2.x
>> >> > would start now.
>> >> >
>> >> >
>> >> > On Thu, Nov 26, 2015 at 7:00 AM, Reynold Xin <r...@databricks.com>
>> >> > wrote:
>> >> >> I don't think we should drop support for Scala 2.10, or make it
>> harder
>> >> >> in
>> >> >> terms of operations for people to upgrade.
>> >> >>
>> >> >> If there are further objections, I'm going to bump remove the 1.7
>> >> >> version
>> >> >> and retarget things to 2.0 on JIRA.
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
>> For additional commands, e-mail: dev-h...@spark.apache.org
>>
>>
>
