Re: sbt publish-local fails with 2.0.0-SNAPSHOT

2016-02-01 Thread Mike Hynes
Thank you, Saisai, for the JIRA/PR; I'm glad to see it is a one-line
fix, and I will try this locally in the interim.
Mike
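
(For reference, the local-development workflow described in the quoted message
below - publishing modified Spark jars with publishLocal and then depending on
them from a separate sbt project - typically looks something like this minimal
sketch; the module list and the snapshot version are illustrative only.)

// Downstream project's build.sbt (sketch): consume Spark jars that were
// published to the local Ivy repository (~/.ivy2/local) via `sbt publishLocal`.
resolvers += Resolver.defaultLocal  // usually already on sbt's default chain

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.0.0-SNAPSHOT",
  "org.apache.spark" %% "spark-sql"  % "2.0.0-SNAPSHOT"
)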

On 2/1/16, Saisai Shao  wrote:
> I think it is due to our recent change to override the external resolvers
> in the sbt build profile. I just created a JIRA (
> https://issues.apache.org/jira/browse/SPARK-13109) to track this.
>
>
> On Mon, Feb 1, 2016 at 3:01 PM, Mike Hynes <91m...@gmail.com> wrote:
>
>> Hi devs,
>>
>> I used to be able to do some local development from the upstream
>> master branch and run the publish-local command in an sbt shell to
>> publish the modified jars to the local ~/.ivy2 repository.
>>
>> I relied on this behaviour, since I could write other local packages
>> that had my local 1.X.0-SNAPSHOT dependencies in the build.sbt file,
>> such that I could run distributed tests from outside the spark source.
>>
>> However, having just pulled from the upstream master on
>> 2.0.0-SNAPSHOT, I can *not* run publish-local with sbt, with the
>> following error messages:
>>
>> [...]
>> java.lang.RuntimeException: Undefined resolver 'local'
>> at scala.sys.package$.error(package.scala:27)
>> at sbt.IvyActions$$anonfun$publish$1.apply(IvyActions.scala:120)
>> at sbt.IvyActions$$anonfun$publish$1.apply(IvyActions.scala:117)
>> at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:155)
>> at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:155)
>> at sbt.IvySbt$$anonfun$withIvy$1.apply(Ivy.scala:132)
>> at sbt.IvySbt.sbt$IvySbt$$action$1(Ivy.scala:57)
>> at sbt.IvySbt$$anon$4.call(Ivy.scala:65)
>> at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:93)
>> at
>> xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:78)
>> at
>> xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:97)
>> at xsbt.boot.Using$.withResource(Using.scala:10)
>> at xsbt.boot.Using$.apply(Using.scala:9)
>> at
>> xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:58)
>> at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:48)
>> at xsbt.boot.Locks$.apply0(Locks.scala:31)
>> at xsbt.boot.Locks$.apply(Locks.scala:28)
>> at sbt.IvySbt.withDefaultLogger(Ivy.scala:65)
>> at sbt.IvySbt.withIvy(Ivy.scala:127)
>> at sbt.IvySbt.withIvy(Ivy.scala:124)
>> at sbt.IvySbt$Module.withModule(Ivy.scala:155)
>> at sbt.IvyActions$.publish(IvyActions.scala:117)
>> at
>> sbt.Classpaths$$anonfun$publishTask$1.apply(Defaults.scala:1298)
>> at
>> sbt.Classpaths$$anonfun$publishTask$1.apply(Defaults.scala:1297)
>> at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:35)
>> at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:34)
>> at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
>> at
>> sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
>> at sbt.std.Transform$$anon$4.work(System.scala:63)
>> at
>> sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
>> at
>> sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
>> at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
>> at sbt.Execute.work(Execute.scala:235)
>> at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
>> at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
>> at
>> sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
>> at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> at
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> at java.lang.Thread.run(Thread.java:745)
>> [...]
>> [error] (spark/*:publishLocal) Undefined resolver 'local'
>> [error] (hive/*:publishLocal) Undefined resolver 'local'
>> [error] (streaming-kafka-assembly/*:publishLocal) Undefined resolver
>> 'local'
>> [error] (unsafe/*:publishLocal) Undefined resolver 'local'
>> [error] (streaming-twitter/*:publishLocal) Undefined resolver 'local'
>> [error] (streaming-flume/*:publishLocal) Undefined resolver 'local'
>> [error] (streaming-kafka/*:publishLocal) Undefined resolver 'local'
>> [error] (catalyst/*:publishLocal) Undefined resolver 'local'
>> [error] (streaming-akka/*:publishLocal) Undefined resolver 'local'
>> [error] (streaming-flume-sink/*:publishLocal) Undefined resolver 'local'
>> [error] (streaming-zeromq/*:publishLocal) Undefined resolver 'local'
>> [error] (test-tags/*:publishLocal) Undefined resolver 'local'
>>

Re: Secure multi-tenancy in standalone mode

2016-02-01 Thread Ted Yu
W.r.t. running Spark on YARN, there are a few outstanding issues, e.g.:

SPARK-11182 HDFS Delegation Token

See also the comments under SPARK-12279

FYI

On Mon, Feb 1, 2016 at 1:02 PM, eugene miretsky 
wrote:

> When having multiple users sharing the same Spark cluster, it's a good
> idea to isolate the users - make sure that each user runs under a
> different Linux account and prevent them from accessing data in jobs
> submitted by other users. Is it currently possible to do this with Spark?
>
> The only thing I found about it online is
> http://rnowling.github.io/spark/2015/04/07/multiuser-spark-mesos.html,
> and some older JIRAs about adding support to YARN.
>
> Cheers,
> Eugene
>
>
>
>


Secure multi-tenancy in standalone mode

2016-02-01 Thread eugene miretsky
When having multiple users sharing the same Spark cluster, it's a good idea
to isolate the users - make sure that each user runs under a different
Linux account and prevent them from accessing data in jobs submitted by
other users. Is it currently possible to do this with Spark?

The only thing I found about it online is
http://rnowling.github.io/spark/2015/04/07/multiuser-spark-mesos.html, and
some older JIRAs about adding support to YARN.

Cheers,
Eugene


Encrypting jobs submitted by the client

2016-02-01 Thread eugene miretsky
Spark supports client authentication via a shared secret or Kerberos (on
YARN). However, the job itself is sent unencrypted over the network. Is
there a way to encrypt the jobs the client submits to the cluster?
The rationale for this is very similar to encrypting the HTTP file server
traffic - jars may have sensitive data.

Cheers,
Eugene
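
(For reference, a minimal sketch of the security settings Spark 1.x documents,
with placeholder values; as the comments note, they address authentication and
wire encryption rather than directly answering the question above.)

// Sketch only: documented Spark 1.x security knobs, settable via SparkConf or
// spark-defaults.conf. They cover shared-secret authentication, SASL encryption
// of block transfers, and SSL for the HTTP file server and UI; whether the
// submitted job payload itself is covered is exactly the open question above.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.authenticate", "true")                      // shared-secret auth
  .set("spark.authenticate.secret", "change-me")          // standalone/Mesos only
  .set("spark.authenticate.enableSaslEncryption", "true") // encrypt block transfers
  .set("spark.ssl.enabled", "true")                       // SSL for file server / UI
  .set("spark.ssl.keyStore", "/path/to/keystore.jks")
  .set("spark.ssl.keyStorePassword", "change-me")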


Re: Spark 1.6.1

2016-02-01 Thread Michael Armbrust
We typically do not allow changes to the classpath in maintenance releases.

On Mon, Feb 1, 2016 at 8:16 AM, Hamel Kothari 
wrote:

> I noticed that the Jackson dependency was bumped to 2.5 in master for
> something spark-streaming related. Is there any reason that this upgrade
> can't be included with 1.6.1?
>
> According to later comments on this thread
> (https://issues.apache.org/jira/browse/SPARK-8332) and my personal
> experience, using Spark with Jackson 2.5 hasn't caused any issues, but
> it does have some useful new features. It should be fully backwards
> compatible according to the Jackson folks.
>
> On Mon, Feb 1, 2016 at 10:29 AM Ted Yu  wrote:
>
>> SPARK-12624 has been resolved.
>> According to Wenchen, SPARK-12783 is fixed in 1.6.0 release.
>>
>> Are there other blockers for Spark 1.6.1 ?
>>
>> Thanks
>>
>> On Wed, Jan 13, 2016 at 5:39 PM, Michael Armbrust  wrote:
>>
>>> Hey All,
>>>
>>> While I'm not aware of any critical issues with 1.6.0, there are several
>>> corner cases that users are hitting with the Dataset API that are fixed in
>>> branch-1.6.  As such I'm considering a 1.6.1 release.
>>>
>>> At the moment there are only two critical issues targeted for 1.6.1:
>>>  - SPARK-12624 - When schema is specified, we should treat undeclared
>>> fields as null (in Python)
>>>  - SPARK-12783 - Dataset map serialization error
>>>
>>> When these are resolved I'll likely begin the release process.  If there
>>> are any other issues that we should wait for please contact me.
>>>
>>> Michael
>>>
>>
>>


Re: Scala 2.11 default build

2016-02-01 Thread Reynold Xin
Yes, they do. We haven't dropped 2.10 support yet; there are too many active
2.10 deployments out there.


On Mon, Feb 1, 2016 at 11:33 AM, Jakob Odersky  wrote:

> Awesome!
> +1 on Steve Loughran's question: how does this affect support for
> 2.10? Do future contributions need to work with Scala 2.10?
>
> cheers
>
> On Mon, Feb 1, 2016 at 7:02 AM, Ted Yu  wrote:
> > The following jobs have been established for building against Scala 2.10:
> >
> >
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/SPARK-master-COMPILE-MAVEN-SCALA-2.10/
> >
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/SPARK-master-COMPILE-sbt-SCALA-2.10/
> >
> > FYI
> >
> > On Mon, Feb 1, 2016 at 4:22 AM, Steve Loughran 
> > wrote:
> >>
> >>
> >> On 30 Jan 2016, at 08:22, Reynold Xin  wrote:
> >>
> >> FYI - I just merged Josh's pull request to switch to Scala 2.11 as the
> >> default build.
> >>
> >> https://github.com/apache/spark/pull/10608
> >>
> >>
> >>
> >> does this mean that Scala 2.10 compatibility & testing are no longer
> >> needed?
> >
> >
>
>
>


Re: Scala 2.11 default build

2016-02-01 Thread Jakob Odersky
Awesome!
+1 on Steve Loughran's question: how does this affect support for
2.10? Do future contributions need to work with Scala 2.10?

cheers

On Mon, Feb 1, 2016 at 7:02 AM, Ted Yu  wrote:
> The following jobs have been established for building against Scala 2.10:
>
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/SPARK-master-COMPILE-MAVEN-SCALA-2.10/
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/SPARK-master-COMPILE-sbt-SCALA-2.10/
>
> FYI
>
> On Mon, Feb 1, 2016 at 4:22 AM, Steve Loughran 
> wrote:
>>
>>
>> On 30 Jan 2016, at 08:22, Reynold Xin  wrote:
>>
>> FYI - I just merged Josh's pull request to switch to Scala 2.11 as the
>> default build.
>>
>> https://github.com/apache/spark/pull/10608
>>
>>
>>
>> does this mean that Scala 2.10 compatibility & testing are no longer
>> needed?
>
>




Re: Spark 1.6.1

2016-02-01 Thread Hamel Kothari
I noticed that the Jackson dependency was bumped to 2.5 in master for
something spark-streaming related. Is there any reason that this upgrade
can't be included with 1.6.1?

According to later comments on this thread
(https://issues.apache.org/jira/browse/SPARK-8332) and my personal experience,
using Spark with Jackson 2.5 hasn't caused any issues, but it does have
some useful new features. It should be fully backwards compatible according
to the Jackson folks.

On Mon, Feb 1, 2016 at 10:29 AM Ted Yu  wrote:

> SPARK-12624 has been resolved.
> According to Wenchen, SPARK-12783 is fixed in 1.6.0 release.
>
> Are there other blockers for Spark 1.6.1 ?
>
> Thanks
>
> On Wed, Jan 13, 2016 at 5:39 PM, Michael Armbrust 
> wrote:
>
>> Hey All,
>>
>> While I'm not aware of any critical issues with 1.6.0, there are several
>> corner cases that users are hitting with the Dataset API that are fixed in
>> branch-1.6.  As such I'm considering a 1.6.1 release.
>>
>> At the moment there are only two critical issues targeted for 1.6.1:
>>  - SPARK-12624 - When schema is specified, we should treat undeclared
>> fields as null (in Python)
>>  - SPARK-12783 - Dataset map serialization error
>>
>> When these are resolved I'll likely begin the release process.  If there
>> are any other issues that we should wait for please contact me.
>>
>> Michael
>>
>
>


Re: Spark 1.6.1

2016-02-01 Thread Ted Yu
SPARK-12624 has been resolved.
According to Wenchen, SPARK-12783 is fixed in 1.6.0 release.

Are there other blockers for Spark 1.6.1 ?

Thanks

On Wed, Jan 13, 2016 at 5:39 PM, Michael Armbrust 
wrote:

> Hey All,
>
> While I'm not aware of any critical issues with 1.6.0, there are several
> corner cases that users are hitting with the Dataset API that are fixed in
> branch-1.6.  As such I'm considering a 1.6.1 release.
>
> At the moment there are only two critical issues targeted for 1.6.1:
>  - SPARK-12624 - When schema is specified, we should treat undeclared
> fields as null (in Python)
>  - SPARK-12783 - Dataset map serialization error
>
> When these are resolved I'll likely begin the release process.  If there
> are any other issues that we should wait for please contact me.
>
> Michael
>


Re: Scala 2.11 default build

2016-02-01 Thread Ted Yu
The following jobs have been established for building against Scala 2.10:

https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/SPARK-master-COMPILE-MAVEN-SCALA-2.10/
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/SPARK-master-COMPILE-sbt-SCALA-2.10/

FYI

On Mon, Feb 1, 2016 at 4:22 AM, Steve Loughran 
wrote:

>
> On 30 Jan 2016, at 08:22, Reynold Xin  wrote:
>
> FYI - I just merged Josh's pull request to switch to Scala 2.11 as the
> default build.
>
> https://github.com/apache/spark/pull/10608
>
>
>
> does this mean that Scala 2.10 compatibility & testing are no longer
> needed?
>


Guidelines for writing SPARK packages

2016-02-01 Thread Praveen Devarao
Hi,

Are there any guidelines or specs for writing a Spark package? I would
like to implement a Spark package and would like to know how it needs
to be structured (implement some interfaces, etc.) so that it can plug into
Spark for extended functionality.

Could anyone point me to docs or links on the above?

Thanking You

Praveen Devarao
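
(For reference: there is no single interface a Spark package must implement -
many packages are ordinary libraries published to spark-packages.org - but one
common integration point in Spark 1.x is the Spark SQL data source API. A
minimal sketch follows; the class and column names are illustrative only.)

// Minimal sketch of a Spark SQL data source (Spark 1.x sources API).
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, RelationProvider, TableScan}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Spark looks up a class named DefaultSource in the package name passed to
// sqlContext.read.format("com.example.lines").
class DefaultSource extends RelationProvider {
  override def createRelation(
      sqlContext: SQLContext,
      parameters: Map[String, String]): BaseRelation =
    new LineRelation(sqlContext, parameters("path"))
}

// Exposes a plain text file as a one-column table.
class LineRelation(val sqlContext: SQLContext, path: String)
    extends BaseRelation with TableScan {
  override def schema: StructType =
    StructType(Seq(StructField("line", StringType)))
  override def buildScan(): RDD[Row] =
    sqlContext.sparkContext.textFile(path).map(Row(_))
}

// Usage (illustrative):
//   sqlContext.read.format("com.example.lines").option("path", "/tmp/in.txt").load()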



[ANNOUNCE] New SAMBA Package = Spark + AWS Lambda

2016-02-01 Thread David Russell
Hi all,

Just sharing news of the release of a newly available Spark package, SAMBA.


https://github.com/onetapbeyond/lambda-spark-executor

SAMBA is an Apache Spark package offering seamless integration with the AWS
Lambda compute service for Spark batch and
streaming applications on the JVM.

Within traditional Spark deployments RDD tasks are executed using fixed
compute resources on worker nodes within the Spark cluster. With SAMBA,
application developers can delegate selected RDD tasks to execute using
on-demand AWS Lambda compute infrastructure in the cloud.

Not unlike the recently released ROSE package that
extends the capabilities of traditional Spark applications with support for
CRAN R analytics, SAMBA provides another (hopefully) useful extension for
Spark application developers on the JVM.

SAMBA Spark Package: https://github.com/onetapbeyond/lambda-spark-executor

ROSE Spark Package: https://github.com/onetapbeyond/opencpu-spark-executor


Questions, suggestions, feedback welcome.

David

-- 
"All that is gold does not glitter, not all those who wander are lost."


Re: Scala 2.11 default build

2016-02-01 Thread Steve Loughran

On 30 Jan 2016, at 08:22, Reynold Xin <r...@databricks.com> wrote:

FYI - I just merged Josh's pull request to switch to Scala 2.11 as the default 
build.

https://github.com/apache/spark/pull/10608



does this mean that Scala 2.10 compatibility & testing are no longer needed?


Spark Executor retries infinitely

2016-02-01 Thread Prabhu Joseph
Hi All,

  When a Spark job (Spark 1.5.2) is submitted with a single executor and the
user passes some wrong JVM arguments with spark.executor.extraJavaOptions,
the first executor fails. But the job keeps on retrying, creating a new
executor and failing every time, until CTRL-C is pressed. Do we have a
configuration to limit the retry attempts?

Example:

./spark-submit --class SimpleApp --master "spark://10.10.72.145:7077"
--conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35
-XX:ConcGCThreads=16" /SPARK/SimpleApp.jar

The executor fails with:

Error occurred during initialization of VM
Can't have more ConcGCThreads than ParallelGCThreads.

But the job does not exit; it keeps on creating executors and retrying.
..
16/02/01 06:54:28 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20160201065319-0014/2846 on hostPort 10.10.72.145:36558 with 12 cores,
2.0 GB RAM
16/02/01 06:54:28 INFO AppClient$ClientEndpoint: Executor updated:
app-20160201065319-0014/2846 is now LOADING
16/02/01 06:54:28 INFO AppClient$ClientEndpoint: Executor updated:
app-20160201065319-0014/2846 is now RUNNING
16/02/01 06:54:28 INFO AppClient$ClientEndpoint: Executor updated:
app-20160201065319-0014/2846 is now EXITED (Command exited with code 1)
16/02/01 06:54:28 INFO SparkDeploySchedulerBackend: Executor
app-20160201065319-0014/2846 removed: Command exited with code 1
16/02/01 06:54:28 INFO SparkDeploySchedulerBackend: Asked to remove
non-existent executor 2846
16/02/01 06:54:28 INFO AppClient$ClientEndpoint: Executor added:
app-20160201065319-0014/2847 on worker-20160131230345-10.10.72.145-36558 (
10.10.72.145:36558) with 12 cores
16/02/01 06:54:28 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20160201065319-0014/2847 on hostPort 10.10.72.145:36558 with 12 cores,
2.0 GB RAM
16/02/01 06:54:28 INFO AppClient$ClientEndpoint: Executor updated:
app-20160201065319-0014/2847 is now LOADING
16/02/01 06:54:28 INFO AppClient$ClientEndpoint: Executor updated:
app-20160201065319-0014/2847 is now EXITED (Command exited with code 1)
16/02/01 06:54:28 INFO SparkDeploySchedulerBackend: Executor
app-20160201065319-0014/2847 removed: Command exited with code 1
16/02/01 06:54:28 INFO SparkDeploySchedulerBackend: Asked to remove
non-existent executor 2847
16/02/01 06:54:28 INFO AppClient$ClientEndpoint: Executor added:
app-20160201065319-0014/2848 on worker-20160131230345-10.10.72.145-36558 (
10.10.72.145:36558) with 12 cores
16/02/01 06:54:28 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20160201065319-0014/2848 on hostPort 10.10.72.145:36558 with 12 cores,
2.0 GB RAM
16/02/01 06:54:28 INFO AppClient$ClientEndpoint: Executor updated:
app-20160201065319-0014/2848 is now LOADING
16/02/01 06:54:28 INFO AppClient$ClientEndpoint: Executor updated:
app-20160201065319-0014/2848 is now RUNNING




Thanks,
Prabhu Joseph
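
(Side note on the specific failure above: the JVM refuses to start because
ConcGCThreads exceeds ParallelGCThreads, so one way to make the executor JVM
come up, separate from the retry-limit question, is to keep ConcGCThreads at
or below ParallelGCThreads. A sketch with illustrative thread counts:)

// Sketch only: same options as above, but with ParallelGCThreads set explicitly
// so that ConcGCThreads no longer exceeds it (values are illustrative).
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.executor.extraJavaOptions",
    "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseG1GC " +
    "-XX:InitiatingHeapOccupancyPercent=35 " +
    "-XX:ParallelGCThreads=16 -XX:ConcGCThreads=16")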


Re: sbt publish-local fails with 2.0.0-SNAPSHOT

2016-02-01 Thread Saisai Shao
I think it is due to our recent change to override the external resolvers
in the sbt build profile. I just created a JIRA (
https://issues.apache.org/jira/browse/SPARK-13109) to track this.
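
(Until that JIRA is resolved, one possible interim workaround - a sketch only,
assuming the failure is simply that the overridden resolver list no longer
contains sbt's default local Ivy resolver named "local", and not the actual
SPARK-13109 patch - is to append that resolver back in the build definition:)

// Sketch of a possible interim workaround (not the SPARK-13109 fix itself):
// re-add sbt's default local Ivy resolver, the one named "local", so that
// publishLocal can resolve it again.
externalResolvers := Resolver.defaultLocal +: externalResolvers.value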


On Mon, Feb 1, 2016 at 3:01 PM, Mike Hynes <91m...@gmail.com> wrote:

> Hi devs,
>
> I used to be able to do some local development from the upstream
> master branch and run the publish-local command in an sbt shell to
> publish the modified jars to the local ~/.ivy2 repository.
>
> I relied on this behaviour, since I could write other local packages
> that had my local 1.X.0-SNAPSHOT dependencies in the build.sbt file,
> such that I could run distributed tests from outside the spark source.
>
> However, having just pulled from the upstream master on
> 2.0.0-SNAPSHOT, I can *not* run publish-local with sbt, with the
> following error messages:
>
> [...]
> java.lang.RuntimeException: Undefined resolver 'local'
> at scala.sys.package$.error(package.scala:27)
> at sbt.IvyActions$$anonfun$publish$1.apply(IvyActions.scala:120)
> at sbt.IvyActions$$anonfun$publish$1.apply(IvyActions.scala:117)
> at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:155)
> at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:155)
> at sbt.IvySbt$$anonfun$withIvy$1.apply(Ivy.scala:132)
> at sbt.IvySbt.sbt$IvySbt$$action$1(Ivy.scala:57)
> at sbt.IvySbt$$anon$4.call(Ivy.scala:65)
> at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:93)
> at
> xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:78)
> at
> xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:97)
> at xsbt.boot.Using$.withResource(Using.scala:10)
> at xsbt.boot.Using$.apply(Using.scala:9)
> at
> xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:58)
> at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:48)
> at xsbt.boot.Locks$.apply0(Locks.scala:31)
> at xsbt.boot.Locks$.apply(Locks.scala:28)
> at sbt.IvySbt.withDefaultLogger(Ivy.scala:65)
> at sbt.IvySbt.withIvy(Ivy.scala:127)
> at sbt.IvySbt.withIvy(Ivy.scala:124)
> at sbt.IvySbt$Module.withModule(Ivy.scala:155)
> at sbt.IvyActions$.publish(IvyActions.scala:117)
> at sbt.Classpaths$$anonfun$publishTask$1.apply(Defaults.scala:1298)
> at sbt.Classpaths$$anonfun$publishTask$1.apply(Defaults.scala:1297)
> at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:35)
> at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:34)
> at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
> at
> sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
> at sbt.std.Transform$$anon$4.work(System.scala:63)
> at
> sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
> at
> sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
> at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
> at sbt.Execute.work(Execute.scala:235)
> at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
> at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
> at
> sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
> at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> [...]
> [error] (spark/*:publishLocal) Undefined resolver 'local'
> [error] (hive/*:publishLocal) Undefined resolver 'local'
> [error] (streaming-kafka-assembly/*:publishLocal) Undefined resolver
> 'local'
> [error] (unsafe/*:publishLocal) Undefined resolver 'local'
> [error] (streaming-twitter/*:publishLocal) Undefined resolver 'local'
> [error] (streaming-flume/*:publishLocal) Undefined resolver 'local'
> [error] (streaming-kafka/*:publishLocal) Undefined resolver 'local'
> [error] (catalyst/*:publishLocal) Undefined resolver 'local'
> [error] (streaming-akka/*:publishLocal) Undefined resolver 'local'
> [error] (streaming-flume-sink/*:publishLocal) Undefined resolver 'local'
> [error] (streaming-zeromq/*:publishLocal) Undefined resolver 'local'
> [error] (test-tags/*:publishLocal) Undefined resolver 'local'
> [error] (launcher/*:publishLocal) Undefined resolver 'local'
> [error] (network-shuffle/*:publishLocal) Undefined resolver 'local'
> [error] (streaming-mqtt-assembly/*:publishLocal) Undefined resolver 'local'
> [error] (assembly/*:publishLocal) Undefi