Fwd: Can't run Spark Streaming Kinesis example

2015-12-04 Thread Brian London
On my local system (8 core MBP) the Kinesis ASL example isn't working out
of the box on a fresh build (Spark 1.5.2).  I can see records going into
the Kinesis stream, but the receiver is returning empty DStreams.  The
behavior is similar to an issue that's been discussed previously:

http://stackoverflow.com/questions/26941844/apache-spark-kinesis-sample-not-working

http://apache-spark-user-list.1001560.n3.nabble.com/Having-problem-with-Spark-streaming-with-Kinesis-td19863.html#a19929

In those threads the feedback was that the problem stems from not having
enough cores allocated to both receive and process the incoming data.
However, in my case I have attempted running with the default (local[*]) as
well as local[2], local[4], and local[8], all with the same results.  Is it
possible that the actual number of worker threads is different from what's
requested?  Is there a way to check how many threads were actually
allocated?
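
One way to partially check from inside the job (a sketch, assuming a
SparkContext named sc; note defaultParallelism reflects the requested
local[N] setting rather than a live thread count):

    // Print what Spark actually thinks it was given.
    println(s"master = ${sc.master}")                          // e.g. local[*]
    println(s"defaultParallelism = ${sc.defaultParallelism}")  // N for local[N]
    println(s"JVM cores = ${Runtime.getRuntime.availableProcessors}")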


Re: Spark streaming with Kinesis broken?

2015-12-10 Thread Brian London
Nick's symptoms sound identical to mine.  I should mention that I just
pulled the latest version from github and it seems to be working there.  To
reproduce:


   1. Download Spark 1.5.2 from http://spark.apache.org/downloads.html
   2. build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean package
   3. build/mvn -Pkinesis-asl -DskipTests clean package
   4. Then run simultaneously:
      1. bin/run-example streaming.KinesisWordCountASL [Kinesis app name] [Kinesis stream name] [endpoint URL]
      2. bin/run-example streaming.KinesisWordProducerASL [Kinesis stream name] [endpoint URL] 100 10
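
For reference, the consumer side boils down to something like the sketch
below, mirroring what KinesisWordCountASL does against the 1.5.x API; the
app name, stream name, and endpoint/region are placeholders:

    import java.nio.charset.StandardCharsets.UTF_8
    import org.apache.spark.SparkConf
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kinesis.KinesisUtils
    import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream

    object KinesisSmokeTest {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("KinesisSmokeTest").setMaster("local[4]")
        val ssc = new StreamingContext(conf, Seconds(2))
        val stream = KinesisUtils.createStream(
          ssc, "myKinesisApp", "myStream",
          "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
          InitialPositionInStream.LATEST, Seconds(2), StorageLevel.MEMORY_AND_DISK_2)
        // A healthy receiver prints records here; empty batches reproduce the bug.
        stream.map(bytes => new String(bytes, UTF_8)).print()
        ssc.start()
        ssc.awaitTermination()
      }
    }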


On Thu, Dec 10, 2015 at 2:05 PM Jean-Baptiste Onofré <j...@nanthrax.net>
wrote:

> Hi Nick,
>
> Just to be sure: don't you see some ClassCastException in the log ?
>
> Thanks,
> Regards
> JB
>
> On 12/10/2015 07:56 PM, Nick Pentreath wrote:
> > Could you provide an example / test case and more detail on what issue
> > you're facing?
> >
> > I've just tested a simple program reading from a dev Kinesis stream and
> > using stream.print() to show the records, and it works under 1.5.1 but
> > doesn't appear to be working under 1.5.2.
> >
> > UI for 1.5.2:
> >
> > Inline image 1
> >
> > UI for 1.5.1:
> >
> > Inline image 2
> >
> > On Thu, Dec 10, 2015 at 5:50 PM, Brian London <brianmlon...@gmail.com
> > <mailto:brianmlon...@gmail.com>> wrote:
> >
> > Has anyone managed to run the Kinesis demo in Spark 1.5.2?  The
> > Kinesis ASL that ships with 1.5.2 appears to not work for me
> > although 1.5.1 is fine. I spent some time with Amazon earlier in the
> > week and the only thing we could do to make it work is to change the
> > version to 1.5.1.  Can someone please attempt to reproduce before I
> > open a JIRA issue for it?
> >
> >
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com


Re: Spark streaming with Kinesis broken?

2015-12-10 Thread Brian London
Yes, it worked in the 1.6 branch as of commit
db5165246f2888537dd0f3d4c5a515875c7358ed.  That makes it much less serious
of an issue, although it would be nice to know what the root cause is to
avoid a regression.

On Thu, Dec 10, 2015 at 4:03 PM Burak Yavuz <brk...@gmail.com> wrote:

> I've noticed this happening when there were dependency conflicts, and
> it is super hard to debug.
> It seems that the KinesisClientLibrary version in Spark 1.5.2 is 1.3.0,
> but it is 1.2.1 in Spark 1.5.1.
> I feel like that seems to be the problem...
>
> Brian, did you verify that it works with the 1.6.0 branch?
>
> Thanks,
> Burak
>
> On Thu, Dec 10, 2015 at 11:45 AM, Brian London <brianmlon...@gmail.com>
> wrote:
>
>> Nick's symptoms sound identical to mine.  I should mention that I just
>> pulled the latest version from github and it seems to be working there.  To
>> reproduce:
>>
>>
>>    1. Download Spark 1.5.2 from http://spark.apache.org/downloads.html
>>    2. build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean package
>>    3. build/mvn -Pkinesis-asl -DskipTests clean package
>>    4. Then run simultaneously:
>>       1. bin/run-example streaming.KinesisWordCountASL [Kinesis app name] [Kinesis stream name] [endpoint URL]
>>       2. bin/run-example streaming.KinesisWordProducerASL [Kinesis stream name] [endpoint URL] 100 10
>>
>>
>> On Thu, Dec 10, 2015 at 2:05 PM Jean-Baptiste Onofré <j...@nanthrax.net>
>> wrote:
>>
>>> Hi Nick,
>>>
>>> Just to be sure: don't you see some ClassCastException in the log ?
>>>
>>> Thanks,
>>> Regards
>>> JB
>>>
>>> On 12/10/2015 07:56 PM, Nick Pentreath wrote:
>>> > Could you provide an example / test case and more detail on what issue
>>> > you're facing?
>>> >
>>> > I've just tested a simple program reading from a dev Kinesis stream and
>>> > using stream.print() to show the records, and it works under 1.5.1 but
>>> > doesn't appear to be working under 1.5.2.
>>> >
>>> > UI for 1.5.2:
>>> >
>>> > Inline image 1
>>> >
>>> > UI for 1.5.1:
>>> >
>>> > Inline image 2
>>> >
>>> > On Thu, Dec 10, 2015 at 5:50 PM, Brian London <brianmlon...@gmail.com
>>> > <mailto:brianmlon...@gmail.com>> wrote:
>>> >
>>> > Has anyone managed to run the Kinesis demo in Spark 1.5.2?  The
>>> > Kinesis ASL that ships with 1.5.2 appears to not work for me
>>> > although 1.5.1 is fine. I spent some time with Amazon earlier in
>>> the
>>> > week and the only thing we could do to make it work is to change
>>> the
>>> > version to 1.5.1.  Can someone please attempt to reproduce before I
>>> > open a JIRA issue for it?
>>> >
>>> >
>>>
>>> --
>>> Jean-Baptiste Onofré
>>> jbono...@apache.org
>>> http://blog.nanthrax.net
>>> Talend - http://www.talend.com
>


Re: Spark streaming with Kinesis broken?

2015-12-11 Thread Brian London
Yes, it's against master: https://github.com/apache/spark/pull/10256

I'll push the KCL version bump after my local tests finish.
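
In the meantime, a possible workaround is to pin the AWS SDK yourself; a
build.sbt sketch, untested here, using the versions from Nick's analysis
quoted below (KCL 1.3.0 expects SDK 1.9.37, while Spark 1.5.2 keeps 1.9.16):

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-streaming-kinesis-asl" % "1.5.2",
      // Override the transitive 1.9.16 with the version KCL 1.3.0 was built against.
      "com.amazonaws" % "aws-java-sdk" % "1.9.37"
    )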

On Fri, Dec 11, 2015 at 10:42 AM Nick Pentreath <nick.pentre...@gmail.com>
wrote:

> Is that PR against master branch?
>
> S3 read comes from Hadoop / jet3t afaik
>
> —
> Sent from Mailbox <https://www.dropbox.com/mailbox>
>
>
> On Fri, Dec 11, 2015 at 5:38 PM, Brian London <brianmlon...@gmail.com>
> wrote:
>
>> That's good news.  I've got a PR in to bump the SDK version to 1.10.40 and
>> the KCL to 1.6.1 which I'm running tests on locally now.
>>
>> Is the AWS SDK not used for reading/writing from S3 or do we get that for
>> free from the Hadoop dependencies?
>>
>> On Fri, Dec 11, 2015 at 5:07 AM Nick Pentreath <nick.pentre...@gmail.com>
>> wrote:
>>
>>> cc'ing dev list
>>>
>>> Ok, looks like when the KCL version was updated in
>>> https://github.com/apache/spark/pull/8957, the AWS SDK version was not,
>>> probably leading to a dependency conflict, though as Burak mentions it's hard
>>> to debug as no exceptions seem to get thrown... I've tested 1.5.2 locally
>>> and on my 1.5.2 EC2 cluster, and no data is received, and nothing shows up
>>> in driver or worker logs, so any exception is getting swallowed somewhere.
>>>
>>> Run starting. Expected test count is: 4
>>> KinesisStreamSuite:
>>> Using endpoint URL https://kinesis.eu-west-1.amazonaws.com for creating
>>> Kinesis streams for tests.
>>> - KinesisUtils API
>>> - RDD generation
>>> - basic operation *** FAILED ***
>>>   The code passed to eventually never returned normally. Attempted 13
>>> times over 2.04 minutes. Last failure message: Set() did not equal
>>> Set(5, 10, 1, 6, 9, 2, 7, 3, 8, 4)
>>>   Data received does not match data sent. (KinesisStreamSuite.scala:188)
>>> - failure recovery *** FAILED ***
>>>   The code passed to eventually never returned normally. Attempted 63
>>> times over 2.02863831 minutes. Last failure message:
>>> isCheckpointPresent was true, but 0 was not greater than 10.
>>> (KinesisStreamSuite.scala:228)
>>> Run completed in 5 minutes, 0 seconds.
>>> Total number of tests run: 4
>>> Suites: completed 1, aborted 0
>>> Tests: succeeded 2, failed 2, canceled 0, ignored 0, pending 0
>>> *** 2 TESTS FAILED ***
>>> [INFO] ------------------------------------------------------------------------
>>> [INFO] BUILD FAILURE
>>> [INFO] ------------------------------------------------------------------------
>>>
>>>
>>> KCL 1.3.0 depends on *1.9.37* SDK (
>>> https://github.com/awslabs/amazon-kinesis-client/blob/1.3.0/pom.xml#L26)
>>> while the Spark Kinesis dependency was kept at *1.9.16.*
>>>
>>> I've run the integration tests on branch-1.5 (1.5.3-SNAPSHOT) with AWS
>>> SDK 1.9.37 and everything works.
>>>
>>> Run starting. Expected test count is: 28
>>> KinesisBackedBlockRDDSuite:
>>> Using endpoint URL https://kinesis.eu-west-1.amazonaws.com for creating
>>> Kinesis streams for tests.
>>> - Basic reading from Kinesis
>>> - Read data available in both block manager and Kinesis
>>> - Read data available only in block manager, not in Kinesis
>>> - Read data available only in Kinesis, not in block manager
>>> - Read data available partially in block manager, rest in Kinesis
>>> - Test isBlockValid skips block fetching from block manager
>>> - Test whether RDD is valid after removing blocks from block manager
>>> KinesisStreamSuite:
>>> - KinesisUtils API
>>> - RDD generation
>>> - basic operation
>>> - failure recovery
>>> KinesisReceiverSuite:
>>> - check serializability of SerializableAWSCredentials
>>> - process records including store and checkpoint
>>> - shouldn't store and checkpoint when receiver is stopped
>>> - shouldn't checkpoint when exception occurs during store
>>> - should set checkpoint time to currentTime + checkpoint interval upon
>>> instantiation
>>> - should checkpoint if we have exceeded the checkpoint interval
>>> - shouldn't checkpoint if we have not exceeded the checkpoint interval
>>> - should add to time when advancing checkpoint
>>> - shutdown should checkpoint if the reason is TERMINATE
>>> - shutdown should not checkpoint if the reason is something other than
>>> TERMINATE
>>> - retry success on

Re: Spark streaming with Kinesis broken?

2015-12-11 Thread Brian London
 anywhere else (AFAIK it is not, but in case I missed something let me
> know any good reason to keep the explicit dependency)?
>
> N
>
>
>
> On Fri, Dec 11, 2015 at 6:55 AM, Nick Pentreath <nick.pentre...@gmail.com>
> wrote:
>
>> Yeah also the integration tests need to be specifically run - I would
>> have thought the contributor would have run those tests and also tested the
>> change themselves using live Kinesis :(
>>
>> —
>> Sent from Mailbox <https://www.dropbox.com/mailbox>
>>
>>
>> On Fri, Dec 11, 2015 at 6:18 AM, Burak Yavuz <brk...@gmail.com> wrote:
>>
>>> I don't think the Kinesis tests specifically ran when that was merged
>>> into 1.5.2 :(
>>> https://github.com/apache/spark/pull/8957
>>>
>>> https://github.com/apache/spark/commit/883bd8fccf83aae7a2a847c9a6ca129fac86e6a3
>>>
>>> AFAIK pom changes don't trigger the Kinesis tests.
>>>
>>> Burak
>>>
>>> On Thu, Dec 10, 2015 at 8:09 PM, Nick Pentreath <
>>> nick.pentre...@gmail.com> wrote:
>>>
>>>> Yup also works for me on master branch as I've been testing DynamoDB
>>>> Streams integration. In fact works with latest KCL 1.6.1 also which I was
>>>> using.
>>>>
>>>> So the KCL version does seem like it could be the issue - somewhere
>>>> along the line an exception must be getting swallowed. Though the tests
>>>> should have picked this up? Will dig deeper.
>>>>
>>>> —
>>>> Sent from Mailbox <https://www.dropbox.com/mailbox>
>>>>
>>>>
>>>> On Thu, Dec 10, 2015 at 11:07 PM, Brian London <brianmlon...@gmail.com>
>>>> wrote:
>>>>
>>>>> Yes, it worked in the 1.6 branch as of commit
>>>>> db5165246f2888537dd0f3d4c5a515875c7358ed.  That makes it much less
>>>>> serious of an issue, although it would be nice to know what the root cause
>>>>> is to avoid a regression.
>>>>>
>>>>> On Thu, Dec 10, 2015 at 4:03 PM Burak Yavuz <brk...@gmail.com> wrote:
>>>>>
>>>>>> I've noticed this happening when there were dependency conflicts,
>>>>>> and it is super hard to debug.
>>>>>> It seems that the KinesisClientLibrary version in Spark 1.5.2 is
>>>>>> 1.3.0, but it is 1.2.1 in Spark 1.5.1.
>>>>>> I feel like that seems to be the problem...
>>>>>>
>>>>>> Brian, did you verify that it works with the 1.6.0 branch?
>>>>>>
>>>>>> Thanks,
>>>>>> Burak
>>>>>>
>>>>>> On Thu, Dec 10, 2015 at 11:45 AM, Brian London <
>>>>>> brianmlon...@gmail.com> wrote:
>>>>>>
>>>>>>> Nick's symptoms sound identical to mine.  I should mention that I
>>>>>>> just pulled the latest version from github and it seems to be working
>>>>>>> there.  To reproduce:
>>>>>>>
>>>>>>>
>>>>>>>    1. Download Spark 1.5.2 from http://spark.apache.org/downloads.html
>>>>>>>    2. build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean package
>>>>>>>    3. build/mvn -Pkinesis-asl -DskipTests clean package
>>>>>>>    4. Then run simultaneously:
>>>>>>>       1. bin/run-example streaming.KinesisWordCountASL [Kinesis app name] [Kinesis stream name] [endpoint URL]
>>>>>>>       2. bin/run-example streaming.KinesisWordProducerASL [Kinesis stream name] [endpoint URL] 100 10
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Dec 10, 2015 at 2:05 PM Jean-Baptiste Onofré <
>>>>>>> j...@nanthrax.net> wrote:
>>>>>>>
>>>>>>>> Hi Nick,
>>>>>>>>
>>>>>>>> Just to be sure: don't you see some ClassCastException in the log ?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Regards
>>>>>>>> JB
>>>>>>>>
>>>>>>>> On 12/10/2015 07:56 PM, Nick Pentreath wrote:
>>>>>>>> > Could you provide an example / test case and more detail on what
>>>>>>>> issue
>>>>>>>> > you're facing?
>>>>>>>> >
>>>>>>>> > I've just tested a simple program reading from a dev Kinesis
>>>>>>>> stream and
>>>>>>>> > using stream.print() to show the records, and it works under
>>>>>>>> 1.5.1 but
>>>>>>>> > doesn't appear to be working under 1.5.2.
>>>>>>>> >
>>>>>>>> > UI for 1.5.2:
>>>>>>>> >
>>>>>>>> > Inline image 1
>>>>>>>> >
>>>>>>>> > UI for 1.5.1:
>>>>>>>> >
>>>>>>>> > Inline image 2
>>>>>>>> >
>>>>>>>> > On Thu, Dec 10, 2015 at 5:50 PM, Brian London <
>>>>>>>> brianmlon...@gmail.com
>>>>>>>> > <mailto:brianmlon...@gmail.com>> wrote:
>>>>>>>> >
>>>>>>>> > Has anyone managed to run the Kinesis demo in Spark 1.5.2?
>>>>>>>> The
>>>>>>>> > Kinesis ASL that ships with 1.5.2 appears to not work for me
>>>>>>>> > although 1.5.1 is fine. I spent some time with Amazon earlier
>>>>>>>> in the
>>>>>>>> > week and the only thing we could do to make it work is to
>>>>>>>> change the
>>>>>>>> > version to 1.5.1.  Can someone please attempt to reproduce
>>>>>>>> before I
>>>>>>>> > open a JIRA issue for it?
>>>>>>>> >
>>>>>>>> >
>>>>>>>>
>>>>>>>> --
>>>>>>>> Jean-Baptiste Onofré
>>>>>>>> jbono...@apache.org
>>>>>>>> http://blog.nanthrax.net
>>>>>>>> Talend - http://www.talend.com
>>>>>>>>
>>>>>>
>>>>
>>>
>>
>


Spark streaming with Kinesis broken?

2015-12-10 Thread Brian London
Has anyone managed to run the Kinesis demo in Spark 1.5.2?  The Kinesis ASL
that ships with 1.5.2 appears to not work for me although 1.5.1 is fine. I
spent some time with Amazon earlier in the week and the only thing we could
do to make it work is to change the version to 1.5.1.  Can someone please
attempt to reproduce before I open a JIRA issue for it?


Re: SparkSQL integration issue with AWS S3a

2015-12-31 Thread Brian London
Since you're running in standalone mode, can you try it using Spark 1.5.1
please?
On Thu, Dec 31, 2015 at 9:09 AM Steve Loughran 
wrote:

>
> > On 30 Dec 2015, at 19:31, KOSTIANTYN Kudriavtsev <
> kudryavtsev.konstan...@gmail.com> wrote:
> >
> > Hi Jerry,
> >
> > I want to run different jobs on different S3 buckets - different AWS
> > creds - on the same instances. Could you shed some light on whether it's
> > possible to achieve this with hdfs-site?
> >
> > Thank you,
> > Konstantin Kudryavtsev
> >
>
>
> The Hadoop s3a client doesn't have much (anything?) in the way of
> support for multiple logins.
>
> It'd be possible to do it by hand (create a Hadoop Configuration object,
> fill it with the credentials, and set "fs.s3a.impl.disable.cache" = true to
> make sure you aren't getting a cached filesystem instance).
>
> I don't know how you'd hook that up to Spark jobs. Maybe try setting the
> credentials and that fs.s3a.impl.disable.cache flag in your Spark context
> to see if together they get picked up.
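
For reference, a sketch of the by-hand approach Steve describes: a fresh
Hadoop Configuration carrying per-job credentials, with filesystem caching
disabled. The bucket and keys are placeholders:

    import java.net.URI
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    val conf = new Configuration()
    conf.set("fs.s3a.access.key", "<ACCESS_KEY_FOR_THIS_JOB>")
    conf.set("fs.s3a.secret.key", "<SECRET_KEY_FOR_THIS_JOB>")
    // Avoid being handed a cached FileSystem built with someone else's creds.
    conf.setBoolean("fs.s3a.impl.disable.cache", true)
    val fs = FileSystem.get(new URI("s3a://my-bucket/"), conf)
    fs.listStatus(new Path("s3a://my-bucket/data/")).foreach(s => println(s.getPath))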


DStream keyBy

2015-12-30 Thread Brian London
RDD has a method keyBy[K](f: T=>K) that acts as an alias for map(x =>
(f(x), x)) and is useful for generating pair RDDs.  Is there a reason this
method doesn't exist on DStream?  It's a fairly heavily used method and
allows clearer code than the more verbose map.
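
In the meantime it can be added with an implicit class; a minimal sketch
(the enrichment name is mine):

    import org.apache.spark.streaming.dstream.DStream

    object DStreamOps {
      implicit class KeyByOps[T](val ds: DStream[T]) {
        // Same semantics as RDD.keyBy: pair each element with its derived key.
        def keyBy[K](f: T => K): DStream[(K, T)] = ds.map(x => (f(x), x))
      }
    }

    // Usage: import DStreamOps._ ; then someDStream.keyBy(record => record.hashCode)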


map operation clears custom partitioner

2016-02-22 Thread Brian London
It appears that when a custom partitioner is applied in a groupBy
operation, it is not propagated through subsequent non-shuffle operations.
Is this intentional? Is there any way to carry custom partitioning through
maps?

I've uploaded a gist that exhibits the behavior.
https://gist.github.com/BrianLondon/c3c3355d1971971f3ec6
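
For illustration, a minimal sketch of the behavior (assuming a SparkContext
named sc), including mapValues, which keeps the partitioner because it
cannot change keys:

    import org.apache.spark.HashPartitioner

    val rdd = sc.parallelize(Seq((1, "a"), (2, "b"))).partitionBy(new HashPartitioner(4))
    // map may change keys, so Spark drops the partitioner:
    println(rdd.map { case (k, v) => (k, v.toUpperCase) }.partitioner)  // None
    // mapValues cannot change keys, so the partitioner is preserved:
    println(rdd.mapValues(_.toUpperCase).partitioner)  // Some(HashPartitioner@...)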


Re: updateStateByKey not persisting in Spark 1.5.1

2016-01-21 Thread Brian London
Thanks. It looks like extending my batch duration to 7 seconds is a
work-around.  I'd like to build a check for the lack of checkpointing in
our integration tests.  Is there a way to parse the DAG at runtime?
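
One idea I'm considering for that check (a sketch, assuming a DStream named
stream; the depth threshold is arbitrary): inspect each batch RDD's lineage
via toDebugString, which keeps growing when checkpointing is broken:

    stream.foreachRDD { rdd =>
      // Lineage depth grows every batch if the state RDD is never checkpointed.
      val lineageDepth = rdd.toDebugString.lines.size
      require(lineageDepth < 100, s"lineage depth $lineageDepth; checkpointing may be broken")
    }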

On Wed, Jan 20, 2016 at 2:01 PM Ted Yu <yuzhih...@gmail.com> wrote:

> This is related:
>
> SPARK-6847
>
> FYI
>
> On Wed, Jan 20, 2016 at 7:55 AM, Brian London <brianmlon...@gmail.com>
> wrote:
>
>> I'm running a streaming job that has two calls to updateStateByKey.  When
>> run in standalone mode both calls to updateStateByKey behave as expected.
>> When run on a cluster, however, it appears that the first call is not being
>> checkpointed as shown in this DAG image:
>>
>> http://i.imgur.com/zmQ8O2z.png
>>
>> The middle column continues to grow one level deeper every batch until I
>> get a stack overflow error.  I'm guessing it's a problem of the stateRDD not
>> being persisted, but I can't imagine why they wouldn't be.  I thought
>> updateStateByKey was supposed to just handle that for you internally.
>>
>> Any ideas?
>>
>> I'll post stack trace excerpts of the stack overflow below in case anyone
>> is interested:
>>
>> Job aborted due to stage failure: Task 7 in stage 195811.0 failed 4
>> times, most recent failure: Lost task 7.3 in stage 195811.0 (TID 213529,
>> ip-10-168-177-216.ec2.internal): java.lang.StackOverflowError at
>> java.lang.Exception.<init>(Exception.java:102) at
>> java.lang.ReflectiveOperationException.<init>(ReflectiveOperationException.java:89)
>> at
>> java.lang.reflect.InvocationTargetException.<init>(InvocationTargetException.java:72)
>> at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606) at
>> java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058) at
>> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1897) at
>> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
>> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350) at
>> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1997) at
>> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1921) at
>> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
>> ...
>>
>> And
>>
>> scala.collection.immutable.$colon$colon in readObject at line 362
>> scala.collection.immutable.$colon$colon in readObject at line 366
>> scala.collection.immutable.$colon$colon in readObject at line 362
>> scala.collection.immutable.$colon$colon in readObject at line 362
>> scala.collection.immutable.$colon$colon in readObject at line 366
>> scala.collection.immutable.$colon$colon in readObject at line 362
>> scala.collection.immutable.$colon$colon in readObject at line 362
>> ...
>>
>>


updateStateByKey not persisting in Spark 1.5.1

2016-01-20 Thread Brian London
I'm running a streaming job that has two calls to updateStateByKey.  When
run in standalone mode both calls to updateStateByKey behave as expected.
When run on a cluster, however, it appears that the first call is not being
checkpointed as shown in this DAG image:

http://i.imgur.com/zmQ8O2z.png

The middle column continues to grow one level deeper every batch until I
get a stack overflow error.  I'm guessing it's a problem of the stateRDD not
being persisted, but I can't imagine why they wouldn't be.  I thought
updateStateByKey was supposed to just handle that for you internally.
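
For reference, here is the shape of the checkpoint wiring as I understand
it (a minimal, self-contained sketch; the source, state function, and
intervals are illustrative, not my actual job):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setAppName("StateCheck").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(7))
    ssc.checkpoint("/tmp/streaming-checkpoints")  // required for stateful ops

    val words = ssc.socketTextStream("localhost", 9999)  // stand-in source
    val updateFn = (values: Seq[Int], state: Option[Int]) =>
      Some(values.sum + state.getOrElse(0))
    val counts = words.map(w => (w, 1)).updateStateByKey(updateFn)
    counts.checkpoint(Seconds(35))  // optionally pin an explicit checkpoint interval
    counts.print()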

Any ideas?

I'll post stack trace excerpts of the stack overflow below in case anyone is
interested:

Job aborted due to stage failure: Task 7 in stage 195811.0 failed 4 times,
most recent failure: Lost task 7.3 in stage 195811.0 (TID 213529,
ip-10-168-177-216.ec2.internal): java.lang.StackOverflowError at
java.lang.Exception.<init>(Exception.java:102) at
java.lang.ReflectiveOperationException.<init>(ReflectiveOperationException.java:89)
at
java.lang.reflect.InvocationTargetException.<init>(InvocationTargetException.java:72)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606) at
java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058) at
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1897) at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350) at
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1997) at
java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1921) at
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
...

And

scala.collection.immutable.$colon$colon in readObject at line 362
scala.collection.immutable.$colon$colon in readObject at line 366
scala.collection.immutable.$colon$colon in readObject at line 362
scala.collection.immutable.$colon$colon in readObject at line 362
scala.collection.immutable.$colon$colon in readObject at line 366
scala.collection.immutable.$colon$colon in readObject at line 362
scala.collection.immutable.$colon$colon in readObject at line 362
...


Re: Using sbt assembly

2016-02-18 Thread Brian London
You need to add the plugin to your plugins.sbt file, not your build.sbt
file.  Also, I don't see a 0.13.9 version on GitHub; 0.14.2 is current.
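
For reference, a project/plugins.sbt sketch with a version that exists:

    addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.2")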

On Thu, Feb 18, 2016 at 9:50 PM Arko Provo Mukherjee <
arkoprovomukher...@gmail.com> wrote:

> Hello,
>
> I am trying to use sbt assembly to generate a fat JAR.
>
> Here is my \project\assembly.sbt file:
> resolvers += Resolver.url("bintray-sbt-plugins",
>   url("http://dl.bintray.com/sbt/sbt-plugin-releases"))(Resolver.ivyStylePatterns)
>
> addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.13.9")
>
>
> However, when I run sbt assembly I get the error:
> [error] (*:update) sbt.ResolveException: unresolved dependency:
> com.eed3si9n#sbt-assembly;0.13.9: not found
>
> Anyone faced this issue before?
>
> Thanks & regards
> Arko

