Re: Write access to wiki

2016-01-11 Thread Mark Grover
Thanks Sean, I will send you the edit on the JIRA to keep email traffic
low :-)
Thanks Shane, comments in line.

On Mon, Jan 11, 2016 at 2:50 PM, shane knapp  wrote:

> > Shane may be able to fill you in on how the Jenkins build is set up.
> >
> mark:  yes.  yes i can.  :)
>
> currently, we have a set of bash scripts and binary packages on our
> jenkins master that can turn a bare centos install in to a jenkins
> worker.
>

Got it, thanks.

>
> i've also been porting over these bash tools in to ansible playbooks,
> but a lot of development stopped on this after we lost our staging
> instance due to a datacenter fire (yes, really) back in september.
> we're getting a new staging instance (master + slaves) set up in the
> next week or so, and THEN i can finish the ansible port.
>

Ok, sounds good. I think it would be great if you could add installing the
'docker-engine' package and starting the 'docker' service in there too. I
was planning to update the playbook if there were one in the apache/spark
repo, but I didn't see one, hence my question.


> these scripts are checked in to a private AMPLab github repo.
>
> does this help?
>

Yes, it does. Thanks!


>
> shane
>


Re: Write access to wiki

2016-01-11 Thread shane knapp
> Shane may be able to fill you in on how the Jenkins build is set up.
>
mark:  yes.  yes i can.  :)

currently, we have a set of bash scripts and binary packages on our
jenkins master that can turn a bare centos install in to a jenkins
worker.

i've also been porting over these bash tools in to ansible playbooks,
but a lot of development stopped on this after we lost our staging
instance due to a datacenter fire (yes, really) back in september.
we're getting a new staging instance (master + slaves) set up in the
next week or so, and THEN i can finish the ansible port.

these scripts are checked in to a private AMPLab github repo.

does this help?

shane




Re: BUILD FAILURE...again?! :( Spark Project External Flume on fire

2016-01-11 Thread Josh Rosen
I've got a hotfix which should address it:
https://github.com/apache/spark/pull/10693



On Sun, Jan 10, 2016 at 11:50 PM, Jacek Laskowski  wrote:

> Hi,
>
> It appears that the last commit [1] broke the build. Is anyone working
> on it? I can when told so.
>
> ➜  spark git:(master) ✗ ./build/mvn -Pyarn -Phadoop-2.6
> -Dhadoop.version=2.7.1 -Dscala-2.11 -Phive -Phive-thriftserver
> -DskipTests clean install
> ...
> [info] Compiling 8 Scala sources and 1 Java source to
> /Users/jacek/dev/oss/spark/external/flume/target/scala-2.11/classes...
> [error]
> /Users/jacek/dev/oss/spark/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala:33:
> object jboss is not a member of package org
> [error] import org.jboss.netty.handler.codec.compression._
> [error]^
> [error]
> /Users/jacek/dev/oss/spark/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala:31:
> object jboss is not a member of package org
> [error] import org.jboss.netty.channel.{ChannelPipeline,
> ChannelPipelineFactory, Channels}
> [error]^
> [error]
> /Users/jacek/dev/oss/spark/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala:32:
> object jboss is not a member of package org
> [error] import
> org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory
> [error]^
> [warn] Class org.jboss.netty.channel.ChannelFactory not found -
> continuing with a stub.
> [warn] Class org.jboss.netty.channel.ChannelFactory not found -
> continuing with a stub.
> [warn] Class org.jboss.netty.channel.ChannelPipelineFactory not found
> - continuing with a stub.
> [warn] Class org.jboss.netty.handler.execution.ExecutionHandler not
> found - continuing with a stub.
> [warn] Class org.jboss.netty.channel.ChannelFactory not found -
> continuing with a stub.
> [warn] Class org.jboss.netty.handler.execution.ExecutionHandler not
> found - continuing with a stub.
> [warn] Class org.jboss.netty.channel.group.ChannelGroup not found -
> continuing with a stub.
> [error]
> /Users/jacek/dev/oss/spark/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala:149:
> not found: type NioServerSocketChannelFactory
> [error]   val channelFactory = new
> NioServerSocketChannelFactory(Executors.newCachedThreadPool(),
> [error]^
> [error]
> /Users/jacek/dev/oss/spark/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala:196:
> not found: type ChannelPipelineFactory
> [error]   class CompressionChannelPipelineFactory extends
> ChannelPipelineFactory {
> [error]   ^
> [error] Class org.jboss.netty.channel.ChannelFactory not found -
> continuing with a stub.
> [error] Class org.jboss.netty.channel.ChannelPipelineFactory not found
> - continuing with a stub.
> [error] Class org.jboss.netty.handler.execution.ExecutionHandler not
> found - continuing with a stub.
> [error]
> /Users/jacek/dev/oss/spark/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala:197:
> not found: type ChannelPipeline
> [error] def getPipeline(): ChannelPipeline = {
> [error]^
> [error]
> /Users/jacek/dev/oss/spark/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala:198:
> not found: value Channels
> [error]   val pipeline = Channels.pipeline()
> [error]  ^
> [error]
> /Users/jacek/dev/oss/spark/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala:199:
> not found: type ZlibEncoder
> [error]   val encoder = new ZlibEncoder(6)
> [error] ^
> [error]
> /Users/jacek/dev/oss/spark/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumePollingInputDStream.scala:29:
> object jboss is not a member of package org
> [error] import
> org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory
> [error]^
> [error]
> /Users/jacek/dev/oss/spark/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumePollingInputDStream.scala:73:
> not found: type NioClientSocketChannelFactory
> [error] new NioClientSocketChannelFactory(channelFactoryExecutor,
> channelFactoryExecutor)
> [error] ^
> [warn] Class org.jboss.netty.channel.ChannelFuture not found -
> continuing with a stub.
> [warn] Class org.jboss.netty.channel.ChannelFactory not found -
> continuing with a stub.
> [warn] Class org.jboss.netty.channel.ChannelFactory not found -
> continuing with a stub.
> [warn] Class org.jboss.netty.channel.ChannelFactory not found -
> continuing with a stub.
> [warn] Class org.jboss.netty.channel.ChannelFactory not found -
> continuing with a stub.
> [warn] Class org.jboss.netty.channel.ChannelUpstreamHandler not found
> - continuing with a stub.
> [error] Class org.jboss.netty.channel.ChannelFactory not found -
> 

Re: XML column not supported in Database

2016-01-11 Thread Gaini Rajeshwar
Hi Reynold,

I did create an issue in JIRA. It is SPARK-12764


On Tue, Jan 12, 2016 at 4:55 AM, Reynold Xin  wrote:

> Can you file a JIRA ticket? Thanks.
>
> The URL is issues.apache.org/jira/browse/SPARK
>
> On Mon, Jan 11, 2016 at 1:44 AM, Gaini Rajeshwar <
> raja.rajeshwar2...@gmail.com> wrote:
>
>> Hi All,
>>
>> I am using PostgreSQL database. I am using the following jdbc call to
>> access a customer table (*customer_id int, event text, country text,
>> content xml)* in my database.
>>
>> *val dataframe1 = sqlContext.load("jdbc", Map("url" ->
>> "jdbc:postgresql://localhost/customerlogs?user=postgres=postgres",
>> "dbtable" -> "customer"))*
>>
>> When I run the above command in spark-shell, I receive the following error.
>>
>> *java.sql.SQLException: Unsupported type *
>> * at
>> org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$getCatalystType(JDBCRDD.scala:103)*
>> * at
>> org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anonfun$1.apply(JDBCRDD.scala:140)*
>> * at
>> org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anonfun$1.apply(JDBCRDD.scala:140)*
>> * at scala.Option.getOrElse(Option.scala:120)*
>> * at
>> org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:139)*
>> * at
>> org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.(JDBCRelation.scala:91)*
>> * at
>> org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:60)*
>> * at
>> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)*
>> * at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)*
>> * at org.apache.spark.sql.SQLContext.load(SQLContext.scala:1153)*
>> * at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:25)*
>> * at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:30)*
>> * at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:32)*
>> * at $iwC$$iwC$$iwC$$iwC$$iwC.(:34)*
>> * at $iwC$$iwC$$iwC$$iwC.(:36)*
>> * at $iwC$$iwC$$iwC.(:38)*
>> * at $iwC$$iwC.(:40)*
>> * at $iwC.(:42)*
>> * at (:44)*
>> * at .(:48)*
>> * at .()*
>> * at .(:7)*
>> * at .()*
>> * at $print()*
>> * at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)*
>> * at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)*
>> * at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)*
>> * at java.lang.reflect.Method.invoke(Method.java:497)*
>> * at
>> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)*
>> * at
>> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)*
>> * at
>> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)*
>> * at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)*
>> * at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)*
>> * at
>> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)*
>> * at
>> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)*
>> * at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)*
>> * at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)*
>> * at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)*
>> * at org.apache.spark.repl.SparkILoop.org
>> $apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)*
>> * at
>> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)*
>> * at
>> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)*
>> * at
>> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)*
>> * at
>> scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)*
>> * at org.apache.spark.repl.SparkILoop.org
>> $apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)*
>> * at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)*
>> * at org.apache.spark.repl.Main$.main(Main.scala:31)*
>> * at org.apache.spark.repl.Main.main(Main.scala)*
>> * at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)*
>> * at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)*
>> * at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)*
>> * at java.lang.reflect.Method.invoke(Method.java:497)*
>> * at
>> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)*
>> * at
>> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)*
>> * at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)*
>> * at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)*
>> 
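
While SPARK-12764 is open, one possible workaround (a hedged sketch, not from
the thread) is to push the xml-to-text cast into PostgreSQL by passing a
subquery as the "dbtable" option, so the JDBC source only ever sees a text
column. The alias customer_src and the password parameter in the URL below are
assumptions:

// workaround sketch: cast the xml column to text on the database side
val dataframe1 = sqlContext.load("jdbc", Map(
  "url" -> "jdbc:postgresql://localhost/customerlogs?user=postgres&password=postgres",
  "dbtable" -> "(SELECT customer_id, event, country, content::text AS content FROM customer) AS customer_src"))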

Write access to wiki

2016-01-11 Thread Mark Grover
Hi all,
May I please get write access to the useful tools wiki page?

I did some investigation related to docker integration tests and want to list
out the pre-requisites required on the machine for those tests to pass, on
that page.

On a related note, I was trying to search for any puppet recipes we
maintain for setting up build slaves. If our Jenkins infra were wiped out,
how do we rebuild the slave?

Thanks in advance!

Mark


Re: BUILD FAILURE...again?! :( Spark Project External Flume on fire

2016-01-11 Thread Jean-Baptiste Onofré

I confirm: I have the same issue.

I tried Josh's PR, but the branch is not found:

git pull https://github.com/JoshRosen/spark netty-hotfix

As the issue is on Flume external, I'm not sure it's related.

Let me take a look and eventually provide a fix.

Regards
JB
--
Jean-Baptiste Onofré
jbono...@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com




Re: BUILD FAILURE...again?! :( Spark Project External Flume on fire

2016-01-11 Thread Jacek Laskowski
Hi,

I've just done a git pull and it worked for me. Looks like
https://github.com/apache/spark/commit/f13c7f8f7dc8766b0a42406b5c3639d6be55cf33
fixed the issue (or something in between).

Thanks for such a quick fix!

p.s. Had time for swimming :-)

Pozdrawiam,
Jacek

Jacek Laskowski | https://medium.com/@jaceklaskowski/
Mastering Apache Spark
==> https://jaceklaskowski.gitbooks.io/mastering-apache-spark/
Follow me at https://twitter.com/jaceklaskowski


On Mon, Jan 11, 2016 at 2:05 PM, Jean-Baptiste Onofré  wrote:
> Heads up: I just updated my local copy, and it looks better now (the build is
> fine so far). I'll keep you posted.
>
> Regards
> JB
>
>
> On 01/11/2016 01:59 PM, Jean-Baptiste Onofré wrote:
>>
>> I confirm: I have the same issue.
>>
>> I tried Josh's PR, but the branch is not found:
>>
>> git pull https://github.com/JoshRosen/spark netty-hotfix
>>
>> As the issue is on Flume external, I'm not sure it's related.
>>
>> Let me take a look and eventually provide a fix.
>>
>> Regards
>> JB
>
>
> --
> Jean-Baptiste Onofré
> jbono...@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>
>




Re: [discuss] dropping Python 2.6 support

2016-01-11 Thread David Chin
FWIW, RHEL 6 still uses Python 2.6, although 2.7.8 and 3.3.2 are available
through Red Hat Software Collections. See:
https://www.softwarecollections.org/en/

I run an academic compute cluster on RHEL 6. We do, however, provide Python
2.7.x and 3.5.x via modulefiles.

On Tue, Jan 5, 2016 at 8:45 AM, Nicholas Chammas  wrote:

> +1
>
> Red Hat supports Python 2.6 on RHEL 5 until 2020, but otherwise yes, Python
> 2.6 is ancient history and the core Python developers stopped supporting it
> in 2013. RHEL 5 is not a good enough reason to continue support for Python
> 2.6 IMO.
>
> We should aim to support Python 2.7 and Python 3.3+ (which I believe we
> currently do).
>
> Nick
>
> On Tue, Jan 5, 2016 at 8:01 AM Allen Zhang  wrote:
>
>> plus 1,
>>
>> we are currently using python 2.7.2 in our production environment.
>>
>>
>>
>>
>>
>> On 2016-01-05 18:11:45, "Meethu Mathew" wrote:
>>
>> +1
>> We use Python 2.7
>>
>> Regards,
>>
>> Meethu Mathew
>>
>> On Tue, Jan 5, 2016 at 12:47 PM, Reynold Xin  wrote:
>>
>>> Does anybody here care about us dropping support for Python 2.6 in Spark
>>> 2.0?
>>>
>>> Python 2.6 is ancient, and is pretty slow in many aspects (e.g. json
>>> parsing) when compared with Python 2.7. Some libraries that Spark depends on
>>> stopped supporting 2.6. We can still convince the library maintainers to
>>> support 2.6, but it will be extra work. I'm curious if anybody still uses
>>> Python 2.6 to run Spark.
>>>
>>> Thanks.
>>>
>>>
>>>
>>


-- 
David Chin, Ph.D.
david.c...@drexel.edu
Sr. Systems Administrator, URCF, Drexel U.
http://www.drexel.edu/research/urcf/
https://linuxfollies.blogspot.com/
+1.215.221.4747 (mobile)
https://github.com/prehensilecode


Python custom partitioning

2016-01-11 Thread Dženan Softić
Hi,

I am trying to implement the Gap statistic on Spark, which aims to determine
the actual number of clusters for KMeans. Given the range of possible Ks (e.g.
up to K = 10), Gap will run KMeans for each K in the range. Since the
computation of KMeans for K=10 takes more time than for K=1,2,3,4 together, I
would like to partition the computation so that K=10 is on one node and
K=1,2,3,4 are together on another node.

As an example, if I want 3 partitions and I have the following Ks
[1,2,3,4,5,6,7,8,9,10], I would like to get [1,2,3,4,5,6] [7,8] [9,10]
after partitionBy(3, partitionFunc).

Any suggestions on how I could implement a partition function for partitionBy
to achieve something like this? I've implemented a partition function that
divides the Ks into buckets based on a greedy algorithm, but I don't see how
to control which element goes into which bucket from the partition function.

Any suggestions would be very helpful.

Thank you.
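
The question is about PySpark's partitionBy(numPartitions, partitionFunc), but
the idea can be sketched in Scala as well (a minimal, hedged sketch; the class
and object names are illustrative and the cost model "cost grows with K" is an
assumption): precompute a K -> partition assignment greedily, then use a custom
Partitioner that only looks the assignment up, so you fully control which K
lands in which partition.

import org.apache.spark.Partitioner

// Partitioner that delegates to a precomputed K -> partition-index map.
class PrecomputedPartitioner(assignment: Map[Int, Int], val numPartitions: Int)
  extends Partitioner {
  // keys are the K values; unknown keys fall back to partition 0
  def getPartition(key: Any): Int = assignment.getOrElse(key.asInstanceOf[Int], 0)
}

object GapPartitioning {
  // Greedy assignment: give the most expensive remaining K to the currently
  // least-loaded partition (here the cost of a K is simply K).
  def assign(ks: Seq[Int], numPartitions: Int): Map[Int, Int] = {
    val load = Array.fill(numPartitions)(0L)
    ks.sortBy(k => -k).map { k =>
      val target = load.zipWithIndex.minBy(_._1)._2
      load(target) += k
      k -> target
    }.toMap
  }
}

// usage, assuming an existing SparkContext sc:
// val ks = 1 to 10
// val assignment = GapPartitioning.assign(ks, 3)
// val byK = sc.parallelize(ks.map(k => (k, k)))
//   .partitionBy(new PrecomputedPartitioner(assignment, 3))
// byK.mapPartitions(iter => ...)   // run KMeans for the Ks in each partition

The same lookup-table trick works in PySpark: build the dict on the driver and
have partitionFunc return the precomputed index for each K.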


Re: Dataset throws: Task not serializable

2016-01-11 Thread Wail Alkowaileet
Hello Michael,

Sorry for the late reply... I was crossing the world the last few days.
I actually tried both the REPL and a Spark app; the reported exception was in
the app.

Unfortunately the data I have is not for distribution... sorry about that.
I saw it has been resolved. I will try to reproduce the same error with
dummy data.

Thanks!

On Thu, Jan 7, 2016 at 2:03 PM, Michael Armbrust 
wrote:

> Were you running in the REPL?
>
> On Thu, Jan 7, 2016 at 10:34 AM, Michael Armbrust 
> wrote:
>
>> Thanks for providing a great description.  I've opened
>> https://issues.apache.org/jira/browse/SPARK-12696
>>
>> I'm actually getting a different error (running in notebooks though).
>> Something seems wrong either way.
>>
>>>
>>> *P.S* mapping by name with case classes doesn't work if the order of
>>> the fields of a case class doesn't match with the order of the DataFrame's
>>> schema.
>>
>>
>> We have tests for reordering; can you provide a smaller reproduction of this
>> problem?
>>
>> On Wed, Jan 6, 2016 at 10:27 PM, Wail Alkowaileet 
>> wrote:
>>
>>> Hey,
>>>
>>> I got an error when trying to map a Dataset (df.as[CLASS]) when I have
>>> some nested case classes.
>>> I'm not sure if it's a bug... or I did something wrong... or I missed
>>> some configuration.
>>>
>>>
>>> I did the following:
>>>
>>> *input snapshot*
>>>
>>> {
>>>   "count": "string",
>>>   "name": [{
>>> "addr_no": "string",
>>> "dais_id": "string",
>>> "display_name": "string",
>>> "first_name": "string",
>>> "full_name": "string",
>>> "last_name": "string",
>>> "r_id": "string",
>>> "reprint": "string",
>>> "role": "string",
>>> "seq_no": "string",
>>> "suffix": "string",
>>> "wos_standard": "string"
>>>   }]
>>> }
>>>
>>> *Case classes:*
>>>
>>> case class listType1(addr_no:String, dais_id:String, display_name:String, 
>>> first_name:String, full_name:String, last_name:String, r_id:String, 
>>> reprint:String, role:String, seq_no:String, suffix:String, 
>>> wos_standard:String)
>>> case class DatasetType1(count:String, name:Array[listType1])
>>>
>>> *Schema:*
>>> root
>>>  |-- count: string (nullable = true)
>>>  |-- name: array (nullable = true)
>>>  ||-- element: struct (containsNull = true)
>>>  |||-- addr_no: string (nullable = true)
>>>  |||-- dais_id: string (nullable = true)
>>>  |||-- display_name: string (nullable = true)
>>>  |||-- first_name: string (nullable = true)
>>>  |||-- full_name: string (nullable = true)
>>>  |||-- last_name: string (nullable = true)
>>>  |||-- r_id: string (nullable = true)
>>>  |||-- reprint: string (nullable = true)
>>>  |||-- role: string (nullable = true)
>>>  |||-- seq_no: string (nullable = true)
>>>  |||-- suffix: string (nullable = true)
>>>  |||-- wos_standard: string (nullable = true)
>>>
>>> *Scala code:*
>>>
>>> import sqlContext.implicits._
>>>
>>> val ds = df.as[DatasetType1]
>>>
>>> //Taking first() works fine
>>> println(ds.first().count)
>>>
>>> //map() then first throws exception
>>> println(ds.map(x => x.count).first())
>>>
>>>
>>> *Exception Message:*
>>> Exception in thread "main" org.apache.spark.SparkException: Task not
>>> serializable
>>> at
>>> org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
>>> at
>>> org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
>>> at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
>>> at org.apache.spark.SparkContext.clean(SparkContext.scala:2055)
>>> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1857)
>>> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
>>> at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
>>> at
>>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>>> at
>>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
>>> at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
>>> at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
>>> at org.apache.spark.sql.Dataset.collect(Dataset.scala:668)
>>> at main.main$.testAsterixRDDWithSparkSQL(main.scala:63)
>>> at main.main$.main(main.scala:70)
>>> at main.main.main(main.scala)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:497)
>>> at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
>>> Caused by: java.io.NotSerializableException:
>>> scala.reflect.internal.Symbols$PackageClassSymbol

Re: [build system] jenkins wedged, had to do a quick restart

2016-01-11 Thread shane knapp
...aaand we're back up and building.

shane

On Mon, Jan 11, 2016 at 9:47 AM, shane knapp  wrote:
> jenkins looked to be wedged, and nothing was showing up in the logs.
> i tried a restart, and am still looking in to the problem.
>
> we should be back up and building shortly.  sorry for the inconvenience.
>
> shane




[build system] jenkins wedged, had to do a quick restart

2016-01-11 Thread shane knapp
jenkins looked to be wedged, and nothing was showing up in the logs.
i tried a restart, and am still looking in to the problem.

we should be back up and building shortly.  sorry for the inconvenience.

shane
