[jira] [Created] (ZEPPELIN-1833) Number of active connections between ZeppelinServer and RemoteInterpreterServer keeps growing

2016-12-16 Thread Prasad Wagle (JIRA)
Prasad Wagle created ZEPPELIN-1833:
--

 Summary: Number of active connections between ZeppelinServer and 
RemoteInterpreterServer keeps growing
 Key: ZEPPELIN-1833
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1833
 Project: Zeppelin
  Issue Type: Bug
Reporter: Prasad Wagle


We have noticed that the number of active connections between ZeppelinServer 
and the jdbc RemoteInterpreterServer keeps growing. We have seen it go as high 
as 50K.
$ netstat | grep 'localhost:49974' | wc
  53374  320244 4750286

ip_local_port_range is 61000-32768 = 28232 ports.
$ sysctl net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768 61000

So there can be at most 28232*2 = 56464 active connections. After this point, 
the server fails with:

ERROR [2016-12-05 18:00:22,528] ({pool-1-thread-25} Job.java[run]:189) - Job failed
org.apache.zeppelin.interpreter.InterpreterException: org.apache.zeppelin.interpreter.InterpreterException: org.apache.thrift.transport.TTransportException: java.net.NoRouteToHostException: Cannot assign requested address
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:250)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
    at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:327)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
    at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:328)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.zeppelin.interpreter.InterpreterException: org.apache.thrift.transport.TTransportException: java.net.NoRouteToHostException: Cannot assign requested address
    at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:53)
    at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
    at org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
    at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:184)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:248)
    ... 11 more
Caused by: org.apache.thrift.transport.TTransportException: java.net.NoRouteToHostException: Cannot assign requested address
    at org.apache.thrift.transport.TSocket.open(TSocket.java:187)
    at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
    ... 18 more
Caused by: java.net.NoRouteToHostException: Cannot assign requested address
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
    ... 19 more
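
For context, here is a minimal, self-contained sketch of the borrow/return discipline around a pooled client (commons-pool2, which the stack trace above shows on the connection path). It is an illustration, not Zeppelin's actual code: the Client class stands in for the Thrift RemoteInterpreterService.Client, and one common way a connection count grows like this is a borrowed client that is not returned on every code path.

import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;

public class PooledClientDemo {

  // Stand-in for the Thrift client; Zeppelin's real type is RemoteInterpreterService.Client.
  static class Client {
    void interpret(String code) { System.out.println("interpret: " + code); }
  }

  // In the real code, create() is where the socket to the RemoteInterpreterServer is opened.
  static class ClientFactory extends BasePooledObjectFactory<Client> {
    @Override public Client create() { return new Client(); }
    @Override public PooledObject<Client> wrap(Client c) { return new DefaultPooledObject<>(c); }
  }

  public static void main(String[] args) throws Exception {
    GenericObjectPool<Client> pool = new GenericObjectPool<>(new ClientFactory());
    pool.setMaxTotal(10);              // cap the number of live clients/connections

    Client client = pool.borrowObject();
    try {
      client.interpret("select 1");
    } finally {
      pool.returnObject(client);       // always return, even on failure, or connections pile up
    }
    pool.close();                      // destroys idle clients once the pool is no longer needed
  }
}

Capping the pool with setMaxTotal also keeps the number of live connections bounded regardless of load; whether that is the right fix here depends on where the leak actually is.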





[jira] [Created] (ZEPPELIN-1822) Implement access to Google sheets

2016-12-15 Thread Prasad Wagle (JIRA)
Prasad Wagle created ZEPPELIN-1822:
--

 Summary: Implement access to Google sheets
 Key: ZEPPELIN-1822
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1822
 Project: Zeppelin
  Issue Type: New Feature
Reporter: Prasad Wagle


http://www.tableau.com/about/blog/2016/5/connect-directly-your-google-sheets-tableau-10-53954





[jira] [Created] (ZEPPELIN-1755) CronJob.execute thread hangs because of a race condition

2016-12-05 Thread Prasad Wagle (JIRA)
Prasad Wagle created ZEPPELIN-1755:
--

 Summary: CronJob.execute thread hangs because of a race condition
 Key: ZEPPELIN-1755
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1755
 Project: Zeppelin
  Issue Type: Bug
Reporter: Prasad Wagle
Priority: Minor


If a paragraph is created while the scheduled report is executing, its status is 
READY. This causes the CronJob.execute thread to hang in the "while 
(!note.isTerminated())" loop.

"QuartzScheduler_Worker-2" #42 prio=5 os_prio=31 tid=0x7faa9c7b5000 
nid=0x6f13 waiting on condition [0x00012e2aa000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at org.apache.zeppelin.notebook.Notebook$CronJob.execute(Notebook.java: 
836)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool. 
java:573)
- locked <0x000780295f10> (a java.lang.Object)
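
One way to keep the cron thread from waiting forever is to bound the wait. The sketch below is a hypothetical illustration (the helper and timeout are assumptions, not the actual fix); the real fix likely also needs to address why a paragraph created mid-run stays READY.

import java.util.function.BooleanSupplier;

// Hypothetical helper: poll a condition such as note::isTerminated, but give up
// after a timeout instead of sleeping in a loop indefinitely.
final class BoundedWait {

  static boolean awaitTermination(BooleanSupplier isTerminated, long timeoutMillis)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (!isTerminated.getAsBoolean()) {
      if (System.currentTimeMillis() >= deadline) {
        return false;                 // caller can log a warning and abort the cron run
      }
      Thread.sleep(1_000);
    }
    return true;
  }
}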





[jira] [Created] (ZEPPELIN-1685) High memory usage / memory leaks in ZeppelinServer and RemoteInterpreterServer processes

2016-11-18 Thread Prasad Wagle (JIRA)
Prasad Wagle created ZEPPELIN-1685:
--

 Summary: High memory usage / memory leaks in ZeppelinServer and 
RemoteInterpreterServer processes
 Key: ZEPPELIN-1685
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1685
 Project: Zeppelin
  Issue Type: Bug
Reporter: Prasad Wagle


After a week of operation we have seen high memory usage for the ZeppelinServer 
and RemoteInterpreterServer processes.

ZeppelinServer:   VSZ: 6.6 GB  RSS: 4.3 GB
RemoteInterpreterServer for md:  VSZ: 6.3 GB   RSS: 2.6 GB

VSZ = virtual memory size of the entire process
RSS = resident set size, the non-swapped physical memory that the process has used
(ps reports both in KiB; the values above are shown in GB)

If this continues, the JVM crashes with the following error:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 877658112 bytes for committing reserved memory.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit

The workaround is to restart the processes periodically.
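
As an aside, a minimal Linux-only sketch (not part of Zeppelin) for tracking this growth without restarting: read VmSize/VmRSS from /proc/self/status on a schedule and log them.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Hypothetical monitoring sketch: print this JVM's VmSize (~VSZ) and VmRSS (~RSS),
// both reported by the kernel in kB. Run it periodically to chart the growth.
public class RssLogger {
  public static void main(String[] args) throws IOException {
    try (Stream<String> lines = Files.lines(Paths.get("/proc/self/status"))) {
      lines.filter(line -> line.startsWith("VmSize") || line.startsWith("VmRSS"))
           .forEach(System.out::println);
    }
  }
}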





[jira] [Created] (ZEPPELIN-1674) Cannot lock stacked bar graphs in zeppelin, defaults to grouped with a page refresh

2016-11-16 Thread Prasad Wagle (JIRA)
Prasad Wagle created ZEPPELIN-1674:
--

 Summary: Cannot lock stacked bar graphs in zeppelin, defaults to 
grouped with a page refresh
 Key: ZEPPELIN-1674
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1674
 Project: Zeppelin
  Issue Type: Bug
  Components: GUI
Reporter: Prasad Wagle
Priority: Minor








[jira] [Created] (ZEPPELIN-1513) Paragraph text editor is very slow when number of notes is large (> 1000)

2016-09-30 Thread Prasad Wagle (JIRA)
Prasad Wagle created ZEPPELIN-1513:
--

 Summary: Paragraph text editor is very slow when number of notes 
is large (> 1000)
 Key: ZEPPELIN-1513
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1513
 Project: Zeppelin
  Issue Type: Bug
  Components: front-end
Affects Versions: 0.7.0
Reporter: Prasad Wagle








[jira] [Created] (ZEPPELIN-1420) java.util.ConcurrentModificationException caused by calling remove inside foreach loop

2016-09-08 Thread Prasad Wagle (JIRA)
Prasad Wagle created ZEPPELIN-1420:
--

 Summary: java.util.ConcurrentModificationException caused by 
calling remove inside foreach loop
 Key: ZEPPELIN-1420
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1420
 Project: Zeppelin
  Issue Type: Bug
Reporter: Prasad Wagle


https://github.com/apache/zeppelin/blob/33ddc00c637d043a31f3a7f2e861f58f2c1ebc5a/zeppelin-zengine/src/main/java/org/apache/zeppelin/interpreter/InterpreterFactory.java#L1021

ERROR [2016-08-25 14:01:22,273] ({qtp237351678-16} NotebookServer.java[onMessage]:271) - Can't handle message
java.util.ConcurrentModificationException
    at java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
    at java.util.LinkedList$ListItr.next(LinkedList.java:888)
    at org.apache.zeppelin.interpreter.InterpreterFactory.getInterpreterSettings(InterpreterFactory.java:1021)
    at org.apache.zeppelin.socket.NotebookServer.sendAllAngularObjects(NotebookServer.java:1442)
    at org.apache.zeppelin.socket.NotebookServer.sendNote(NotebookServer.java:563)
    at org.apache.zeppelin.socket.NotebookServer.onMessage(NotebookServer.java:191)
    at org.apache.zeppelin.socket.NotebookSocket.onWebSocketText(NotebookSocket.java:67)
    at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextMessage(JettyListenerEventDriver.java:128)
    at org.eclipse.jetty.websocket.common.message.SimpleTextMessage.messageComplete(SimpleTextMessage.java:69)
    at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.appendMessage(AbstractEventDriver.java:65)
    at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextFrame(JettyListenerEventDriver.java:122)
    at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.incomingFrame(AbstractEventDriver.java:161)
    at org.eclipse.jetty.websocket.common.WebSocketSession.incomingFrame(WebSocketSession.java:309)
    at org.eclipse.jetty.websocket.common.extensions.ExtensionStack.incomingFrame(ExtensionStack.java:214)
    at org.eclipse.jetty.websocket.common.Parser.notifyFrame(Parser.java:220)
    at org.eclipse.jetty.websocket.common.Parser.parse(Parser.java:258)
    at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.readParse(AbstractWebSocketConnection.java:632)
    at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.onFillable(AbstractWebSocketConnection.java:480)
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
    at java.lang.Thread.run(Thread.java:745)
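
For reference, a minimal, self-contained sketch of the anti-pattern named in the summary and the standard fix (illustrative names only, not Zeppelin's actual code): structurally modifying a LinkedList inside a for-each loop makes the hidden iterator throw ConcurrentModificationException, while removing through an explicit Iterator (or removeIf) does not.

import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class RemoveInForeachDemo {
  public static void main(String[] args) {
    // Anti-pattern: structural modification during a for-each loop makes the
    // hidden LinkedList iterator throw ConcurrentModificationException.
    List<String> settings = new LinkedList<>(Arrays.asList("spark", "jdbc", "md"));
    try {
      for (String s : settings) {
        if (s.equals("spark")) {
          settings.remove(s);
        }
      }
    } catch (ConcurrentModificationException e) {
      System.out.println("caught: " + e);
    }

    // Fix: remove through the iterator itself (or use settings.removeIf(...)).
    settings = new LinkedList<>(Arrays.asList("spark", "jdbc", "md"));
    for (Iterator<String> it = settings.iterator(); it.hasNext(); ) {
      if (it.next().equals("spark")) {
        it.remove();
      }
    }
    System.out.println(settings);  // [jdbc, md]
  }
}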





[jira] [Created] (ZEPPELIN-1418) Making bookmarks for zeppelin notes work after authentication

2016-09-07 Thread Prasad Wagle (JIRA)
Prasad Wagle created ZEPPELIN-1418:
--

 Summary: Making bookmarks for zeppelin notes work after 
authentication
 Key: ZEPPELIN-1418
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1418
 Project: Zeppelin
  Issue Type: Improvement
  Components: front-end
Reporter: Prasad Wagle
Priority: Minor


Many users in my company bookmark Zeppelin notes, e.g. 
http://localhost:8080/#/notebook/2A94M5J1Z. If users are authenticated, the 
bookmarks work great. If users are not authenticated, we send them to an 
authentication server that has a mechanism to remember the original query 
string and redirect users to the original note after authentication. Since 
Zeppelin note URLs have # in them, the note name is not sent to the server, so 
after authentication users are sent to the home page.

[~corneadoug] wrote:
However, redirecting to the right page can probably be done on the front-end 
side (cf. 
http://nadeemkhedr.com/redirect-to-the-original-requested-page-after-login-using-angularjs/)





[jira] [Created] (ZEPPELIN-1273) d3.format not called for large negative numbers

2016-08-02 Thread Prasad Wagle (JIRA)
Prasad Wagle created ZEPPELIN-1273:
--

 Summary: d3.format not called for large negative numbers
 Key: ZEPPELIN-1273
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1273
 Project: Zeppelin
  Issue Type: Bug
  Components: front-end
Reporter: Prasad Wagle
Assignee: Prasad Wagle
Priority: Minor








[jira] [Created] (ZEPPELIN-1246) In JDBCInterpreter.getScheduler, use getMaxConcurrentConnection instead of hardcoding maxConcurrency to 10

2016-07-28 Thread Prasad Wagle (JIRA)
Prasad Wagle created ZEPPELIN-1246:
--

 Summary: In JDBCInterpreter.getScheduler, use 
getMaxConcurrentConnection instead of hardcoding maxConcurrency to 10
 Key: ZEPPELIN-1246
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1246
 Project: Zeppelin
  Issue Type: Improvement
  Components: Interpreters
Reporter: Prasad Wagle
Assignee: Prasad Wagle
Priority: Minor
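
For illustration, a small self-contained sketch of the idea (hypothetical: the property name below is an assumption; in Zeppelin the value would come from getMaxConcurrentConnection and feed the parallel scheduler that getScheduler builds): derive the parallelism from configuration instead of a literal 10.

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConfigurableConcurrencyDemo {

  // Hypothetical property name; the real interpreter setting may differ.
  static int maxConcurrency(Properties props) {
    return Integer.parseInt(props.getProperty("zeppelin.jdbc.concurrent.max_connection", "10"));
  }

  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty("zeppelin.jdbc.concurrent.max_connection", "25");

    // Parallelism comes from configuration rather than a hardcoded constant.
    ExecutorService pool = Executors.newFixedThreadPool(maxConcurrency(props));
    System.out.println("running with " + maxConcurrency(props) + " concurrent connections");
    pool.shutdown();
  }
}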








[jira] [Created] (ZEPPELIN-1084) Improving web performance by reducing number of broadcasts

2016-06-29 Thread Prasad Wagle (JIRA)
Prasad Wagle created ZEPPELIN-1084:
--

 Summary: Improving web performance by reducing number of broadcasts
 Key: ZEPPELIN-1084
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1084
 Project: Zeppelin
  Issue Type: Improvement
Reporter: Prasad Wagle


-- Forwarded message --
From: Johnny W. 
Date: Tue, Jun 28, 2016 at 11:41 PM
Subject: Re: Improving web performance by reducing number of broadcasts
To: us...@zeppelin.apache.org, dev@zeppelin.apache.org


Thanks, Prasad! This is very helpful. The version we are using already includes 
Jetty 9 and ZEPPELIN-820.

However, we still quite frequently see web hangs caused by broadcasting 
updateNote. I may remove the synchronized block on noteSocketMap as a temporary 
fix, but I am wondering whether there is a better solution.

+ zeppelin-dev, since this issue may significantly limit the scalability of 
Zeppelin if there is a bad connection. One potential optimization: instead of 
locking the whole map, use fine-grained locks on map entries.

Best,
Johnny
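
Below is a minimal sketch of the fine-grained locking idea from Johnny's reply above (hypothetical; the class and method names are not Zeppelin's actual NotebookServer code): keep a per-note socket list in a ConcurrentHashMap so a broadcast never holds a lock on the whole map.

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class NoteBroadcaster {

  // Stand-in for a websocket connection; Zeppelin's real class is NotebookSocket.
  public interface Socket {
    void send(String message);
  }

  // One list per note instead of one synchronized HashMap: a slow send for one
  // note no longer blocks broadcasts (or subscriptions) for every other note.
  private final Map<String, List<Socket>> noteSocketMap = new ConcurrentHashMap<>();

  public void subscribe(String noteId, Socket socket) {
    noteSocketMap.computeIfAbsent(noteId, id -> new CopyOnWriteArrayList<>()).add(socket);
  }

  public void broadcast(String noteId, String message) {
    List<Socket> sockets = noteSocketMap.getOrDefault(noteId, Collections.emptyList());
    for (Socket socket : sockets) {   // iterates a snapshot; no map-wide lock is held
      socket.send(message);
    }
  }
}

A slow socket can still stall its own note's broadcast, so bounded or asynchronous writes would be the complementary fix; this sketch only removes the map-wide lock.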




On Tue, Jun 28, 2016 at 4:34 AM, Prasad Wagle  wrote:
Hi Johnny,

What version of the server are you using?

You may be interested in the following:

Email thread discussing zeppelin server hangs and reducing websocket connections
From the thread: "I removed synchronized (noteSocketMap) from broadcast so that 
one bad socket does not hang the server." This change helps with performance as 
well and hasn't caused any problems. However, this is not a long-term solution 
and we need to find a better one.

Jira issue: Reduce websocket communication by unicasting instead of 
broadcasting note list (https://issues.apache.org/jira/browse/ZEPPELIN-820)

Prasad

On Mon, Jun 20, 2016 at 5:04 PM, Johnny W.  wrote:
Hi zeppelin-users,

This is my first email to the top-level mailing list. Congratulations on the 
graduation!

We are hitting some performance issues when multiple users are connected to the 
Zeppelin server. From the stack trace, many of the connections are blocked on a 
HashMap, which is locked by 
org.apache.zeppelin.socket.NotebookServer.broadcastNote.

Our largest notebook is around 800K, and there are around 10-20 connections to 
the Zeppelin server. I think the cause is that we are broadcasting a large 
amount of data to multiple users, and some slow connections hang the whole web 
interface.

Is there any way to reduce the number of broadcasts to improve the web 
performance? It is fine for us to refresh and get updates. I've attached the 
full stack trace of this issue as well.

Thanks!

Johnny

Blocking Thread:
--
"qtp1874598090-2478" prio=10 tid=0x7f2fb0003800 nid=0x3373 waiting on 
condition [0x7f329ebe9000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x000704e15db0> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at 
org.eclipse.jetty.util.SharedBlockingCallback$Blocker.block(SharedBlockingCallback.java:219)
at 
org.eclipse.jetty.websocket.common.BlockingWriteCallback$WriteBlocker.block(BlockingWriteCallback.java:83)
at 
org.eclipse.jetty.websocket.common.WebSocketRemoteEndpoint.blockingWrite(WebSocketRemoteEndpoint.java:107)
at 
org.eclipse.jetty.websocket.common.WebSocketRemoteEndpoint.sendString(WebSocketRemoteEndpoint.java:387)
at 
org.apache.zeppelin.socket.NotebookSocket.send(NotebookSocket.java:69)
at 
org.apache.zeppelin.socket.NotebookServer.broadcast(NotebookServer.java:304)
- locked <0x0007006b6100> (a java.util.HashMap)
at 
org.apache.zeppelin.socket.NotebookServer.broadcastNote(NotebookServer.java:384)
at 
org.apache.zeppelin.socket.NotebookServer.updateNote(NotebookServer.java:492)
at 
org.apache.zeppelin.socket.NotebookServer.onMessage(NotebookServer.java:181)
at 
org.apache.zeppelin.socket.NotebookSocket.onWebSocketText(NotebookSocket.java:56)
at 
org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextMessage(JettyListenerEventDriver.java:128)
at 
org.eclipse.jetty.websocket.common.message.SimpleTextMessage.messageComplete(SimpleTextMessage.java:69)
at 
org.eclipse.jetty.websocket.common.events.AbstractEventDriver.appendMessage(AbstractEventDriver.java:65)
at 
org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextFrame(JettyListenerEventDriver.java:122)
at 
org.eclipse.jetty.websocket.common.events.AbstractEventDriver.incomingFrame(AbstractEventDriver.java:161)
at 

[jira] [Created] (ZEPPELIN-1006) Scalding documentation update

2016-06-14 Thread Prasad Wagle (JIRA)
Prasad Wagle created ZEPPELIN-1006:
--

 Summary: Scalding documentation update
 Key: ZEPPELIN-1006
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1006
 Project: Zeppelin
  Issue Type: Bug
  Components: documentation
Reporter: Prasad Wagle
Assignee: Prasad Wagle
Priority: Trivial








[jira] [Created] (ZEPPELIN-972) Remove scalding profile and include it in the module list

2016-06-07 Thread Prasad Wagle (JIRA)
Prasad Wagle created ZEPPELIN-972:
-

 Summary: Remove scalding profile and include it in the module list
 Key: ZEPPELIN-972
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-972
 Project: Zeppelin
  Issue Type: Task
Reporter: Prasad Wagle
Assignee: Prasad Wagle
Priority: Minor


From the comment in https://github.com/apache/incubator-zeppelin/pull/917:

Removing the scalding profile and including the interpreter in the module list 
means the release script creates a binary package with the scalding interpreter 
included. Therefore, we need to take care of a few more things for the binary 
package release.

One is that Zeppelin wants to avoid building release binary packages against 
third-party repositories, but the scalding interpreter needs two of them 
(conjars.org/repo, maven.twttr.com).
Another is updating the LICENSE file for the binary package, since the scalding 
interpreter brings new dependency libraries into it. Running 
mvn -DskipTests -pl 'zeppelin-interpreter,scalding' package dependency:tree 
gives a detailed list of dependencies (including transitive ones).

Here's a partial list of the scalding interpreter's dependencies.

[INFO] +- com.twitter:scalding-core_2.11:jar:0.16.1-RC1:compile
[INFO] |  +- com.twitter:scalding-serialization_2.11:jar:0.16.1-RC1:compile
[INFO] |  +- com.twitter:maple:jar:0.16.1-RC1:compile
[INFO] |  +- cascading:cascading-core:jar:2.6.1:compile
[INFO] |  |  +- riffle:riffle:jar:0.1-dev:compile
[INFO] |  |  +- thirdparty:jgrapht-jdk1.6:jar:0.8.1:compile
[INFO] |  |  \- org.codehaus.janino:janino:jar:2.7.5:compile
[INFO] |  | \- org.codehaus.janino:commons-compiler:jar:2.7.5:compile
[INFO] |  +- cascading:cascading-hadoop:jar:2.6.1:compile
[INFO] |  +- cascading:cascading-local:jar:2.6.1:compile
[INFO] |  |  \- com.google.guava:guava:jar:15.0:compile
[INFO] |  +- com.twitter:chill-hadoop:jar:0.7.3:compile
[INFO] |  |  \- com.esotericsoftware.kryo:kryo:jar:2.21:compile
[INFO] |  | +- com.esotericsoftware.reflectasm:reflectasm:jar:shaded:1.07:compile
[INFO] |  | |  \- org.ow2.asm:asm:jar:4.0:compile
[INFO] |  | +- com.esotericsoftware.minlog:minlog:jar:1.2:compile
[INFO] |  | \- org.objenesis:objenesis:jar:1.2:compile
[INFO] |  +- com.twitter:chill-java:jar:0.7.3:compile
[INFO] |  +- com.twitter:chill-bijection_2.11:jar:0.7.3:compile
[INFO] |  +- com.twitter:algebird-core_2.11:jar:0.12.0:compile
[INFO] |  |  \- com.googlecode.javaewah:JavaEWAH:jar:0.6.6:compile
[INFO] |  +- com.twitter:bijection-core_2.11:jar:0.9.1:compile
[INFO] |  +- com.twitter:bijection-macros_2.11:jar:0.9.1:compile
[INFO] |  +- com.twitter:chill_2.11:jar:0.7.3:compile
[INFO] |  \- com.twitter:chill-algebird_2.11:jar:0.7.3:compile
[INFO] +- com.twitter:scalding-args_2.11:jar:0.16.1-RC1:compile
[INFO] +- com.twitter:scalding-date_2.11:jar:0.16.1-RC1:compile
[INFO] +- com.twitter:scalding-commons_2.11:jar:0.16.1-RC1:compile
[INFO] |  +- com.google.protobuf:protobuf-java:jar:2.4.1:compile
[INFO] |  +- com.twitter.elephantbird:elephant-bird-cascading2:jar:4.8:compile
[INFO] |  +- com.twitter.elephantbird:elephant-bird-core:jar:4.8:compile
[INFO] |  |  +- com.twitter.elephantbird:elephant-bird-hadoop-compat:jar:4.8:compile
[INFO] |  |  \- com.googlecode.json-simple:json-simple:jar:1.1:compile
[INFO] |  \- com.hadoop.gplcompression:hadoop-lzo:jar:0.4.19:compile
[INFO] | \- commons-logging:commons-logging:jar:1.1.1:compile
[INFO] +- com.twitter:scalding-avro_2.11:jar:0.16.1-RC1:compile
[INFO] |  +- cascading.avro:avro-scheme:jar:2.1.2:compile
[INFO] |  |  +- org.apache.avro:avro-mapred:jar:1.7.4:compile
[INFO] |  |  |  +- org.apache.avro:avro-ipc:jar:1.7.4:compile
[INFO] |  |  |  |  +- org.mortbay.jetty:jetty:jar:6.1.26:compile
[INFO] |  |  |  |  +- org.apache.velocity:velocity:jar:1.7:compile
[INFO] |  |  |  |  \- org.mortbay.jetty:servlet-api:jar:2.5-20081211:compile
[INFO] |  |  |  \- org.apache.avro:avro-ipc:jar:tests:1.7.4:compile
[INFO] |  |  \- cascading:cascading-xml:jar:2.1.6:compile
[INFO] |  | \- org.ccil.cowan.tagsoup:tagsoup:jar:1.2:compile
[INFO] |  \- org.apache.avro:avro:jar:1.7.4:compile
[INFO] | +- org.codehaus.jackson:jackson-core-asl:jar:1.8.8:compile
[INFO] | +- org.codehaus.jackson:jackson-mapper-asl:jar:1.8.8:compile
[INFO] | +- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
[INFO] | +- org.xerial.snappy:snappy-java:jar:1.0.4.1:compile
[INFO] | \- org.apache.commons:commons-compress:jar:1.4.1:compile
[INFO] |\- org.tukaani:xz:jar:1.0:compile
[INFO] +- com.twitter:scalding-parquet_2.11:jar:0.16.1-RC1:compile
[INFO] |  +- org.apache.parquet:parquet-column:jar:1.8.1:compile
[INFO] |  |  +- org.apache.parquet:parquet-common:jar:1.8.1:compile
[INFO] |  |  +- org.apache.parquet:parquet-encoding:jar:1.8.1:compile
[INFO] |  |  \- commons-codec:commons-codec:jar:1.5:compile
[INFO] |  +- org.apache.parquet:parquet-hadoop:jar:1.8.1:compile

[jira] [Created] (ZEPPELIN-884) Build failure with error 'Unable to extract spark-1.6.1-bin-hadoop2.3.tgz'

2016-05-25 Thread Prasad Wagle (JIRA)
Prasad Wagle created ZEPPELIN-884:
-

 Summary: Build failure with error 'Unable to extract 
spark-1.6.1-bin-hadoop2.3.tgz'
 Key: ZEPPELIN-884
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-884
 Project: Zeppelin
  Issue Type: Bug
Reporter: Prasad Wagle
Priority: Minor


https://travis-ci.org/apache/incubator-zeppelin/builds/132935311

travis_time:end:21a394f1:start=1464204778861960813,finish=1464205258482497742,duration=479620536929
travis_fold:end:install
travis_fold:start:before_script.1
travis_time:start:3a453634
$ travis_retry ./testing/downloadSpark.sh $SPARK_VER $HADOOP_VER
+MAX_DOWNLOAD_TIME_SEC=590
++dirname ./testing/downloadSpark.sh
+FWDIR=./testing
++cd ./testing/..
++pwd
+ZEPPELIN_HOME=/home/travis/build/apache/incubator-zeppelin
+SPARK_CACHE=.spark-dist
+SPARK_ARCHIVE=spark-1.6.1-bin-hadoop2.3
+export SPARK_HOME=/home/travis/build/apache/incubator-zeppelin/spark-1.6.1-bin-hadoop2.3
+SPARK_HOME=/home/travis/build/apache/incubator-zeppelin/spark-1.6.1-bin-hadoop2.3
+echo 'SPARK_HOME is /home/travis/build/apache/incubator-zeppelin/spark-1.6.1-bin-hadoop2.3'
SPARK_HOME is /home/travis/build/apache/incubator-zeppelin/spark-1.6.1-bin-hadoop2.3
+[[ ! -d /home/travis/build/apache/incubator-zeppelin/spark-1.6.1-bin-hadoop2.3 ]]
+mkdir -p .spark-dist
+cd .spark-dist
+[[ ! -f spark-1.6.1-bin-hadoop2.3.tgz ]]
+cp spark-1.6.1-bin-hadoop2.3.tgz ..
+cd ..
+tar zxf spark-1.6.1-bin-hadoop2.3.tgz

gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
+echo 'Unable to extract spark-1.6.1-bin-hadoop2.3.tgz'
Unable to extract spark-1.6.1-bin-hadoop2.3.tgz
+rm -rf spark-1.6.1-bin-hadoop2.3
+rm -f spark-1.6.1-bin-hadoop2.3.tgz
+set +xe

travis_time:end:3a453634:start=1464205258487517740,finish=1464205259031573705,duration=544055965
travis_fold:end:before_script.1
travis_fold:start:before_script.2
travis_time:start:22b5d9f8
$ ./testing/startSparkCluster.sh $SPARK_VER $HADOOP_VER
++dirname ./testing/startSparkCluster.sh
+FWDIR=./testing
++cd ./testing/..
++pwd
+ZEPPELIN_HOME=/home/travis/build/apache/incubator-zeppelin
+SPARK_ARCHIVE=spark-1.6.1-bin-hadoop2.3
+export SPARK_HOME=/home/travis/build/apache/incubator-zeppelin/spark-1.6.1-bin-hadoop2.3
+SPARK_HOME=/home/travis/build/apache/incubator-zeppelin/spark-1.6.1-bin-hadoop2.3
+echo 'SPARK_HOME is /home/travis/build/apache/incubator-zeppelin/spark-1.6.1-bin-hadoop2.3'
SPARK_HOME is /home/travis/build/apache/incubator-zeppelin/spark-1.6.1-bin-hadoop2.3
+export SPARK_PID_DIR=/home/travis/build/apache/incubator-zeppelin/spark-1.6.1-bin-hadoop2.3/run
+SPARK_PID_DIR=/home/travis/build/apache/incubator-zeppelin/spark-1.6.1-bin-hadoop2.3/run
+mkdir -p /home/travis/build/apache/incubator-zeppelin/spark-1.6.1-bin-hadoop2.3/run
+export SPARK_MASTER_PORT=7071
+SPARK_MASTER_PORT=7071
+export SPARK_MASTER_WEBUI_PORT=7072
+SPARK_MASTER_WEBUI_PORT=7072
+export SPARK_WORKER_WEBUI_PORT=8082
+SPARK_WORKER_WEBUI_PORT=8082
+/home/travis/build/apache/incubator-zeppelin/spark-1.6.1-bin-hadoop2.3/sbin/start-master.sh
./testing/startSparkCluster.sh: line 58: /home/travis/build/apache/incubator-zeppelin/spark-1.6.1-bin-hadoop2.3/sbin/start-master.sh: No such file or directory

travis_time:end:22b5d9f8:start=1464205259036240292,finish=1464205259046953195,duration=10712903

The command "./testing/startSparkCluster.sh $SPARK_VER $HADOOP_VER" failed and exited with 1 during .

Your build has been stopped.


