Hi Anand, I don't see your reply. Can you please send it again?
On Sun, Aug 9, 2015 at 6:56 PM, Anand Raghavan
wrote:
> R
>
> *From*: Ashwin Shankar [mailto:ashwinshanka...@gmail.com]
> *Sent*: Saturday, August 08, 2015 03:54 AM
> *To*: users@zeppelin.incubator.apache.org <
> users@zeppelin.incuba
Thanks, moon. For the Presto interpreter, it looks like it doesn't read from
system properties (-D) at all; it reads only from what is saved on the
"interpreter page"(?). Basically, it might be nice if interpreters worked in a
way such that users don't have to configure properties on the interpreter
page, and
Hi all,
We've been using Zeppelin for a little while with CDH clusters and it's
great.
Recently a few of us have tried getting it working on local dev machines
(Ubuntu 14.04) without clusters, i.e. a local[*] master and a separately
downloaded Spark 1.3.0 referenced through spark.home.
What we're se
FYI, SparkInterpreter is supposed to accept any property that starts with
'spark.', even if it's not listed in the default property list in the UI.
PR 197 itself looks fine, but you are still able to add those properties even
if they're not listed in the UI.
Thanks,
moon
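A sketch of the -D route mentioned earlier in the thread, assuming properties are passed to the Zeppelin JVM via conf/zeppelin-env.sh (the property names and values below are examples, not recommendations):

```shell
# conf/zeppelin-env.sh -- example spark.* properties passed as -D
# system properties to the Zeppelin process
export ZEPPELIN_JAVA_OPTS="-Dspark.executor.memory=4g -Dspark.cores.max=8"
```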
On Sun, Aug 9, 2015 at 10:56 PM ÐΞ€ρ@Ҝ (๏̯͡๏) w
Hi Piyush,
A separate instance of SparkILoop/SparkIMain for each notebook while sharing
the SparkContext sounds great.
Actually, I tried to do it and found a problem: multiple SparkILoops could
generate the same class name, and the Spark executor confuses class names
since they're reading classes from sin
What are the errors?
Date: Sun, 9 Aug 2015 20:06:13 +
From: djilokui...@yahoo.fr
To: m...@apache.org
CC: users@zeppelin.incubator.apache.org
Subject: Convert Spark dataframe to pandas dataframe in Zeppelin
Hi,
How do I convert a Spark DataFrame to a pandas DataFrame in Zeppelin?
I tried Mydatafram
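A minimal sketch of one way to do this, assuming a %pyspark paragraph and a small DataFrame (toPandas() collects all rows to the driver, so it is only safe for data that fits in driver memory; the sample data here is hypothetical):

```python
%pyspark
# Build a tiny Spark DataFrame for illustration, then convert it.
df = sqlContext.createDataFrame([(1, "a"), (2, "b")], ["id", "name"])
pdf = df.toPandas()   # pyspark.sql.DataFrame -> pandas.DataFrame
print(pdf.dtypes)
```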
Finally I got it working by hacking the classpath generation to use the Spark
1.4 assembly jar first, but there has to be a cleaner way.
From: davidkl...@hotmail.com
To: users@zeppelin.incubator.apache.org
Subject: RE: Trying to build with support for Yarn, Spark 1.4 and Hadoop 2.7
Date: Mon, 10
Some progress here. The error was caused by malformed XML in yarn-site.xml.
After fixing it I still get an error because of a Typesafe Config version mismatch:
java.lang.NoSuchMethodError:
com.typesafe.config.Config.getDuration(Ljava/lang/String;Ljava/util/concurrent/TimeUnit;)J
at
akka.util.Helper
Hi Eric,
I'd also be very interested in this, as we are also working in a Scala 2.11
environment with Spark 1.4 and we're having trouble just building Zeppelin.
Any help is much appreciated.
Kind regards,
William
-Original Message-
From: Eric Charles [mailto:e...@apache.org]
Sent: 0
FYI, here is the JIRA ticket:
https://issues.apache.org/jira/browse/ZEPPELIN-216 and the related PR:
https://github.com/apache/incubator-zeppelin/pull/200
On 6 August 2015 at 12:26, IT CTO wrote:
> +1 for chrome
>
>
> On Thu, Aug 6, 2015 at 12:26 PM Christian Tzolov
> wrote:
>
>> Hei,
>>
>> Whe
java:-2
INFO [2015-08-10 20:14:07,715] ({pool-1-thread-5}
Logging.scala[logInfo]:59) - Asked to cancel job group
zeppelin-20150810-172402_743954903
INFO [2015-08-10 20:14:07,908] ({pool-1-thread-5}
Logging.scala[logInfo]:59) - Asked to cancel job group
zeppelin-20150810-172402_743954903
*Logs in ze
Hi,
How do I use MySQL with zeppelin? Any instructions?
Thanks
Raj
Sent from my iPhone
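One common approach at this point is to read MySQL through Spark's JDBC data source from the Spark interpreter. A sketch, assuming Spark 1.4 and that the MySQL connector jar has been loaded first (the URL, table, and credentials below are hypothetical examples):

```scala
// First, in a %dep paragraph, load the driver, e.g.:
//   z.load("mysql:mysql-connector-java:5.1.36")
// Then read a table via the JDBC data source:
val df = sqlContext.read.format("jdbc").options(Map(
  "url"      -> "jdbc:mysql://localhost:3306/mydb", // hypothetical
  "dbtable"  -> "mytable",                          // hypothetical
  "user"     -> "dbuser",
  "password" -> "secret"
)).load()
df.registerTempTable("mytable") // now queryable from %sql paragraphs
```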
> val orders = sc.textFile("file:///Users/jzhang/Downloads/SampleData.csv")
>.map{line=>line.split(",")}
The above code won't compile; it raises the error:
> orders: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[51] at
> textFile at <console>:39
> <console>:1: error: illegal start of definition
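This error likely comes from the continuation line starting with "." — when the interpreter evaluates the first line as a complete statement, the following ".map{...}" line is parsed as a new statement and rejected. A sketch of the usual fix, keeping the dot at the end of the line:

```scala
// Ending the line with '.' tells the REPL the expression continues
// onto the next line; starting the next line with '.' does not compile.
val orders = sc.textFile("file:///Users/jzhang/Downloads/SampleData.csv").
  map { line => line.split(",") }
```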
Hi Moon,
Any suggestions on this? We have to wait a lot when multiple people are
working with Spark.
Can we create a separate instance of SparkILoop, SparkIMain and
print streams for each notebook while sharing the SparkContext,
ZeppelinContext, SQLContext and DependencyResolver, and then use the
parallel scheduler
Hello Deepujain,
Thanks for the tip, I tried that but I still get the warnings:
[WARNING] The requested profile "spark-1.4" could not be activated because it
does not exist.
[WARNING] The requested profile "hadoop-2.6" could not be activated because it
does not exist.
I followed the steps you d
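The "could not be activated because it does not exist" warnings usually mean the named profiles are not defined in the POM actually being built (for example, an older checkout or running Maven from a subdirectory instead of the project root). Assuming a source tree whose root pom.xml defines these profiles, a typical build command looks like:

```shell
# Run from the top of the incubator-zeppelin checkout; the -P profile
# names must exist in the root pom.xml of this tree.
mvn clean package -Pspark-1.4 -Phadoop-2.6 -Pyarn -DskipTests
```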