Hi,
I already posted a question but didn't get an answer, so I'll try to be
more informative this time.
I'm working on an isolated network and trying to install Zeppelin on a
cluster inside that network.
I built Zeppelin on a regular computer (using the command line: mvn
install
Thanks for the suggestion Moon. Unfortunately, I got the Invocation
exception:
sqlContext.sql("CREATE EXTERNAL TABLE IF NOT EXISTS test3(b int, w string,
a string, x int, y int) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 'hdfs://ip:8020/user/flume/'")
sqlContext.sql("select * from
Okay, I'm really confused haha, some things are working, but I'm not sure
why...
So this works:
hc.sql("CREATE EXTERNAL TABLE IF NOT EXISTS test3(date int, date_time
string, time string, sensor int, value int) ROW FORMAT DELIMITED FIELDS
TERMINATED BY ',' LOCATION
Hi IT CTO,
Thank you for the reply!
Because I couldn't find any way to use charts (bar chart, pie chart, etc.) in
Zeppelin
without writing my own interpreter... If you know of one, please let me know.
And now, looking at the comment on DepInterpreter, it seems it is used for Spark:
/**
*
Ahh okay, so when I don't use or create a HiveContext it now works. It seems,
though, that I have to apply the schema to val results =
sqlContext.sql("select * from table") before being able to register the
table in a way that sqlContext can see it.
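For the record, a minimal sketch of the pattern that worked (table and
variable names are invented; this assumes a Spark 1.x shell with a
SQLContext already bound to sqlContext, so it is not runnable standalone):

```scala
// Hypothetical sketch: run a query, then register the resulting DataFrame
// as a temp table so later sqlContext.sql(...) calls can see it.
val results = sqlContext.sql("select * from some_table")
results.registerTempTable("results_tmp")
// The temp table is now visible to subsequent queries:
sqlContext.sql("select count(*) from results_tmp").show()
```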
Thanks for the answers Moon!
On Tue, Jun
Hello Daniel
I set up some demos demonstrating the added value of Zeppelin for the
Spark-over-Cassandra combo. Source code here:
https://github.com/doanduyhai/incubator-zeppelin/tree/Spark_Cassandra_Demo
Video of the meetup I gave in Amsterdam here:
https://youtu.be/Y_AjbK4LKB0?t=1118
You
Hi,
This is a Spark/Hive question -
When I use the %sql interpreter I can access both tables registered using
registerTempTable and, at the same time, tables which are registered in Hive.
Should I expect different performance when running
select * from bank
vs
select * from bank_hive
*** assuming
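For context, this is roughly how the two kinds of tables come to exist (a
hedged sketch with invented names; it assumes Zeppelin's Spark interpreter
with a HiveContext bound to sqlc, so it is not runnable standalone):

```scala
// Hypothetical sketch: "bank" is an in-memory temp table registered from
// a DataFrame, visible only to this Spark context...
val bankDF = sqlc.sql("select * from some_source")
bankDF.registerTempTable("bank")
// ...while "bank_hive" is a permanent table in the Hive metastore:
sqlc.sql("CREATE TABLE IF NOT EXISTS bank_hive (id int, balance double)")
// Both are then queryable from the %sql interpreter.
```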
Hi,
right now, I'm not aware of such a configuration in Zeppelin (please
feel free to open an issue / submit a patch).
AFAIK dynamic YARN resource allocation is up to the user and is not
enabled by default right now, which looks like one possible
solution to the problem you describe (at least
Actually, reading some more about Slider, I am not sure it's what I thought
it was... too fast on sending this mail.
Eran
On Tue, Jun 30, 2015 at 9:46 AM IT CTO goi@gmail.com wrote:
If I am not mistaken, Apache Slider aims to handle dynamic growing and
shrinking of applications on YARN, but
Hi Su,
The jsonFile method in Spark SQL (1.3.1) only expects the path to a file. Also
make sure that when you store data in HDFS, your JSON data is one object per
line. For more details follow the link:
SQLContext (Spark 1.3.1 JavaDoc)
jsonFile(String path): Loads a JSON file (one object per line),
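To illustrate the one-object-per-line requirement (a hedged sketch; the path
and field names are invented, and this assumes a Spark 1.3.x shell with
sqlContext in scope, so it is not runnable standalone):

```scala
// people.json must contain ONE complete JSON object PER LINE, e.g.:
//   {"name": "Ann", "age": 31}
//   {"name": "Bob", "age": 45}
// A single pretty-printed JSON array spanning several lines will NOT parse.
val df = sqlContext.jsonFile("hdfs:///user/demo/people.json")
df.registerTempTable("people")
sqlContext.sql("select name from people where age > 40").show()
```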
I had the same issue,
adding -U to the build command mvn clean package -U -P worked for me.
The -U flag forces Maven to re-check remote repositories for updated
snapshot dependencies.
Regards
On Thu, Jun 25, 2015 at 10:39 PM, tog guillaume.all...@gmail.com wrote:
I tried diff -rq on this, but the list of differences is huge, and as I
Alex -
How are you addressing YARN's need to have dynamic ports available on
the yarn-client so the application master can connect to it? I've run into
an issue where, if I try to run Docker on Mesos in this setup, the
containers fail because the application master tries to connect to the
container,
BTW, this isn't working either:
val sidNameDF = hc.sql("select sid, name from hive_table limit 10")
val sidNameDF2 = hc.createDataFrame(sidNameDF.rdd, sidNameDF.schema)
sidNameDF2.registerTempTable("tmp_sid_name2")
On Tue, Jun 30, 2015 at 1:45 PM, Ophir Cohen oph...@gmail.com wrote:
I've made