[
https://issues.apache.org/jira/browse/AMBARI-18542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-18542:
---
Summary: Ambari API request returns 500 when no stale services need to be
restarted (was: Ambari
Judy Nash created AMBARI-18542:
--
Summary: Ambari API request returns unexpected error code when no
stale services need to be restarted
Key: AMBARI-18542
URL: https://issues.apache.org/jira/browse/AMBARI-18542
Hi all,
Does anyone know of any effort from the community on security testing Spark
clusters?
E.g.:
Static source code analysis to find security flaws
Penetration testing to identify ways to compromise a Spark cluster
Fuzzing to crash Spark
Thanks,
Judy
[
https://issues.apache.org/jira/browse/AMBARI-13729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14992649#comment-14992649
]
Judy Nash commented on AMBARI-13729:
+1 LGTM.
> Change the Spark thrift server secur
I have not had any success building using sbt/sbt on Windows.
However, I have been able to build the binary by using the Maven command directly.
From: Richard Eggert [mailto:richard.egg...@gmail.com]
Sent: Sunday, October 25, 2015 12:51 PM
To: Ted Yu
Cc: User
[
https://issues.apache.org/jira/browse/AMBARI-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13513:
---
Fix Version/s: trunk
2.1.2
> Add cmd option config to spark thrift ser
[
https://issues.apache.org/jira/browse/AMBARI-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13513:
---
Attachment: SPARK-13513.patch
> Add cmd option config to spark thrift ser
[
https://issues.apache.org/jira/browse/AMBARI-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13513:
---
Attachment: (was: SPARK-13513.patch)
> Add cmd option config to spark thrift ser
-----
On Oct. 21, 2015, 10:33 p.m., Judy Nash wrote:
>
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/39530/
>
on HDInsight cluster
Thanks,
Judy Nash
[
https://issues.apache.org/jira/browse/AMBARI-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13513:
---
Summary: Add cmd option config to spark thrift server (was: Add cmd opt
config to spark thrift
Judy Nash created AMBARI-13513:
--
Summary: Add cmd opt config to spark thrift server
Key: AMBARI-13513
URL: https://issues.apache.org/jira/browse/AMBARI-13513
Project: Ambari
Issue Type
[
https://issues.apache.org/jira/browse/AMBARI-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13513:
---
Attachment: AMBARI-13513.patch
> Add cmd option config to spark thrift ser
on HDInsight cluster
Thanks,
Judy Nash
[
https://issues.apache.org/jira/browse/AMBARI-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13513:
---
Attachment: SPARK-13513.patch
> Add cmd option config to spark thrift ser
[
https://issues.apache.org/jira/browse/AMBARI-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13513:
---
Attachment: (was: SPARK-13513.patch)
> Add cmd option config to spark thrift ser
[
https://issues.apache.org/jira/browse/AMBARI-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13513:
---
Attachment: SPARK-13513.patch
> Add cmd option config to spark thrift ser
[
https://issues.apache.org/jira/browse/AMBARI-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13513:
---
Attachment: (was: AMBARI-13513.patch)
> Add cmd option config to spark thrift ser
[
https://issues.apache.org/jira/browse/AMBARI-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14968288#comment-14968288
]
Judy Nash commented on AMBARI-13513:
Here it is: https://reviews.apache.org/r/39530/
> Add
ue consumable by ambari that I
have not thought of?
- Judy
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/39530/#review103500
-------
On Oct. 21, 2015, 10:33 p.m., Judy Nash wrote:
>
> ---
-override.xml
PRE-CREATION
ambari-server/src/main/resources/stacks/HDP/2.3/services/SPARK/configuration/spark-thrift-sparkconf.xml
PRE-CREATION
Diff: https://reviews.apache.org/r/39200/diff/
Testing
---
Validated E2E on HDP 2.3 spark 1.4 cluster.
Thanks,
Judy Nash
[
https://issues.apache.org/jira/browse/AMBARI-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13382:
---
Attachment: (was: AMBARI-13382.patch)
> spark thrift server cannot load the new configurat
[
https://issues.apache.org/jira/browse/AMBARI-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13382:
---
Attachment: AMBARI-13382.patch
> spark thrift server cannot load the new configuration files ad
[
https://issues.apache.org/jira/browse/AMBARI-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13382:
---
Attachment: AMBARI-13382.patch
> spark thrift server cannot load the new configuration files ad
[
https://issues.apache.org/jira/browse/AMBARI-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13382:
---
Description:
spark thrift server cannot load the new configuration files added.
Why
1
[
https://issues.apache.org/jira/browse/AMBARI-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13382:
---
Summary: spark thrift server cannot load the new configuration files added
(was: spark thrift
Judy Nash created AMBARI-13382:
--
Summary: spark thrift server cannot load the new configuration
files added.
Key: AMBARI-13382
URL: https://issues.apache.org/jira/browse/AMBARI-13382
Project: Ambari
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: (was: AMBARI-13094.patch)
> Add Spark Thrift Ambari Serv
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14901444#comment-14901444
]
Judy Nash commented on AMBARI-13094:
Error looks like a build issue. Retrying patch.
> Add Sp
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: AMBARI-13094.patch
> Add Spark Thrift Ambari Serv
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Description:
New feature to add spark thrift server support on Ambari.
Design specification
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: (was: AMBARI-13094.patch)
> Add Spark Thrift Ambari Serv
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: AMBARI-13094.patch
> Add Spark Thrift Ambari Serv
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: AMBARI-13094.patch
> Add Spark Thrift Ambari Serv
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: (was: ambari-237.patch)
> Add Spark Thrift Ambari Serv
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: ambari-237.patch
> Add Spark Thrift Ambari Serv
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: (was: AMBARI-13094.patch)
> Add Spark Thrift Ambari Serv
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: AMBARI-13094.patch
> Add Spark Thrift Ambari Serv
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: (was:
0001-SPARK-237-Add-Spark-Thrift-Ambari-Service.6.patch)
> Add Spark Thr
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: AMBARI-13094.patch
> Add Spark Thrift Ambari Serv
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: (was: AMBARI-13094.patch)
> Add Spark Thrift Ambari Serv
Judy Nash created AMBARI-13094:
--
Summary: Add Spark Thrift Ambari Service
Key: AMBARI-13094
URL: https://issues.apache.org/jira/browse/AMBARI-13094
Project: Ambari
Issue Type: New Feature
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: Ambari Service for Spark Thrift Design Specification.docx
> Add Spark Thrift Amb
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: 0001-SPARK-237-Add-Spark-Thrift-Ambari-Service.4.patch
> Add Spark Thrift Amb
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: 0001-SPARK-237-Add-Spark-Thrift-Ambari-Service.6.patch
> Add Spark Thrift Amb
[
https://issues.apache.org/jira/browse/AMBARI-13094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated AMBARI-13094:
---
Attachment: (was:
0001-SPARK-237-Add-Spark-Thrift-Ambari-Service.4.patch)
> Add Spark Thr
Hello everyone,
Does spark thrift server support timeout?
Is there documentation I can reference for questions like these?
I know it supports cancel, but I am not sure about timeout.
Thanks,
Judy
Hi everyone,
Found a thrift server reliability issue on Spark 1.3.1 that causes thrift to
fail.
When the thrift server has too little memory allocated to the driver to process the
request, its Spark SQL session exits with an OutOfMemory exception, causing the
thrift server to stop working.
Is this a
Judy Nash created SPARK-7811:
Summary: Fix typo on slf4j configuration on
metrics.properties.template
Key: SPARK-7811
URL: https://issues.apache.org/jira/browse/SPARK-7811
Project: Spark
Issue
Hi,
How can I get a list of temporary tables via Thrift?
I have used thrift's startWithContext and registered a temp table, but I am not
seeing the temp table/RDD when running show tables.
Thanks,
Judy
Hello,
I am following the tutorial code in the SQL programming guide
(https://spark.apache.org/docs/1.2.1/sql-programming-guide.html#inferring-the-schema-using-reflection)
to try out Python on Spark 1.2.1.
The saveAsTable function works in Scala but fails in Python with Unresolved plan
found.
Broken
SPARK-4825 (https://issues.apache.org/jira/browse/SPARK-4825) looks like the
right bug, but it should've been fixed in 1.2.1.
Is a similar fix needed in Python?
From: Judy Nash
Sent: Thursday, May 7, 2015 7:26 AM
To: user@spark.apache.org
Subject: saveAsTable fails on Python with Unresolved plan
Figured it out. It was because I was using HiveContext instead of SQLContext.
FYI in case others saw the same issue.
From: Judy Nash
Sent: Thursday, May 7, 2015 7:38 AM
To: 'user@spark.apache.org'
Subject: RE: saveAsTable fails on Python with Unresolved plan found
SPARK-4825https
The expensive query can take all executor slots, but no task occupies an
executor permanently.
I.e. the second job could take some resources to execute in between
tasks of the expensive queries.
Can the fair scheduler mode help in this case? Or is it possible to set up
thrift such that
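A minimal sketch of what enabling fair scheduling for the Thrift server could look like, so a short query is not starved behind an expensive one. The pool file path and pool contents below are illustrative assumptions, not from this thread; only spark.scheduler.mode and spark.scheduler.allocation.file are standard Spark settings.

```shell
# Assumed illustration: write a FAIR pool definition, then start the
# Thrift server with fair scheduling enabled. Paths are placeholders.
cat > /tmp/fairscheduler.xml <<'EOF'
<?xml version="1.0"?>
<allocations>
  <pool name="default">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>1</minShare>
  </pool>
</allocations>
EOF

./sbin/start-thriftserver.sh \
  --conf spark.scheduler.mode=FAIR \
  --conf spark.scheduler.allocation.file=/tmp/fairscheduler.xml
```

With FAIR mode, tasks from concurrent jobs share executor slots round-robin rather than strictly FIFO, which is the behavior the question is after.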
Hi all,
Noticed a bug in my current version of Spark 1.2.1.
After a table is cached with the cache table command, a query will not read
from memory if the SQL query renames the table.
This query reads from in memory table
i.e. select hivesampletable.country from default.hivesampletable group by
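A hedged beeline session illustrating the reported behavior (assumption: run against a Spark SQL Thrift server on the default port; the table name is the one mentioned in the thread):

```shell
# Illustrative reproduction sketch of the reported 1.2.1 caching bug.
# The first SELECT reads the in-memory table; aliasing the table in the
# second query reportedly bypasses the cache.
beeline -u jdbc:hive2://localhost:10000 <<'EOF'
CACHE TABLE hivesampletable;
-- reads from the in-memory table
SELECT hivesampletable.country FROM default.hivesampletable
GROUP BY hivesampletable.country;
-- reportedly falls back to a disk scan once the table is renamed/aliased
SELECT t.country FROM default.hivesampletable t GROUP BY t.country;
EOF
```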
Hi,
I want to get telemetry metrics on spark apps activities, such as run time and
jvm activities.
Using Spark Metrics I am able to get the following sample data point on an
app:
type=GAUGE, name=application.SparkSQL::headnode0.1426626495312.runtime_ms,
value=414873
How can I match this
want to see if matching
partitions to available core count will make it faster.
I'll give your suggestion a try to see if it helps. Experimenting is a great
way to learn more about Spark internals.
From: Cheng Lian [mailto:lian.cs@gmail.com]
Sent: Monday, March 16, 2015 5:41 AM
To: Judy Nash
[mailto:so...@cloudera.com]
Sent: Thursday, February 26, 2015 2:11 AM
To: Judy Nash
Cc: user@spark.apache.org
Subject: Re: spark standalone with multiple executors in one work node
--num-executors is the total number of executors. In YARN there is not quite
the same notion of a Spark worker
Hi,
I am tuning a hive dataset on Spark SQL deployed via thrift server.
How can I change the number of partitions after caching the table on thrift
server?
I have tried the following but still getting the same number of partitions
after caching:
Spark.default.parallelism
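One plausible answer to the question above, offered as an assumption rather than the thread's resolution: spark.default.parallelism does not control Spark SQL shuffle partitioning; spark.sql.shuffle.partitions does, and it can be set per session from beeline before re-caching the table (table name is a placeholder):

```shell
# Sketch: change the SQL shuffle partition count for this session,
# then rebuild the cache so the new partitioning takes effect.
beeline -u jdbc:hive2://localhost:10000 <<'EOF'
SET spark.sql.shuffle.partitions=48;
UNCACHE TABLE my_table;   -- my_table is a hypothetical table name
CACHE TABLE my_table;
EOF
```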
Hello,
Does spark standalone support running multiple executors in one worker node?
It seems yarn has the parameter --num-executors to set number of executors to
deploy, but I do not find the equivalent parameter in spark standalone.
Thanks,
Judy
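For the standalone question above, the approach available at the time (stated here as a sketch, not as the list's answer) was to run multiple worker instances per machine via conf/spark-env.sh, since each standalone worker hosts at most one executor per application:

```shell
# Sketch of conf/spark-env.sh on each node: 4 worker instances per
# machine, each offering 4 cores and 8g, so a single application can
# get up to 4 executors on that node. Values are illustrative.
export SPARK_WORKER_INSTANCES=4
export SPARK_WORKER_CORES=4
export SPARK_WORKER_MEMORY=8g
```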
[
https://issues.apache.org/jira/browse/SPARK-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated SPARK-5914:
-
Summary: Enable spark-submit to run requiring only user permission on
windows (was: Enable spark-submit
account in this case). With this fix,
slave will be able to run without admin permission.
FYI: master thrift server works fine with only user permission, so no issue
there.
From: Judy Nash [mailto:judyn...@exchange.microsoft.com]
Sent: Thursday, February 19, 2015 12:26 AM
To: Akhil Das; dev
[
https://issues.apache.org/jira/browse/SPARK-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated SPARK-5914:
-
Summary: Enable spark-submit to run with only user permission on windows
(was: Spark-submit cannot
@01D04BDA.A74C65E0]
From: Akhil Das [mailto:ak...@sigmoidanalytics.com]
Sent: Wednesday, February 18, 2015 10:40 PM
To: Judy Nash
Cc: u...@spark.apache.org
Subject: Re: spark slave cannot execute without admin permission on windows
You need not require admin permission, but just make sure all those jars have
Judy Nash created SPARK-5914:
Summary: Spark-submit cannot execute without machine admin
permission on windows
Key: SPARK-5914
URL: https://issues.apache.org/jira/browse/SPARK-5914
Project: Spark
Hi,
Is it possible to configure spark to run without admin permission on windows?
My current setup runs master and slave successfully with admin permission.
However, if I downgrade the permission level from admin to user, SparkPi fails
with the following exception on the slave node:
Exception in thread
It should relay the queries to Spark (i.e. you shouldn't see any MR job on
Hadoop; you should see activity on the Spark app in the headnode UI).
Check your hive-site.xml. Are you directing to the hive server 2 port instead
of spark thrift port?
Their default ports are both 1.
From: Andrew
Judy Nash created SPARK-5708:
Summary: Add Slf4jSink to Spark Metrics Sink
Key: SPARK-5708
URL: https://issues.apache.org/jira/browse/SPARK-5708
Project: Spark
Issue Type: Bug
Hello,
Working on SPARK-5708 (https://issues.apache.org/jira/browse/SPARK-5708) - Add
Slf4jSink to Spark Metrics Sink.
Wrote a new Slf4jSink class (see patch attached), but the new class is not
packaged as part of spark-assembly jar.
Do I need to update build config somewhere to have this
Thanks Patrick! That was the issue.
Built the jars on windows env with mvn and forgot to run make-distributions.ps1
afterward, so was looking at old jars.
From: Patrick Wendell [mailto:pwend...@gmail.com]
Sent: Monday, February 9, 2015 10:43 PM
To: Judy Nash
Cc: dev@spark.apache.org
Subject: Re
Hi all,
Looking at spark metricsServlet.
What is the url exposing driver executor json response?
Found master and worker successfully, but can't find the URL that returns JSON
for the other 2 sources.
Thanks!
Judy
Yes. It's compatible with HDP 2.1
-Original Message-
From: bhavyateja [mailto:bhavyateja.potin...@gmail.com]
Sent: Friday, January 16, 2015 3:17 PM
To: user@spark.apache.org
Subject: spark 1.2 compatibility
Is Spark 1.2 compatible with HDP 2.1?
I should clarify on this. I personally have used HDP 2.1 + Spark 1.2 and have
not seen a problem.
However, officially HDP 2.1 + Spark 1.2 is not a supported scenario.
-Original Message-
From: Judy Nash
Sent: Friday, January 16, 2015 5:35 PM
To: 'bhavyateja'; user@spark.apache.org
Thanks Cheng. Tried it out and saw the InMemoryColumnarTableScan word in the
physical plan.
From: Cheng Lian [mailto:lian.cs@gmail.com]
Sent: Friday, December 12, 2014 11:37 PM
To: Judy Nash; user@spark.apache.org
Subject: Re: Spark SQL API Doc IsCached as SQL command
There isn’t a SQL
Hello,
Few questions on Spark SQL:
1) Does Spark SQL support an equivalent SQL query for the Scala command
IsCached(table name)?
2) Is there a documentation spec I can reference for questions like this?
Closest doc I can find is this one:
[
https://issues.apache.org/jira/browse/SPARK-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated SPARK-4700:
-
Description:
Currently thrift only supports TCP connections.
The JIRA is to add HTTP support to spark
[
https://issues.apache.org/jira/browse/SPARK-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Judy Nash updated SPARK-4700:
-
Affects Version/s: 1.3.0
Add Http support to Spark Thrift server
SQL experts on the forum can confirm on this though.
From: Cheng Lian [mailto:lian.cs@gmail.com]
Sent: Tuesday, December 9, 2014 6:42 AM
To: Anas Mosaad
Cc: Judy Nash; user@spark.apache.org
Subject: Re: Spark-SQL JDBC driver
According to the stacktrace, you were still using SQLContext rather
...@cloudera.com]
Sent: Tuesday, December 2, 2014 11:35 AM
To: Judy Nash
Cc: Patrick Wendell; Denny Lee; Cheng Lian; u...@spark.incubator.apache.org
Subject: Re: latest Spark 1.2 thrift server fail with NoClassDefFoundError on
Guava
On Tue, Dec 2, 2014 at 11:22 AM, Judy Nash judyn...@exchange.microsoft.com
You can use thrift server for this purpose then test it with beeline.
See doc:
https://spark.apache.org/docs/latest/sql-programming-guide.html#running-the-thrift-jdbc-server
From: Anas Mosaad [mailto:anas.mos...@incorta.com]
Sent: Monday, December 8, 2014 11:01 AM
To: user@spark.apache.org
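The suggestion above boils down to two commands (paths are relative to the Spark install; the master URL is a placeholder and 10000 is the default Thrift server port):

```shell
# Start the HiveServer2-compatible JDBC/ODBC Thrift server.
./sbin/start-thriftserver.sh --master spark://master:7077

# Connect with beeline and run a query against it.
./bin/beeline -u jdbc:hive2://localhost:10000 -e "SHOW TABLES;"
```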
Thanks Josh. That was the issue.
From: Josh Rosen [mailto:rosenvi...@gmail.com]
Sent: Friday, December 5, 2014 3:21 PM
To: Judy Nash; dev@spark.apache.org
Subject: Re: build in IntelliJ IDEA
If you go to “File - Project Structure” and click on “Project” under the
“Project settings” heading, do
Hello,
Are there ways we can programmatically get the health status of master and
slave nodes, similar to Hadoop Ambari?
The wiki seems to suggest there are only the web UI or instrumentations
(http://spark.apache.org/docs/latest/monitoring.html).
Thanks,
Judy
Hi everyone,
Have a newbie question on using IntelliJ to build and debug.
I followed this wiki to setup IntelliJ:
https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools#UsefulDeveloperTools-BuildingSparkinIntelliJIDEA
Afterward I tried to build via Toolbar (Build Rebuild
Judy Nash created SPARK-4700:
Summary: Add Http support to Spark Thrift server
Key: SPARK-4700
URL: https://issues.apache.org/jira/browse/SPARK-4700
Project: Spark
Issue Type: New Feature
Any suggestion on how can user with custom Hadoop jar solve this issue?
-Original Message-
From: Patrick Wendell [mailto:pwend...@gmail.com]
Sent: Sunday, November 30, 2014 11:06 PM
To: Judy Nash
Cc: Denny Lee; Cheng Lian; u...@spark.incubator.apache.org
Subject: Re: latest Spark 1.2
Have you checked out the wiki here?
http://spark.apache.org/docs/latest/building-with-maven.html
A couple things I did differently from you:
1) I got the bits directly from github (https://github.com/apache/spark/). Use
branch 1.1 for spark 1.1
2) execute maven command on cmd (powershell misses
I have found the following to work for me on win 8.1:
1) run sbt assembly
2) Use Maven. You can find the maven commands for your build at :
docs\building-spark.md
-Original Message-
From: Ishwardeep Singh [mailto:ishwardeep.si...@impetus.co.in]
Sent: Thursday, November 27, 2014 11:31
-
From: Patrick Wendell [mailto:pwend...@gmail.com]
Sent: Wednesday, November 26, 2014 8:17 AM
To: Judy Nash
Cc: Denny Lee; Cheng Lian; u...@spark.incubator.apache.org
Subject: Re: latest Spark 1.2 thrift server fail with NoClassDefFoundError on
Guava
Just to double check - I looked at our own
Made progress but still blocked.
After recompiling the code on cmd instead of PowerShell, now I can see all 5
classes as you mentioned.
However I am still seeing the same error as before. Anything else I can check
for?
From: Judy Nash [mailto:judyn...@exchange.microsoft.com]
Sent: Monday
AM
To: Judy Nash; u...@spark.incubator.apache.org
Subject: Re: latest Spark 1.2 thrift server fail with NoClassDefFoundError on
Guava
Oh so you're using Windows. What command are you using to start the Thrift
server then?
On 11/25/14 4:25 PM, Judy Nash wrote:
Made progress but still blocked.
After
21, 2014 12:42 AM
To: Judy Nash
Cc: u...@spark.incubator.apache.org
Subject: Re: beeline via spark thrift doesn't retain cache
1) make sure your beeline client connected to Hiveserver2 of Spark SQL.
You can found execution logs of Hiveserver2 in the environment of
start-thriftserver.sh.
2) what
-examples-1.2.1-SNAPSHOT-hadoop2.4.0.jar 100
Had used the same build steps on spark 1.1 and had no issue.
From: Denny Lee [mailto:denny.g@gmail.com]
Sent: Tuesday, November 25, 2014 5:47 PM
To: Judy Nash; Cheng Lian; u...@spark.incubator.apache.org
Subject: Re: latest Spark 1.2 thrift server fail
this file:
com/google/inject/internal/util/$Preconditions.class
Any suggestion on how to fix this?
Very much appreciate the help as I am very new to Spark and open source
technologies.
From: Cheng Lian [mailto:lian.cs@gmail.com]
Sent: Monday, November 24, 2014 8:24 PM
To: Judy Nash; u
Hi,
Thrift server is failing to start for me on latest spark 1.2 branch.
I got the error below when I start thrift server.
Exception in thread main java.lang.NoClassDefFoundError:
com/google/common/base/Preconditions
at org.apache.hadoop.conf.Configuration$DeprecationDelta.init(Configur
Hi friends,
I have successfully set up the thrift server and run beeline on top.
Beeline can handle select queries just fine, but it cannot seem to do any kind
of caching/RDD operations.
i.e.
1) Command cache table doesn't work. See error:
Error: Error while processing statement: FAILED: