Hi,
what do you get running just 'sudo netstat'?
Also, what's the output of 'jps -mlv' when running your spark application?
Can you post the contents of the files in $SPARK_HOME/conf ?
Are there any special firewall rules in place, forbidding connections
on localhost?
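(On a Mac, one way to check: the stock application firewall tool lives at
/usr/libexec/ApplicationFirewall/socketfilterfw, so something like

  sudo /usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate
  sudo pfctl -sr

should show the firewall state and any pf rules. Just a suggestion for where
to look.)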
Regarding the IP address chan […]
Hi Jakob, sorry for my late reply.
I tried to run the below; it came back with "netstat: lunt: unknown or
uninstrumented protocol".
I also tried uninstalling version 1.6.0 and installing version 1.5.2 with Java 7
and Scala version 2.10.6; I got the same error messages.
Do you think it would be worth me […]
Regarding my previous message: I forgot to mention to run netstat as
root (sudo netstat -plunt).
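(Caveat: netstat on macOS doesn't support the Linux-style -plunt flags; if it
complains, a rough equivalent there would be lsof, something like:

  sudo lsof -iTCP -sTCP:LISTEN -n -P
)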
Sorry for the noise.
On Fri, Mar 11, 2016 at 12:29 AM, Jakob Odersky wrote:
> Some more diagnostics/suggestions: […]
Some more diagnostics/suggestions:
1) are other services listening to ports in the 4000 range (run
"netstat -plunt")? Maybe there is an issue with the error message
itself.
2) are you sure the correct java version is used? java -version
3) can you revert all installation attempts you have done so far? […]
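In shell form, points 1) and 2) come down to something like (the grep patterns
are just my guess at what to look for):

  sudo netstat -plunt | grep ':4'   # Linux-style; on macOS use lsof instead
  java -version
  env | grep -i spark               # any SPARK_* leftovers from old installs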
If you type ‘whoami’ in the terminal and it responds with ‘root’, then you’re
the superuser.
However, as mentioned below, I don’t think it’s a relevant factor.
> On Mar 10, 2016, at 12:02 PM, Aida Tefera wrote:
> Hi Tristan, I'm afraid I wouldn't know whether I'm running it as super user. […]
"-Dx=y")
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers
# Generic options for the daemons used in the standalone deploy mode
# - SPARK_CONF_DIR Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - SPARK_LOG_DIR Where log files are stored. (Default
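(Side note: if the bind errors persist, a workaround that often helps on Macs
is to pin Spark to the loopback address via that same file; SPARK_LOCAL_IP is
a standard variable from the template above:

  # in $SPARK_HOME/conf/spark-env.sh
  export SPARK_LOCAL_IP=127.0.0.1
)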
Hi Tristan,
I'm afraid I wouldn't know whether I'm running it as super user.
I have Java version 1.8.0_73 and Scala version 2.11.7
Sent from my iPhone
> On 9 Mar 2016, at 21:58, Tristan Nixon wrote:
> That’s very strange. I just un-set my SPARK_HOME env param, downloaded a
> fresh 1.6.0 […]
It really shouldn’t; if anything, running as superuser should ALLOW you to bind
to ports 0, 1, etc.
It seems very strange that it should even be trying to bind to these ports -
maybe a JVM issue?
I wonder if the old Apple JVM implementations could have used some different
native libraries for cor […]
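(One way to test that theory: pin the driver port explicitly and see whether
the bind error changes. spark.driver.port is a standard Spark setting; 51000
is just an arbitrary high port:

  ./bin/spark-shell --conf spark.driver.port=51000
)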
It should just work with these steps. You don't need to configure much. As
mentioned, some settings on your machine are overriding default Spark
settings.
Even running as super-user should not be a problem; it works just fine as
super-user as well.
Can you tell us what version of Java you are using? […]
That’s very strange. I just un-set my SPARK_HOME env param, downloaded a fresh
1.6.0 tarball,
unzipped it to local dir (~/Downloads), and it ran just fine - the driver port
is some randomly generated large number.
So SPARK_HOME is definitely not needed to run this.
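(For reference, the whole sequence is roughly the following; the exact tarball
name depends on which Hadoop build you picked on the download page:

  cd ~/Downloads
  tar -xzf spark-1.6.0-bin-hadoop2.6.tgz
  cd spark-1.6.0-bin-hadoop2.6
  ./bin/spark-shell
)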
Aida, you are not running thi […]
Hi Jakob,
Tried running the command env | grep SPARK; nothing comes back.
Tried env | grep Spark; it matches the directory I created for Spark when I
downloaded the tgz file; comes back with PWD=/Users/aidatefera/Spark.
Tried running ./bin/spark-shell; comes back with the same error as below, i.e.
could […]
> [stack trace truncated; frames in the Spark REPL, e.g.
> org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)]
As Tristan mentioned, it looks as though Spark is trying to bind on
port 0 and then 1 (which is not allowed). Could it be that some
environment variables from your previous installation attempts are
polluting your configuration?
What does running "env | grep SPARK" show you?
Also, try running just ./bin/spark-shell from the extracted directory.
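If anything does show up there, a clean test would be something like:

  unset SPARK_HOME
  env | grep SPARK      # should now print nothing
  ./bin/spark-shell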
> [quoted stack trace truncated; repeated reflection frames
> (sun.reflect.NativeMethodAccessorImpl.invoke, sun.reflect.DelegatingMethodAccessorImpl.invoke,
> java.lang.reflect.Method.invoke(Method.java:497)) and Spark REPL internals
> (org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065),
> org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814),
> org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124),
> org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945),
> org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059))]
> [Maven build summary, truncated:]
> [INFO] Spark Project SQL .................. SKIPPED
> [INFO] Spark Project ML Library ........... SKIPPED
> [INFO] Spark Project Tools ................ SKIPPED
[stack trace truncated:]
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
<console>:16: error: not found: value sqlContext
         import sqlContext.implicits._
                ^
<console>:16: error: not found: value sqlContext
         import sqlContext.sql […]
> On 8 Mar 2016, at 18:06, Aida wrote:
>
> Detected Maven Version: 3.0.3 is not in the allowed range 3.3.3.
I'd look at that error message and fix it.
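Concretely: the enforcer is saying your Maven (3.0.3) is older than the 3.3.3
minimum, so either upgrade Maven (e.g. brew install maven) or use the wrapper
that ships in the Spark source tree, which fetches a suitable Maven on its own:

  ./build/mvn -DskipTests clean package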
> [Maven build output, truncated:]
> [INFO] Spark Project Hive ...................... SKIPPED
> [INFO] Spark Project Docker Integration Tests .. SKIPPED
> [INFO] Spark Project REPL ...................... SKIPPED
> [INFO] Spark Project Assembly .................. SKIPPED
> [INFO] Spark Project External Flume Sink ....... SKIPPED
> [INFO] Spark Project External Flume ............ SKIPPED
> [INFO] Spark Project External Flume Assembly ... SKIPPED
> [INFO] Spark Project External MQTT ............. SKIPPED
> [INFO] Spark Project External Kafka ............ SKIPPED
> [INFO] Spark Project Examples .................. SKIPPED
> [INFO] Spark Project External Kafka Assembly ... SKIPPED
> [INFO] Total time: 1.745s
> [INFO] Finished at: Tue Mar 08 18:01:48 GMT 2016
> [INFO] Final Memory: 19M/183M
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-enforcer-plugin:1.4:enforce
> (enforce-versions) on project spark-parent_2.10: Some Enforcer rules have failed.
> Look above for specific messages explaining why the rule failed. -> [Help 1]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] For more information about the errors and possible solutions, please
> read the following […]
[sbt output, truncated:]
[…]apCodes/sbt-pom-reader.git
/Users/aidatefera/.sbt/0.13/staging/ad8e8574a5bcb2d22d23/sbt-pom-reader
[error] Use 'last' for the full log.
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore?
[terminal output truncated:]
ukdrfs01:spark-1.6.0 aidatefera$
Installing Spark on Mac is similar to how you install it on Linux.
I use a Mac and have written a blog post on how to install Spark; here is the
link: http://vishnuviswanath.com/spark_start.html
Hope this helps.
On Fri, Mar 4, 2016 at 2:29 PM, Simon Hafner wrote:
> I'd try `brew install s […]
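(For what it's worth, the Homebrew formula for Spark is called apache-spark,
so the full command would presumably have been:

  brew install apache-spark
)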
> […] anything else?
>
> I am very eager to learn more about Spark but am unsure about the best way
> to do it.
>
> I would be happy for any suggestions or ideas
>
> Many thanks,
>
> Aida