hello

2021-01-02 Thread somni fourfiveone
Hi,

Is anybody out there?

Somni-451



Re: Hello !

2016-04-11 Thread mylisttech
Thank you!



On Apr 12, 2016, at 1:41, Ted Yu  wrote:

> For SparkR, please refer to https://spark.apache.org/docs/latest/sparkr.html
> 
> bq. on Ubuntu or CentOS
> 
> Both platforms are supported.
> 
> On Mon, Apr 11, 2016 at 1:08 PM,  wrote:
> Dear Experts,
> 
> I am posting this for your information. I am a newbie to Spark.
> I am interested in understanding Spark at the internal level.
> 
> I need your opinion: which Unix flavor should I install Spark on, Ubuntu or
> CentOS? I have had enough trouble with the Windows version (1.6.1 with the
> Hadoop 2.6 pre-built binaries, which keeps giving me exceptions).
> 
> I have worked with R on Windows to date. Is there an R for Unix? I have not
> googled this either; sorry about that. I just want to make sure SparkR has a
> smooth run.
> 
> Thanks in advance.
> Harry
> 
> 
> 
> 
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
> 
> 


Re: Hello !

2016-04-11 Thread Ted Yu
For SparkR, please refer to https://spark.apache.org/docs/latest/sparkr.html

bq. on Ubuntu or CentOS

Both platforms are supported.

On Mon, Apr 11, 2016 at 1:08 PM,  wrote:

> Dear Experts,
>
> I am posting this for your information. I am a newbie to Spark.
> I am interested in understanding Spark at the internal level.
>
> I need your opinion: which Unix flavor should I install Spark on, Ubuntu or
> CentOS? I have had enough trouble with the Windows version (1.6.1 with the
> Hadoop 2.6 pre-built binaries, which keeps giving me exceptions).
>
> I have worked with R on Windows to date. Is there an R for Unix? I have not
> googled this either; sorry about that. I just want to make sure SparkR has a
> smooth run.
>
> Thanks in advance.
> Harry
>
>
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>


Hello !

2016-04-11 Thread mylisttech
Dear Experts,

I am posting this for your information. I am a newbie to Spark.
I am interested in understanding Spark at the internal level.

I need your opinion: which Unix flavor should I install Spark on, Ubuntu or
CentOS? I have had enough trouble with the Windows version (1.6.1 with the
Hadoop 2.6 pre-built binaries, which keeps giving me exceptions).

I have worked with R on Windows to date. Is there an R for Unix? I have not
googled this either; sorry about that. I just want to make sure SparkR has a
smooth run.

Thanks in advance.
Harry







Re: Unable to start Pi (hello world) application on Spark 1.4

2015-06-28 Thread ๏̯͡๏
Figured it out.

All the jars that were specified with --driver-class-path are now exported
through SPARK_CLASSPATH, and it is working now.

I thought SPARK_CLASSPATH was dead. Looks like it keeps flipping on and off.
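
For later readers, a minimal sketch of the configuration-based route, in case
the SPARK_CLASSPATH workaround stops being honoured again: spark.driver.extraClassPath
and spark.executor.extraClassPath are the configuration equivalents of
--driver-class-path. Note that the driver-side setting normally has to be supplied
before the driver JVM starts (spark-defaults.conf or --conf on spark-submit), so
setting it from SparkConf as below is illustrative only, and the jar path is simply
the one from the commands quoted in this thread.

import org.apache.spark.{SparkConf, SparkContext}

object ClasspathCheck {
  def main(args: Array[String]): Unit = {
    val hadoopCommonJar =
      "/apache/hadoop/share/hadoop/common/hadoop-common-2.4.1-EBAY-2.jar"
    val conf = new SparkConf()
      .setAppName("classpath-check")
      // Configuration equivalents of --driver-class-path / SPARK_CLASSPATH.
      .set("spark.driver.extraClassPath", hadoopCommonJar)
      .set("spark.executor.extraClassPath", hadoopCommonJar)
      // Fall back to local mode when not launched through spark-submit.
      .setIfMissing("spark.master", "local[*]")
    val sc = new SparkContext(conf)
    // Print the classpath the driver actually ended up with, to confirm the jar made it on.
    println(System.getProperty("java.class.path"))
    println(sc.parallelize(1 to 10).count()) // trivial job to confirm executors come up
    sc.stop()
  }
}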

On Sun, Jun 28, 2015 at 12:55 PM, ÐΞ€ρ@Ҝ (๏̯͡๏)  wrote:

> Any thoughts on this?
>
> On Fri, Jun 26, 2015 at 2:27 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) 
> wrote:
>
>> It used to work with 1.3.1; however, with 1.4.0 I get the following
>> exception:
>>
>>
>> export SPARK_HOME=/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4
>> export
>> SPARK_JAR=/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4/lib/spark-assembly-1.4.0-hadoop2.4.0.jar
>> export HADOOP_CONF_DIR=/apache/hadoop/conf
>> cd $SPARK_HOME
>> ./bin/spark-submit -v --master yarn-cluster --driver-class-path
>> /apache/hadoop/share/hadoop/common/hadoop-common-2.4.1-EBAY-2.jar:/apache/hadoop-2.4.1-2.1.3.0-2-EBAY/share/hadoop/yarn/lib/guava-11.0.2.jar
>> --jars
>> /apache/hadoop/lib/hadoop-lzo-0.6.0.jar,/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-api-jdo-3.2.6.jar,/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-core-3.2.10.jar,/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-rdbms-3.2.9.jar
>> --num-executors 1 --driver-memory 4g --driver-java-options
>> "-XX:MaxPermSize=2G" --executor-memory 2g --executor-cores 1 --queue
>> hdmi-express --class org.apache.spark.examples.SparkPi
>> ./lib/spark-examples*.jar 10
>>
>> *Exception*
>>
>> 15/06/26 14:24:42 INFO client.ConfiguredRMFailoverProxyProvider: Failing
>> over to rm2
>>
>> 15/06/26 14:24:42 WARN ipc.Client: Exception encountered while connecting
>> to the server : java.lang.IllegalArgumentException: Server has invalid
>> Kerberos principal: hadoop/x-y-rm-2.vip.cm@corp.cm.com
>>
>>
>> I remember getting this error when working with Spark 1.2.x; it came down to
>> how I got
>>
>> */apache/hadoop/share/hadoop/common/hadoop-common-2.4.1-EBAY-2.jar*
>>
>> onto the classpath. With 1.3.1, passing it via --driver-class-path gets it
>> running, but with 1.4 it does not work.
>>
>> Please suggest.
>>
>> --
>> Deepak
>>
>>
>
>
> --
> Deepak
>
>


-- 
Deepak


Re: Unable to start Pi (hello world) application on Spark 1.4

2015-06-28 Thread ๏̯͡๏
Any thoughts on this?

On Fri, Jun 26, 2015 at 2:27 PM, ÐΞ€ρ@Ҝ (๏̯͡๏)  wrote:

> It used to work with 1.3.1; however, with 1.4.0 I get the following
> exception:
>
>
> export SPARK_HOME=/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4
> export
> SPARK_JAR=/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4/lib/spark-assembly-1.4.0-hadoop2.4.0.jar
> export HADOOP_CONF_DIR=/apache/hadoop/conf
> cd $SPARK_HOME
> ./bin/spark-submit -v --master yarn-cluster --driver-class-path
> /apache/hadoop/share/hadoop/common/hadoop-common-2.4.1-EBAY-2.jar:/apache/hadoop-2.4.1-2.1.3.0-2-EBAY/share/hadoop/yarn/lib/guava-11.0.2.jar
> --jars
> /apache/hadoop/lib/hadoop-lzo-0.6.0.jar,/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-api-jdo-3.2.6.jar,/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-core-3.2.10.jar,/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-rdbms-3.2.9.jar
> --num-executors 1 --driver-memory 4g --driver-java-options
> "-XX:MaxPermSize=2G" --executor-memory 2g --executor-cores 1 --queue
> hdmi-express --class org.apache.spark.examples.SparkPi
> ./lib/spark-examples*.jar 10
>
> *Exception*
>
> 15/06/26 14:24:42 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to rm2
>
> 15/06/26 14:24:42 WARN ipc.Client: Exception encountered while connecting
> to the server : java.lang.IllegalArgumentException: Server has invalid
> Kerberos principal: hadoop/x-y-rm-2.vip.cm@corp.cm.com
>
>
> I remember getting this error when working with Spark 1.2.x; it came down to
> how I got
>
> */apache/hadoop/share/hadoop/common/hadoop-common-2.4.1-EBAY-2.jar*
>
> onto the classpath. With 1.3.1, passing it via --driver-class-path gets it running,
> but with 1.4 it does not work.
>
> Please suggest.
>
> --
> Deepak
>
>


-- 
Deepak


Unable to start Pi (hello world) application on Spark 1.4

2015-06-26 Thread ๏̯͡๏
It used to work with 1.3.1; however, with 1.4.0 I get the following exception:


export SPARK_HOME=/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4
export
SPARK_JAR=/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4/lib/spark-assembly-1.4.0-hadoop2.4.0.jar
export HADOOP_CONF_DIR=/apache/hadoop/conf
cd $SPARK_HOME
./bin/spark-submit -v --master yarn-cluster --driver-class-path
/apache/hadoop/share/hadoop/common/hadoop-common-2.4.1-EBAY-2.jar:/apache/hadoop-2.4.1-2.1.3.0-2-EBAY/share/hadoop/yarn/lib/guava-11.0.2.jar
--jars
/apache/hadoop/lib/hadoop-lzo-0.6.0.jar,/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-api-jdo-3.2.6.jar,/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-core-3.2.10.jar,/home/dvasthimal/spark1.4/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-rdbms-3.2.9.jar
--num-executors 1 --driver-memory 4g --driver-java-options
"-XX:MaxPermSize=2G" --executor-memory 2g --executor-cores 1 --queue
hdmi-express --class org.apache.spark.examples.SparkPi
./lib/spark-examples*.jar 10

*Exception*

15/06/26 14:24:42 INFO client.ConfiguredRMFailoverProxyProvider: Failing
over to rm2

15/06/26 14:24:42 WARN ipc.Client: Exception encountered while connecting
to the server : java.lang.IllegalArgumentException: Server has invalid
Kerberos principal: hadoop/x-y-rm-2.vip.cm@corp.cm.com


I remember getting this error when working with Spark 1.2.x; it came down to
how I got

*/apache/hadoop/share/hadoop/common/hadoop-common-2.4.1-EBAY-2.jar*

onto the classpath. With 1.3.1, passing it via --driver-class-path gets it running,
but with 1.4 it does not work.

Please suggest.

-- 
Deepak


Re: Spark 1.2.1: ClassNotFoundException when running hello world example in scala 2.11

2015-02-19 Thread Akhil Das
Can you downgrade your Scala dependency to 2.10 and give it a try?

Thanks
Best Regards
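
For anyone hitting the same ClassNotFoundException: the Spark 1.2.1 prebuilt
packages are compiled against Scala 2.10, so the application's Scala version has
to match. A minimal build.sbt sketch of what that downgrade could look like,
assuming an sbt project (the original question does not say which build tool is
in use); the project name and version are placeholders.

// build.sbt -- sketch of a Scala 2.10 setup for Spark 1.2.1
name := "spark-hello-world"   // placeholder

version := "0.1.0"            // placeholder

// Spark 1.2.1 prebuilt binaries are built for Scala 2.10, so the application must use 2.10.x too.
scalaVersion := "2.10.4"

// %% appends the Scala binary version, resolving to spark-core_2.10.
// Add % "provided" if the jar is launched with spark-submit, which supplies Spark itself.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.1"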

On Fri, Feb 20, 2015 at 12:40 AM, Luis Solano  wrote:

> I'm having an issue with Spark 1.2.1 and Scala 2.11. I detailed the
> symptoms in this Stack Overflow question.
>
>
> http://stackoverflow.com/questions/28612837/spark-classnotfoundexception-when-running-hello-world-example-in-scala-2-11
>
> Has anyone experienced anything similar?
>
> Thank you!
>


Spark 1.2.1: ClassNotFoundException when running hello world example in scala 2.11

2015-02-19 Thread Luis Solano
I'm having an issue with Spark 1.2.1 and Scala 2.11. I detailed the
symptoms in this Stack Overflow question.

http://stackoverflow.com/questions/28612837/spark-classnotfoundexception-when-running-hello-world-example-in-scala-2-11

Has anyone experienced anything similar?

Thank you!


Re: hello

2014-12-18 Thread Harihar Nahak
You mean you want to join the Spark User List? It's pretty easy. Check the
first email; it has all the instructions.

On 18 December 2014 at 21:56, csjtx1021 [via Apache Spark User List] <
ml-node+s1001560n20759...@n3.nabble.com> wrote:
>
> i want to join you
>


-- 
Regards,
Harihar Nahak
BigData Developer
Wynyard
Email:hna...@wynyardgroup.com | Extn: 8019




-
--Harihar

Re: Fails to run simple Spark (Hello World) scala program

2014-09-23 Thread Moshe Beeri
>>>>>   val logData = sc.textFile(logFile, 2).cache()
>>>>>   val numAs = logData.filter(line => line.contains("a")).count()
>>>>>   val numBs = logData.filter(line => line.contains("b")).count()
>>>>>   println("Lines with a: %s, Lines with b: %s".format(numAs,
>>>>> numBs))
>>>>>
>>>>> } catch {
>>>>>   case e => {
>>>>> println(e.getCause())
>>>>> println("stack:")
>>>>> e.printStackTrace()
>>>>>   }
>>>>> }
>>>>>   }
>>>>> }
>>>>> Runs with Scala 2.10.4
>>>>> The problem is this [vague] exception:
>>>>>
>>>>> at com.example.scamel.Nizoz.main(Nizoz.scala)
>>>>> Caused by: java.lang.RuntimeException:
>>>>> java.lang.reflect.InvocationTargetException
>>>>> at
>>>>>
>>>>> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
>>>>> at org.apache.hadoop.security.Groups.(Groups.java:64)
>>>>> at
>>>>>
>>>>> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
>>>>> ...
>>>>> Caused by: java.lang.reflect.InvocationTargetException
>>>>> at
>>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>>> at
>>>>>
>>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>>>> at
>>>>>
>>>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>>>> ...
>>>>> ... 10 more
>>>>> Caused by: java.lang.UnsatisfiedLinkError:
>>>>> org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
>>>>> at
>>>>> org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native
>>>>> Method)
>>>>> at
>>>>>
>>>>> org.apache.hadoop.security.JniBasedUnixGroupsMapping.(JniBasedUnixGroupsMapping.java:49)
>>>>>
>>>>> I have Hadoop 1.2.1 running on Ubuntu 14.04, and the Scala console runs as
>>>>> expected.
>>>>>
>>>>> What am I doing wrong?
>>>>> Any idea will be welcome
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> View this message in context:
>>>>> http://apache-spark-user-list.1001560.n3.nabble.com/Fails-to-run-simple-Spark-Hello-World-scala-program-tp14718.html
>>>>> Sent from the Apache Spark User List mailing list archive at
>>>>> Nabble.com.
>>>>>
>>>>> -
>>>>> To unsubscribe, e-mail: [hidden email]
>>>>> <http://user/SendEmail.jtp?type=node&node=14724&i=2>
>>>>> For additional commands, e-mail: [hidden email]
>>>>> <http://user/SendEmail.jtp?type=node&node=14724&i=3>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Manu Suryavansh
>>>>
>>>
>>>
>>
>>
>> --
>> View this message in context: Re: Fails to run simple Spark (Hello
>> World) scala program
>> <http://apache-spark-user-list.1001560.n3.nabble.com/Fails-to-run-simple-Spark-Hello-World-scala-program-tp14718p14731.html>
>>
>> Sent from the Apache Spark User List mailing list archive
>> <http://apache-spark-user-list.1001560.n3.nabble.com/> at Nabble.com.
>>
>
>
>





Re: Fails to run simple Spark (Hello World) scala program

2014-09-20 Thread Moshe Beeri
Hi Sean,

Thanks a lot for the answer. I loved your excellent book *Mahout in Action*
<http://www.amazon.com/Mahout-Action-Sean-Owen/dp/1935182684>; I hope you'll
keep on writing more books in the field of Big Data.
The issue was a redundant Hadoop library, but now I am facing another
issue (see the previous post in this thread):
java.lang.ClassNotFoundException: com.example.scamel.Nizoz$$anonfun$3

But the class com.example.scamel.Nizoz (in fact a Scala object) is the one
under debugging.

  def main(args: Array[String]) {
println(scala.tools.nsc.Properties.versionString)
try {
  //Nizoz.connect
  val logFile =
"/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md" // Should
be some file on your system
  val conf = new SparkConf().setAppName("spark
town").setMaster("spark://nash:7077"); //spark://master:7077
  val sc = new SparkContext(conf)
  val logData = sc.textFile(logFile, 2).cache()
  val numAs = logData.filter(line => line.contains("a")).count() // <- this is where the exception is thrown

Do you have any idea what's wrong?
Thanks,
Moshe Beeri.




Many thanks,
Moshe Beeri.
054-3133943
Email  | linkedin <http://www.linkedin.com/in/mobee>



On Sat, Sep 20, 2014 at 12:02 PM, sowen [via Apache Spark User List] <
ml-node+s1001560n14724...@n3.nabble.com> wrote:

> Spark does not require Hadoop 2 or YARN. This looks like a problem with
> the Hadoop installation, as it is not finding the native libraries it needs to
> make some security-related system call. Check the installation.
> On Sep 20, 2014 9:13 AM, "Manu Suryavansh" <[hidden email]
> <http://user/SendEmail.jtp?type=node&node=14724&i=0>> wrote:
>
>> Hi Moshe,
>>
>> Spark needs a Hadoop 2.x/YARN cluster. Otherwise, you can run it without
>> Hadoop in standalone mode.
>>
>> Manu
>>
>>
>>
>> On Sat, Sep 20, 2014 at 12:55 AM, Moshe Beeri <[hidden email]
>> <http://user/SendEmail.jtp?type=node&node=14724&i=1>> wrote:
>>
>>> object Nizoz {
>>>
>>>   def connect(): Unit = {
>>> val conf = new SparkConf().setAppName("nizoz").setMaster("master");
>>> val spark = new SparkContext(conf)
>>> val lines =
>>>
>>> spark.textFile("file:///home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md")
>>> val lineLengths = lines.map(s => s.length)
>>> val totalLength = lineLengths.reduce((a, b) => a + b)
>>> println("totalLength=" + totalLength)
>>>
>>>   }
>>>
>>>   def main(args: Array[String]) {
>>> println(scala.tools.nsc.Properties.versionString)
>>> try {
>>>   //Nizoz.connect
>>>   val logFile =
>>> "/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md" //
>>> Should
>>> be some file on your system
>>>   val conf = new SparkConf().setAppName("Simple
>>> Application").setMaster("spark://master:7077")
>>>   val sc = new SparkContext(conf)
>>>   val logData = sc.textFile(logFile, 2).cache()
>>>   val numAs = logData.filter(line => line.contains("a")).count()
>>>   val numBs = logData.filter(line => line.contains("b")).count()
>>>   println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
>>>
>>> } catch {
>>>   case e => {
>>> println(e.getCause())
>>> println("stack:")
>>> e.printStackTrace()
>>>   }
>>> }
>>>   }
>>> }
>>> Runs with Scala 2.10.4
>>> The problem is this [vague] exception:
>>>
>>> at com.example.scamel.Nizoz.main(Nizoz.scala)
>>> Caused by: java.lang.RuntimeException:
>>> java.lang.reflect.InvocationTargetException
>>> at
>>>
>>> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
>>> at org.apache.hadoop.security.Groups.(Groups.java:64)
>>> at
>>>
>>> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
>>> ...
>>> Caused by: java.lang.reflect.InvocationTargetException
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>> Method)
>>> at
>>>
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>> at
>>>

Re: Fails to run simple Spark (Hello World) scala program

2014-09-20 Thread Sean Owen
Spark does not require Hadoop 2 or YARN. This looks like a problem with the
Hadoop installation, as it is not finding the native libraries it needs to make
some security-related system call. Check the installation.
On Sep 20, 2014 9:13 AM, "Manu Suryavansh" 
wrote:

> Hi Moshe,
>
> Spark needs a Hadoop 2.x/YARN cluster. Otherwise, you can run it without
> Hadoop in standalone mode.
>
> Manu
>
>
>
> On Sat, Sep 20, 2014 at 12:55 AM, Moshe Beeri 
> wrote:
>
>> object Nizoz {
>>
>>   def connect(): Unit = {
>> val conf = new SparkConf().setAppName("nizoz").setMaster("master");
>> val spark = new SparkContext(conf)
>> val lines =
>>
>> spark.textFile("file:///home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md")
>> val lineLengths = lines.map(s => s.length)
>> val totalLength = lineLengths.reduce((a, b) => a + b)
>> println("totalLength=" + totalLength)
>>
>>   }
>>
>>   def main(args: Array[String]) {
>> println(scala.tools.nsc.Properties.versionString)
>> try {
>>   //Nizoz.connect
>>   val logFile =
>> "/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md" // Should
>> be some file on your system
>>   val conf = new SparkConf().setAppName("Simple
>> Application").setMaster("spark://master:7077")
>>   val sc = new SparkContext(conf)
>>   val logData = sc.textFile(logFile, 2).cache()
>>   val numAs = logData.filter(line => line.contains("a")).count()
>>   val numBs = logData.filter(line => line.contains("b")).count()
>>   println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
>>
>> } catch {
>>   case e => {
>> println(e.getCause())
>> println("stack:")
>> e.printStackTrace()
>>   }
>> }
>>   }
>> }
>> Runs with Scala 2.10.4
>> The problem is this [vague] exception:
>>
>> at com.example.scamel.Nizoz.main(Nizoz.scala)
>> Caused by: java.lang.RuntimeException:
>> java.lang.reflect.InvocationTargetException
>> at
>>
>> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
>> at org.apache.hadoop.security.Groups.(Groups.java:64)
>> at
>>
>> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
>> ...
>> Caused by: java.lang.reflect.InvocationTargetException
>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>> at
>>
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>> at
>>
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>> ...
>> ... 10 more
>> Caused by: java.lang.UnsatisfiedLinkError:
>> org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
>> at
>> org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native
>> Method)
>> at
>>
>> org.apache.hadoop.security.JniBasedUnixGroupsMapping.(JniBasedUnixGroupsMapping.java:49)
>>
>> I have Hadoop 1.2.1 running on Ubuntu 14.04, and the Scala console runs as
>> expected.
>>
>> What am I doing wrong?
>> Any idea will be welcome
>>
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/Fails-to-run-simple-Spark-Hello-World-scala-program-tp14718.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>>
>
>
> --
> Manu Suryavansh
>


Re: Fails to run simple Spark (Hello World) scala program

2014-09-20 Thread Moshe Beeri
Hi Manu/All,

Now I am facing another strange error (not unusual for a new, complex
framework).
I ran ./sbin/start-all.sh (my computer is named after John Nash) and got the
"Connecting to master spark://nash:7077" message. Running the job from my
local machine yields
java.lang.ClassNotFoundException: com.example.scamel.Nizoz$$anonfun$3

But the class com.example.scamel.Nizoz (in fact a Scala object) is the one
under debugging.

  def main(args: Array[String]) {
println(scala.tools.nsc.Properties.versionString)
try {
  //Nizoz.connect
  val logFile =
"/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md" // Should
be some file on your system
  val conf = new SparkConf().setAppName("spark
town").setMaster("spark://nash:7077"); //spark://master:7077
  val sc = new SparkContext(conf)
  val logData = sc.textFile(logFile, 2).cache()
  val numAs = logData.filter(line => line.contains("a")).count() // <- this is where the exception is thrown

Any help will be welcome.
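
One thing worth checking, sketched below under the assumption that the job is
launched straight from the IDE rather than through spark-submit: when the master
is a standalone cluster such as spark://nash:7077, the application jar (which is
where anonymous-function classes like Nizoz$$anonfun$3 live) has to be shipped to
the executors, for example with SparkConf.setJars or spark-submit --jars. The jar
path below is only a placeholder for whatever the project's own build produces.

import org.apache.spark.{SparkConf, SparkContext}

object NizozWithJars {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("nizoz-debug")
      .setMaster("spark://nash:7077")
      // Ship the application jar so executors can load com.example.scamel.Nizoz$$anonfun$*;
      // the path is a placeholder for the jar built from this project.
      .setJars(Seq("target/scala-2.10/nizoz_2.10-0.1.jar"))
    val sc = new SparkContext(conf)
    val logFile = "/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md"
    val logData = sc.textFile(logFile, 2).cache()
    println("Lines with a: " + logData.filter(_.contains("a")).count())
    sc.stop()
  }
}

Packaging the project into a jar and launching it with spark-submit achieves the
same thing without code changes, since spark-submit distributes the application
jar automatically.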





Many thanks,
Moshe Beeri.
054-3133943
Email  | linkedin <http://www.linkedin.com/in/mobee>



On Sat, Sep 20, 2014 at 11:22 AM, Moshe Beeri  wrote:

> Thanks, Manu,
>
> I just saw that I had included the hadoop-client 2.x dependency in my pom.xml;
> removing it solved the problem.
>
> Thanks for your help.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Fails-to-run-simple-Spark-Hello-World-scala-program-tp14718p14721.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>


Re: Fails to run simple Spark (Hello World) scala program

2014-09-20 Thread Moshe Beeri
Thanks, Manu,

I just saw that I had included the hadoop-client 2.x dependency in my pom.xml;
removing it solved the problem.

Thanks for your help.
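
A small diagnostic sketch that can catch this kind of mismatch early; it assumes
only that hadoop-common is on the classpath (which it is for any Spark build) and
prints which Hadoop version the JVM actually loads and from which jar, so a stray
hadoop-client 2.x dependency like the one above shows up immediately. The object
name is just an illustration.

import org.apache.hadoop.util.VersionInfo

object HadoopVersionCheck {
  def main(args: Array[String]): Unit = {
    // Which Hadoop version is actually on the classpath?
    println("Hadoop version: " + VersionInfo.getVersion)
    // And which jar did it come from? A 2.x artifact here would reveal the redundant dependency.
    val source = classOf[VersionInfo].getProtectionDomain.getCodeSource
    println("Loaded from: " + Option(source).map(_.getLocation.toString).getOrElse("unknown"))
  }
}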







Re: Fails to run simple Spark (Hello World) scala program

2014-09-20 Thread Manu Suryavansh
Hi Moshe,

Spark needs a Hadoop 2.x/YARN cluster. Otherwise, you can run it without
Hadoop in standalone mode.

Manu
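
As a side note on the second option, a minimal sketch of running the same
hello-world job entirely locally, with no Hadoop installation and no standalone
master; local[*] keeps the driver and executors in a single JVM, and the README
path is simply the one used elsewhere in this thread.

import org.apache.spark.{SparkConf, SparkContext}

object LocalHelloWorld {
  def main(args: Array[String]): Unit = {
    // local[*] runs everything in one JVM using all local cores;
    // no Hadoop cluster and no spark://... master is required.
    val conf = new SparkConf().setAppName("local-hello").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val logFile = "/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md"
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(_.contains("a")).count()
    val numBs = logData.filter(_.contains("b")).count()
    println("Lines with a: " + numAs + ", Lines with b: " + numBs)
    sc.stop()
  }
}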



On Sat, Sep 20, 2014 at 12:55 AM, Moshe Beeri  wrote:

> object Nizoz {
>
>   def connect(): Unit = {
> val conf = new SparkConf().setAppName("nizoz").setMaster("master");
> val spark = new SparkContext(conf)
> val lines =
>
> spark.textFile("file:///home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md")
> val lineLengths = lines.map(s => s.length)
> val totalLength = lineLengths.reduce((a, b) => a + b)
> println("totalLength=" + totalLength)
>
>   }
>
>   def main(args: Array[String]) {
> println(scala.tools.nsc.Properties.versionString)
> try {
>   //Nizoz.connect
>   val logFile =
> "/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md" // Should
> be some file on your system
>   val conf = new SparkConf().setAppName("Simple
> Application").setMaster("spark://master:7077")
>   val sc = new SparkContext(conf)
>   val logData = sc.textFile(logFile, 2).cache()
>   val numAs = logData.filter(line => line.contains("a")).count()
>   val numBs = logData.filter(line => line.contains("b")).count()
>   println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
>
> } catch {
>   case e => {
> println(e.getCause())
> println("stack:")
> e.printStackTrace()
>   }
> }
>   }
> }
> Runs with Scala 2.10.4
> The problem is this [vague] exception:
>
> at com.example.scamel.Nizoz.main(Nizoz.scala)
> Caused by: java.lang.RuntimeException:
> java.lang.reflect.InvocationTargetException
> at
>
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
> at org.apache.hadoop.security.Groups.(Groups.java:64)
> at
>
> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
> ...
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> at
>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at
>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> ...
> ... 10 more
> Caused by: java.lang.UnsatisfiedLinkError:
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
> at
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native
> Method)
> at
>
> org.apache.hadoop.security.JniBasedUnixGroupsMapping.(JniBasedUnixGroupsMapping.java:49)
>
> I have Hadoop 1.2.1 running on Ubuntu 14.04, and the Scala console runs as
> expected.
>
> What am I doing wrong?
> Any idea will be welcome
>
>
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Fails-to-run-simple-Spark-Hello-World-scala-program-tp14718.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>


-- 
Manu Suryavansh


Fails to run simple Spark (Hello World) scala program

2014-09-20 Thread Moshe Beeri
object Nizoz {

  def connect(): Unit = {
val conf = new SparkConf().setAppName("nizoz").setMaster("master");
val spark = new SparkContext(conf)
val lines =
spark.textFile("file:///home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md")
val lineLengths = lines.map(s => s.length)
val totalLength = lineLengths.reduce((a, b) => a + b)
println("totalLength=" + totalLength)

  }

  def main(args: Array[String]) {
println(scala.tools.nsc.Properties.versionString)
try {
  //Nizoz.connect
  val logFile =
"/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md" // Should
be some file on your system
  val conf = new SparkConf().setAppName("Simple
Application").setMaster("spark://master:7077")
  val sc = new SparkContext(conf)
  val logData = sc.textFile(logFile, 2).cache()
  val numAs = logData.filter(line => line.contains("a")).count()
  val numBs = logData.filter(line => line.contains("b")).count()
  println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))

} catch {
  case e => {
println(e.getCause())
println("stack:")
e.printStackTrace()
  }
}
  }
}
Runs with Scala 2.10.4
The problem is this [vague] exception:

at com.example.scamel.Nizoz.main(Nizoz.scala)
Caused by: java.lang.RuntimeException:
java.lang.reflect.InvocationTargetException
at
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
at org.apache.hadoop.security.Groups.(Groups.java:64)
at
org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
...
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
...
... 10 more
Caused by: java.lang.UnsatisfiedLinkError:
org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
at 
org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native
Method)
at
org.apache.hadoop.security.JniBasedUnixGroupsMapping.(JniBasedUnixGroupsMapping.java:49)

I have Hadoop 1.2.1 running on Ubuntu 14.04, and the Scala console runs as
expected.

What am I doing wrong?
Any idea will be welcome.




