Fwd: unsubscribe

2019-01-24 Thread Anahita Talebi
unsubscribe

Re: Upgrade the Scala code to the latest Spark version

2017-03-29 Thread Anahita Talebi
.0.0. You can use mapPartitionsWithIndex instead. On Tue, Mar 28, 2017 at 3:52 PM, Anahita Talebi <anahita.t.am...@gmail.com> wrote: > Thanks. I tried this one, as well. Unfortunately I still get the same error. On Wednesday
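For context: the removed method (presumably mapPartitionsWithSplit, which was dropped in Spark 2.0.0) and mapPartitionsWithIndex have the same shape, so the migration is mechanical. A minimal sketch (the object name and local master are illustrative, not from the thread):

    import org.apache.spark.sql.SparkSession

    object PartitionIndexExample {
      def main(args: Array[String]): Unit = {
        // Local master for illustration only; drop it when submitting to a cluster.
        val spark = SparkSession.builder()
          .appName("PartitionIndexExample")
          .master("local[*]")
          .getOrCreate()

        val rdd = spark.sparkContext.parallelize(1 to 10, numSlices = 3)

        // The partition index arrives as the first argument of the closure.
        val tagged = rdd.mapPartitionsWithIndex { (idx, iter) =>
          iter.map(x => (idx, x))
        }

        tagged.collect().foreach(println)
        spark.stop()
      }
    }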

Re: Upgrade the Scala code to the latest Spark version

2017-03-28 Thread Anahita Talebi
Thanks. I tried this one, as well. Unfortunately I still get the same error. On Wednesday, March 29, 2017, Marco Mistroni <mmistr...@gmail.com> wrote: > 1.7.5. On 28 Mar 2017 10:10 pm, "Anahita Talebi" <anahita.t.am...@gmail.com>

Re: Upgrade the Scala code to the latest Spark version

2017-03-28 Thread Anahita Talebi
, where I am using Spark 2.1, Scala 2.11 and scalatest (I upgraded to 3.0.0), and it works fine in my projects, though I don't have any of the following libraries that you mention: breeze, netlib-all, scopt. HTH. On Tue, Mar 28, 2017 at 9:10 PM,

Re: Upgrade the Scala code to the latest Spark version

2017-03-28 Thread Anahita Talebi
sbt is closest to yours, where I am using Spark 2.1, Scala 2.11 and scalatest (I upgraded to 3.0.0), and it works fine in my projects, though I don't have any of the following libraries that you mention: breeze, netlib-all, scopt. HTH. On Tue, Mar
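A minimal build.sbt matching the combination reported working above (the artifact list and the "provided" scoping are assumptions, not the poster's actual file):

    name := "my-spark-app"

    scalaVersion := "2.11.8"

    libraryDependencies ++= Seq(
      // Spark itself is supplied by the cluster at runtime, hence "provided".
      "org.apache.spark" %% "spark-core" % "2.1.0" % "provided",
      "org.apache.spark" %% "spark-sql"  % "2.1.0" % "provided",
      "org.scalatest"    %% "scalatest"  % "3.0.0" % "test"
    )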

Re: Upgrade the Scala code to the latest Spark version

2017-03-28 Thread Anahita Talebi
e "log4j.properties" => MergeStrategy.discard case m if m.toLowerCase.endsWith("manifest.mf") => MergeStrategy.discard case m if m.toLowerCase.matches("meta-inf.*\\.sf$") => MergeStrategy.discard case _ => MergeS

Re: Upgrade the Scala code to the latest Spark version

2017-03-28 Thread Anahita Talebi
if it works, then amend the Scala version. HTH, Marco. On Tue, Mar 28, 2017 at 5:20 PM, Anahita Talebi <anahita.t.am...@gmail.com> wrote: > Hello, thank you all for your informative answers. I actually changed the Scala versi

Re: Upgrade the Scala code to the latest Spark version

2017-03-28 Thread Anahita Talebi
% > > "4.6.0-HBase-1.0" > > libraryDependencies += "org.apache.hbase" % "hbase" % "1.2.3" > > libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.2.3" > > libraryDependencies += "org.apa

Upgrade the Scala code to the latest Spark version

2017-03-27 Thread Anahita Talebi
Hi friends, I have code written in Scala, built with Scala 2.10.4 and Spark 1.5.2. I would like to upgrade it to the latest Spark version, 2.1.0. Here is the build.sbt: import AssemblyKeys._ assemblySettings name :=
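The "import AssemblyKeys._ / assemblySettings" lines identify the old pre-0.12 sbt-assembly style; part of an upgrade to Spark 2.1.0 is usually moving to a newer plugin, which drops both lines from build.sbt. A sketch (the plugin version is illustrative):

    // project/plugins.sbt — with sbt-assembly 0.14.x, no AssemblyKeys import
    // or assemblySettings call is needed in build.sbt.
    addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.5")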

Re: How to run Spark on PyCharm

2017-03-03 Thread Anahita Talebi
with PyCharm, but it failed. Thanks, Anahita. On Fri, Mar 3, 2017 at 3:48 PM, Pushkar.Gujar <pushkarvgu...@gmail.com> wrote: > Jupyter notebook/IPython can be connected to Apache Spark. Thank you, Pushkar Gujar. On Fri, Mar 3, 2017 at 9:43 AM, Anahita Tal

How to run Spark on PyCharm

2017-03-03 Thread Anahita Talebi
Hi everyone, I am trying to run Spark code in PyCharm. I tried to give the path of Spark as an environment variable in the PyCharm run configuration. Unfortunately, I get an error. Does anyone know how I can run Spark code in PyCharm? It doesn't necessarily have to be PyCharm; if you know any
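For reference, the usual fix for this setup is two environment variables in the PyCharm run configuration, pointing at the Spark installation (paths are illustrative assumptions; the py4j zip version must match your Spark distribution):

    SPARK_HOME=/path/to/spark
    PYTHONPATH=/path/to/spark/python:/path/to/spark/python/lib/py4j-0.10.4-src.zip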

Re: No main class set in JAR; please specify one with --class and java.lang.ClassNotFoundException

2017-02-25 Thread Anahita Talebi
Sincerely yours, Raymond. On Sat, Feb 25, 2017 at 4:48 PM, Anahita Talebi <anahita.t.am...@gmail.com> wrote: > Hi, I think if you remove --j

Re: No main class set in JAR; please specify one with --class and java.lang.ClassNotFoundException

2017-02-25 Thread Anahita Talebi
Hi, I think if you remove --jars, it will work. Like: spark-submit /usr/hdp/2.5.0.0-1245/spark/lib/spark-assembly-1.6.2.2.5.0.0-1245-hadoop2.7.3.2.5.0.0-1245.jar I had the same problem before and solved it by removing --jars. Cheers, Anahita On Saturday, February 25, 2017, Raymond Xie
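The reason removing --jars helps: spark-submit takes the application jar as a positional argument, while --jars is only for additional dependency jars. A sketch of the two forms (paths and class name are illustrative):

    # Application jar passed positionally; --class names the entry point.
    spark-submit --class com.example.Main /path/to/app.jar

    # --jars adds extra dependencies alongside, not instead of, the app jar.
    spark-submit --class com.example.Main --jars /path/to/dep.jar /path/to/app.jar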

Submit Spark code on Google Cloud

2017-02-07 Thread Anahita Talebi
Hello friends, I am trying to run Spark code on multiple machines. To this aim, I submit the code as a job on the Google Cloud Platform: https://cloud.google.com/dataproc/docs/guides/submit-job I have created a cluster with 6 nodes. Does anyone know how I can tell which nodes are
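One way to see which machines actually host executors is to ask the SparkContext from inside the job; a sketch, assuming a SparkSession named spark (the Executors tab of the Spark UI shows the same information):

    // getExecutorMemoryStatus is keyed by "host:port" and includes the driver.
    val executorHosts = spark.sparkContext
      .getExecutorMemoryStatus
      .keys
      .map(_.split(":")(0))
      .toSet

    executorHosts.foreach(println)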

Re: Running Spark code on multiple machines using Google Cloud Platform

2017-02-02 Thread Anahita Talebi
Thanks for your answer. Do you mean Amazon EMR? On Thu, Feb 2, 2017 at 2:30 PM, Marco Mistroni <mmistr...@gmail.com> wrote: > You can use EMR if you want to run on a cluster. Kind regards. On 2 Feb 2017 12:30 pm, "Anahita Talebi" <anahita.t.am...@gmail.com> wrote:

Running Spark code on multiple machines using Google Cloud Platform

2017-02-02 Thread Anahita Talebi
Dear all, I am trying to run Spark code on multiple machines using the submit-job feature of the Google Cloud Platform. As inputs to my code, I have training and testing datasets. When I use a small training dataset (around 10 KB), the code runs successfully on Google Cloud, while when I have a
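A failure that appears only when the input grows usually points at executor or driver memory; on Dataproc these can be raised per job via --properties. The values below are illustrative guesses, not a diagnosis from the thread:

    gcloud dataproc jobs submit spark ... \
      --properties spark.executor.memory=4g,spark.driver.memory=4g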

Re: Running Spark code using submit job in Google Cloud Platform

2017-01-13 Thread Anahita Talebi
Hello, thanks a lot Dinko. Yes, now it is working perfectly. Cheers, Anahita. On Fri, Jan 13, 2017 at 2:19 PM, Dinko Srkoč <dinko.sr...@gmail.com> wrote: > On 13 January 2017 at 13:55, Anahita Talebi <anahita.t.am...@gmail.com> wrote: > Hi, Th

Re: Running Spark code using submit job in Google Cloud Platform

2017-01-13 Thread Anahita Talebi
ave tested this code on the Spark version on your local machine, which may be different from what's in Google Cloud Storage. You need to select the appropriate Spark version when you submit your job. On 12 January 2017 at 15:51, Anahita Talebi <anahita.t.am...@gmail.com>

Running Spark code using submit job in Google Cloud Platform

2017-01-12 Thread Anahita Talebi
Dear all, I am trying to run a .jar file as a job using the submit-job feature in the Google Cloud console: https://cloud.google.com/dataproc/docs/guides/submit-job I actually ran the Spark code on my local computer to generate the .jar file. Then, in the Arguments field, I give the values of the arguments that I
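Each line of that Arguments field arrives as one element of args in the jar's main method, in order. A minimal sketch (the object and argument names are illustrative):

    object Main {
      def main(args: Array[String]): Unit = {
        val trainingPath = args(0) // first line of the Arguments field
        val testingPath  = args(1) // second line
        println(s"training=$trainingPath testing=$testingPath")
      }
    }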

Fwd: Entering the variables in the Argument part of the Submit job section to run Spark code on Google Cloud

2017-01-09 Thread Anahita Talebi
Dear friends, I am trying to run Spark code on Google Cloud using submit job: https://cloud.google.com/dataproc/docs/tutorials/spark-scala My question is about the "Arguments" part. In my Spark code, there are some variables whose values are defined in a shell file (.sh), as
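The command-line equivalent: values that a wrapper .sh script would have passed can be appended after the "--" separator, which Dataproc forwards verbatim to main(args), exactly like the lines of the console's Arguments field. The cluster name, class, and paths below are illustrative assumptions:

    gcloud dataproc jobs submit spark \
      --cluster my-cluster \
      --class com.example.Main \
      --jars gs://my-bucket/app.jar \
      -- gs://my-bucket/train.csv gs://my-bucket/test.csv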