You can
> use mapPartitionsWithIndex instead.
>
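> A minimal sketch of that alternative (the RDD name "rdd" and element type
> are hypothetical here; mapPartitionsWithIndex itself is a standard RDD
> method):
>
>   // Tag each element with the index of the partition it lives in.
>   val tagged = rdd.mapPartitionsWithIndex { (partitionIndex, iter) =>
>     iter.map(elem => (partitionIndex, elem))
>   }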
> On Tue, Mar 28, 2017 at 3:52 PM, Anahita Talebi <anahita.t.am...@gmail.com
> > wrote:
>
>> Thanks.
>> I tried this one, as well. Unfortunately I still get the same error.
>>
>>
>> On Wednesday
Thanks.
I tried this one, as well. Unfortunately I still get the same error.
On Wednesday, March 29, 2017, Marco Mistroni <mmistr...@gmail.com> wrote:
> 1.7.5
>
> On 28 Mar 2017 10:10 pm, "Anahita Talebi" <anahita.t.am...@gmail.com> wrote:
sbt is closest to yours, where I am using
> Spark 2.1, Scala 2.11 and scalatest (I upgraded to 3.0.0), and it works fine
> in my projects, though I don't have any of the following libraries that you
> mention:
> - breeze
> - netlib.all
> - scopt
>
> hth
>
> On Tue, Mar
e "log4j.properties" =>
MergeStrategy.discard
case m if m.toLowerCase.endsWith("manifest.mf") =>
MergeStrategy.discard
case m if m.toLowerCase.matches("meta-inf.*\\.sf$") =>
MergeStrategy.discard
case _ => MergeS
if it works
> Then amend the scala version
>
> hth
> marco
>
> On Tue, Mar 28, 2017 at 5:20 PM, Anahita Talebi <anahita.t.am...@gmail.com
> > wrote:
>
>> Hello,
>>
>> Thank you all for your informative answers.
>> I actually changed the scala versi
%
> > "4.6.0-HBase-1.0"
> > libraryDependencies += "org.apache.hbase" % "hbase" % "1.2.3"
> > libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.2.3"
> > libraryDependencies += "org.apa
Hi friends,
I have code that is written in Scala. Scala version 2.10.4 and Spark
version 1.5.2 are used to run it.
I would like to upgrade the code to the latest version of Spark,
namely 2.1.0.
Here is the build.sbt:
import AssemblyKeys._
assemblySettings
name :=
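(A minimal sketch of what an upgraded build.sbt could look like; the project
name and module list below are assumptions, not taken from the actual build:)

  name := "my-spark-app"
  version := "1.0"
  scalaVersion := "2.11.8" // Spark 2.1.0 is built against Scala 2.11
  libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-core"  % "2.1.0" % "provided",
    "org.apache.spark" %% "spark-mllib" % "2.1.0" % "provided"
  )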
with PyCharm, but I failed.
Thanks,
Anahita
On Fri, Mar 3, 2017 at 3:48 PM, Pushkar.Gujar <pushkarvgu...@gmail.com>
wrote:
> Jupyter Notebook/IPython can be connected to Apache Spark.
>
>
> Thank you,
> *Pushkar Gujar*
>
>
> On Fri, Mar 3, 2017 at 9:43 AM, Anahita Tal
Hi everyone,
I am trying to run a Spark code on PyCharm. I tried to give the path of
Spark as an environment variable in the PyCharm run configuration.
Unfortunately, I get an error. Does anyone know how I can run Spark
code on PyCharm?
It doesn't necessarily have to be PyCharm; if you know any
> Sincerely yours,
>
> Raymond
>
> On Sat, Feb 25, 2017 at 4:48 PM, Anahita Talebi <anahita.t.am...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I think if you remove --j
Hi,
I think if you remove --jars, it will work. Like:
spark-submit /usr/hdp/2.5.0.0-1245/spark/lib/spark-assembly-1.6.2.2.5.0.0-1245-hadoop2.7.3.2.5.0.0-1245.jar
I had the same problem before and solved it by removing --jars.
Cheers,
Anahita
On Saturday, February 25, 2017, Raymond Xie
Hello Friends,
I am trying to run a Spark code on multiple machines. To this end, I submit
a Spark job through the submit-job page on Google Cloud Platform:
https://cloud.google.com/dataproc/docs/guides/submit-job
I have created a cluster with 6 nodes. Does anyone know how I can tell
which nodes are
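(As a minimal sketch, one way to see which hosts actually run tasks,
assuming a SparkContext named "sc" is in scope:)

  // Each partition reports the hostname of the executor it ran on.
  val hosts = sc.parallelize(1 to 1000, 12)
    .mapPartitions(_ => Iterator(java.net.InetAddress.getLocalHost.getHostName))
    .distinct()
    .collect()
  hosts.foreach(println)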
Thanks for your answer.
do you mean Amazon EMR?
On Thu, Feb 2, 2017 at 2:30 PM, Marco Mistroni <mmistr...@gmail.com> wrote:
> You can use EMR if you want to run on a cluster.
> Kr
>
> On 2 Feb 2017 12:30 pm, "Anahita Talebi" <anahita.t.am...@gmail.com>
> wr
Dear all,
I am trying to run a Spark code on multiple machines using submit job in
Google Cloud Platform.
As inputs to my code, I have a training and a testing dataset.
When I use a small training dataset (around 10 kB), the code runs
successfully on Google Cloud, while when I have a
Hello,
Thanks a lot Dinko.
Yes, now it is working perfectly.
Cheers,
Anahita
On Fri, Jan 13, 2017 at 2:19 PM, Dinko Srkoč <dinko.sr...@gmail.com> wrote:
> On 13 January 2017 at 13:55, Anahita Talebi <anahita.t.am...@gmail.com>
> wrote:
> > Hi,
> >
> > Th
ave tested this code on a Spark version on your local machine, the
> version of which may be different to what's in Google Cloud Storage.
> You need to select the appropriate Spark version when you submit your job.
>
> On 12 January 2017 at 15:51, Anahita Talebi <anahita.t.am...@gmail.com>
Dear all,
I am trying to run a .jar file as a job using submit job in google cloud
console.
https://cloud.google.com/dataproc/docs/guides/submit-job
I actually ran the Spark code on my local computer to generate a .jar file.
Then in the Arguments field, I give the values of the arguments that I
Dear friends,
I am trying to run a Spark code on Google Cloud using submit job:
https://cloud.google.com/dataproc/docs/tutorials/spark-scala
My question is about the "Arguments" part.
In my Spark code, there are some variables whose values are defined in a
shell file (.sh), as
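(A minimal sketch of how values typed into the Arguments field reach the
job, assuming plain positional arguments; the object and variable names are
hypothetical:)

  object TrainJob {
    def main(args: Array[String]): Unit = {
      // Arguments arrive positionally, in the order given in the console.
      val trainPath = args(0)          // e.g. a gs:// input path (hypothetical)
      val lambda    = args(1).toDouble // a numeric parameter (hypothetical)
      println(s"training data: $trainPath, lambda: $lambda")
    }
  }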