From: Lukasz Tracewski <lukasz.tracew...@outlook.com>
Sent: Saturday, 22 July 2017 00:18
To: user@spark.apache.org
Subject: [Spark] Working with JavaPairRDD from Scala
Hi,
I would like to call a method on JavaPairRDD from Scala and I am not sure how
to write a function for the "map". I am using a third-party library that uses
Spark for geospatial computations, and it happens that it returns some results
through the Java API. I'd welcome a hint on how to do this.
Hi,
I am finding it difficult to understand the following problem:
I counted the number of records before and after applying the mapValues
transformation for a JavaPairRDD. As expected, the number of records was the
same before and after.
Now, I counted the number of distinct keys before and after applying the
mapValues transformation for the same JavaPairRDD. However, I get a lower
count after applying the transformation.
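For what it's worth, a value-only transformation cannot change the key set by itself. A Spark-free sketch of that invariant, with a plain java.util.Map standing in for the JavaPairRDD (all names here are hypothetical, not Spark API):

```java
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class MapValuesSketch {
    // Stand-in for JavaPairRDD.mapValues: rewrites values, touches no key.
    static <K, V, U> Map<K, U> mapValues(Map<K, V> pairs, Function<V, U> f) {
        return pairs.entrySet().stream()
                .collect(Collectors.toMap(Map.Entry::getKey, e -> f.apply(e.getValue())));
    }

    public static void main(String[] args) {
        Map<String, Integer> before = Map.of("a", 1, "b", 2, "c", 3);
        Map<String, Integer> after = mapValues(before, v -> v * 10);
        // Distinct keys are identical before and after by construction.
        System.out.println(before.keySet().equals(after.keySet()));
    }
}
```

A real mapValues carries the same guarantee, so a changed distinct-key count usually points at a non-deterministic source or at how the two counts were taken, not at the transformation itself.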
Hello guys!
First of all, if you want a more readable version of this question, take
a look at my StackOverflow question
<http://stackoverflow.com/questions/33422560/how-to-run-fpgrowth-algorithm-with-a-javapairrdd-object>
(I've asked the same question there).
I want to test Spark machine learning algorithms and I have some questions.
Configuration confHadoop = new Configuration();
JavaPairRDD<LongWritable, Text> sourceFile = sc.newAPIHadoopFile(
    "hdfs://cMaster:9000/wcinput/data.txt",
    DataInputFormat.class, LongWritable.class, Text.class, confHadoop);
Now I want to transform the JavaPairRDD data from <LongWritable, Text> to
another <LongWritable, Text>, where the Text content is different. After
that, I want to write the result out.
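That <LongWritable, Text> to <LongWritable, Text> step is one mapToPair on the JavaPairRDD. The per-record logic can be sketched without Spark or Hadoop on the classpath; here a plain Map.Entry stands in for Tuple2<LongWritable, Text>, and uppercasing is only a placeholder for whatever rewrite is actually needed:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

public class RewriteValue {
    // Per-record transform: keep the byte-offset key, rewrite the line's text.
    static Map.Entry<Long, String> rewrite(Map.Entry<Long, String> record) {
        return new SimpleEntry<>(record.getKey(), record.getValue().toUpperCase());
    }

    public static void main(String[] args) {
        Map.Entry<Long, String> out = rewrite(new SimpleEntry<>(0L, "hello spark"));
        System.out.println(out.getKey() + " -> " + out.getValue());
    }
}
```

On the real RDD this logic would be wrapped as something like sourceFile.mapToPair(t -> new Tuple2<>(t._1(), new Text(...))) before writing the result back out.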
Thanks, will try this out and get back...
On Tue, Jun 23, 2015 at 2:30 AM, Tathagata Das <t...@databricks.com> wrote:
Try adding the provided scopes:
<dependency> <!-- Spark dependency -->
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <scope>provided</scope>
</dependency>
Hi,
I have the following piece of code, where I am trying to transform a Spark
stream and add min and max to each RDD of it. However, I get an error at run
time saying the max call does not exist (it compiles properly). I am using
spark-1.4.
I have added the question to StackOverflow as well:
Hi Tathagata,
When you say "please mark spark-core and spark-streaming as dependencies",
what do you mean?
I have installed the pre-built spark-1.4 for Hadoop 2.6 from the Spark
downloads. In my Maven pom.xml, I am using version 1.4 as described.
Please let me know how I can fix that.
Thanks
Nipun
I think you may be including a different version of Spark Streaming in your
assembly. Please mark spark-core and spark-streaming as "provided"
dependencies. Any installation of Spark will automatically provide Spark in
the classpath, so you do not have to bundle it.
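In pom.xml terms, that advice looks roughly like the fragment below — a sketch only; the 1.4.0 version and the _2.10 Scala suffix are assumed from the spark-1.4 / spark-core_2.10 mentions elsewhere in the thread:

```xml
<!-- Both Spark artifacts marked provided: the cluster supplies them at run time. -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.4.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_2.10</artifactId>
  <version>1.4.0</version>
  <scope>provided</scope>
</dependency>
```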
On Thu, Jun 18, 2015 at 8:44 AM,
is not allowed.
Any help on how to achieve this in another way?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Filling-Parquet-files-by-values-in-Value-of-a-JavaPairRDD-tp23188.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
I have (http://www.koctas.com.tr/reyon/el-aletleri/7,(0,1,0,0,0,0,0,0,46551)) in
my *JavaPairRDD<String, Tuple2<String, String>>* and I want to get
*( (46551), (0,1,0,0,0,0,0,0) )*
I try to split tuple._2() and create a new JavaPairRDD, but I can't.
How can I get that?
Have a nice day
yasemin
--
hiç ender hiç
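The reshuffle yasemin wants — peel the trailing 46551 off the value and make it the new key — is a single mapToPair. The string-level step can be sketched Spark-free (the helper name is hypothetical):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

public class SplitRecord {
    // "0,1,0,0,0,0,0,0,46551" -> key "46551", value "0,1,0,0,0,0,0,0"
    static Map.Entry<String, String> split(String value) {
        int cut = value.lastIndexOf(',');
        return new SimpleEntry<>(value.substring(cut + 1), value.substring(0, cut));
    }

    public static void main(String[] args) {
        Map.Entry<String, String> out = split("0,1,0,0,0,0,0,0,46551");
        System.out.println(out.getKey() + " / " + out.getValue());
    }
}
```

On the JavaPairRDD this would sit inside pairs.mapToPair(t -> ...), returning a new Tuple2; whether the value arrives as one comma-separated string or as a field of the Tuple2 depends on the actual types in yasemin's job.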
OffsetRange[] offsetRanges = ((HasOffsetRanges) rdd).offsetRanges();
I am using version 1.3.1. Is it a bug in this version?
Thank you for spending time with me.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/STREAMING-KAFKA-Direct-Approach-JavaPairRDD-cannot-be-cast-to-HasOffsetRanges-tp22568.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
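The usual cause of this ClassCastException is casting the Java-facing wrapper itself rather than the Scala RDD underneath it: in the direct approach, the object implementing HasOffsetRanges is the RDD that JavaPairRDD wraps. A Spark-free sketch of that distinction — every class name below is a stand-in, not the real Spark type:

```java
public class CastSketch {
    interface HasOffsetRangesLike {}                             // stand-in for HasOffsetRanges
    static class KafkaRddLike implements HasOffsetRangesLike {}  // the wrapped RDD
    static class JavaPairRddLike {                               // the Java-facing wrapper
        final Object underlying = new KafkaRddLike();
        Object rdd() { return underlying; }                      // like JavaPairRDD.rdd()
    }

    public static void main(String[] args) {
        JavaPairRddLike rdd = new JavaPairRddLike();
        // Casting the wrapper fails; casting what rdd() returns succeeds.
        System.out.println(rdd instanceof HasOffsetRangesLike);
        System.out.println(rdd.rdd() instanceof HasOffsetRangesLike);
    }
}
```

In real code, per the Spark Streaming + Kafka integration guide, the Java pattern casts the underlying RDD (((HasOffsetRanges) rdd.rdd()).offsetRanges()), and must do so in the first operation on the stream, before any transformation changes the RDD type.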