I am using Apache Spark and its map/reduce functionality. I have reached
the stage where my map step produces a data set that conceptually has many
"rows" of data.

What I need now is to do a reduce, which is usually straightforward. My
real need, though, is to reduce over "overlapping" rows: the first reduce
uses rows 1-30, the second uses rows 11-40, the third 21-50, and so on.
How would this work in a Spark environment?
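To make the shape of the computation concrete, here is a rough sketch of
one approach I have been considering: tag each row with every window it
falls into, then run an ordinary reduceByKey per window. The window size
(30), the stride (10), the sample data, and the sum used as the reduce
function are all placeholders for my real job:

import org.apache.spark.{SparkConf, SparkContext}

object OverlappingWindows {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("overlapping-windows").setMaster("local[*]"))

    val windowSize = 30  // rows per window
    val stride     = 10  // offset between consecutive window starts

    val rows = sc.parallelize((1 to 100).map(_.toDouble))  // stand-in data

    val perWindow = rows
      .zipWithIndex()                      // (row, 0-based row index)
      .flatMap { case (row, idx) =>
        // Window w covers indices [w * stride, w * stride + windowSize),
        // so row idx belongs to every window from 'first' to 'last'.
        val first = math.max(0L, Math.floorDiv(idx - windowSize, stride.toLong) + 1)
        val last  = idx / stride
        (first to last).map(w => (w, row))
      }
      .reduceByKey(_ + _)                  // placeholder reduce: per-window sum

    perWindow.sortByKey().collect().foreach(println)
    sc.stop()
  }
}

My concern is that each row gets replicated windowSize / stride times (3x
here) before the shuffle, and the trailing windows come out partial, so I
am not sure this is the right approach at scale. I have also seen
org.apache.spark.mllib.rdd.RDDFunctions.sliding mentioned, which might
cover this case, though I have not confirmed whether it supports a stride.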

I appreciate any insight or directions anyone can give,

Jeff Richley