.1001560.n3.nabble.com/Example-of-Geoprocessing-with-Spark-tp14274p14710.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
        )));
      }
    }
  }

  def get(env: Envelope) =
    spatialIndex.query(env).asScala
}
}
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Example-of-Geoprocessing-with-Spark-tp14274p14752.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hi Abel,
Pretty interesting. May I ask how big your point CSV dataset is?
It seems you are relying on a linear search through the FeatureCollection
of polygons to find the one that intersects your point. This is going to
be extremely slow. I highly recommend using a SpatialIndex, such as the
many that JTS provides.
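JTS ships ready-made `SpatialIndex` implementations (e.g. `STRtree`, `Quadtree`) that bulk-load polygon envelopes and prune most candidates per lookup. As a self-contained illustration of the idea only (this is not the JTS API; every name below is invented for the example), here is a minimal grid index in plain Scala:

```scala
// Minimal grid-based spatial index: buckets bounding boxes into fixed-size
// cells so a point query scans one cell's candidates instead of the whole
// collection. Illustrative only; JTS's STRtree/Quadtree do this far better.
case class BBox(minX: Double, minY: Double, maxX: Double, maxY: Double) {
  def contains(x: Double, y: Double): Boolean =
    x >= minX && x <= maxX && y >= minY && y <= maxY
}

class GridIndex[T](cellSize: Double) {
  private val cells =
    scala.collection.mutable.Map.empty[(Int, Int), List[(BBox, T)]]

  private def cellOf(x: Double, y: Double): (Int, Int) =
    ((x / cellSize).floor.toInt, (y / cellSize).floor.toInt)

  def insert(box: BBox, value: T): Unit = {
    // Register the entry in every cell its bounding box overlaps.
    val (cx0, cy0) = cellOf(box.minX, box.minY)
    val (cx1, cy1) = cellOf(box.maxX, box.maxY)
    for (cx <- cx0 to cx1; cy <- cy0 to cy1) {
      val key = (cx, cy)
      cells(key) = (box, value) :: cells.getOrElse(key, Nil)
    }
  }

  // Return values whose bounding box contains the point; only the point's
  // own cell is scanned, never the full collection.
  def query(x: Double, y: Double): List[T] =
    cells.getOrElse(cellOf(x, y), Nil)
      .collect { case (b, v) if b.contains(x, y) => v }
}

object GridIndexDemo {
  def main(args: Array[String]): Unit = {
    val idx = new GridIndex[String](cellSize = 1.0)
    idx.insert(BBox(0.0, 0.0, 0.5, 0.5), "A")
    idx.insert(BBox(2.0, 2.0, 3.0, 3.0), "B")
    println(idx.query(0.25, 0.25)) // prints List(A)
    println(idx.query(2.5, 2.5))   // prints List(B)
  }
}
```

A real job would insert each municipality polygon's envelope, then run the exact `intersects` test only on the handful of candidates the index returns for each point.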
Now I have a better version, but the problem is that saveAsTextFile
does not finish the job: in the HDFS output directory there is only a
partial temporary file. Can someone tell me what is wrong?
Thanks!!
object SimpleApp {
  def main(args: Array[String]) {
    val conf = new
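Since the snippet above is cut off, here is a hedged skeleton of how such a job is usually structured (the paths and the transform are placeholders, not the actual code). One common reason an output directory holds only a `_temporary` folder is that the application dies, or exits, before the `saveAsTextFile` job finishes committing its part files; structuring the job so `sc.stop()` always runs also makes failures easier to spot in the logs:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SimpleApp")
    val sc = new SparkContext(conf)
    try {
      // saveAsTextFile is an action: it blocks until every partition is
      // written, then commits the output by moving files out of _temporary.
      // If the job fails or is killed mid-write, only _temporary remains.
      val lines  = sc.textFile("hdfs:///path/to/points.csv") // placeholder
      val result = lines.map(_.trim)                         // placeholder
      result.saveAsTextFile("hdfs:///path/to/output")        // placeholder
    } finally {
      sc.stop() // always release the context, even when the job throws
    }
  }
}
```

This only runs against a Spark installation, so treat it as a sketch: the point is that the action must complete without an exception before the output directory is committed.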
Here is an example of working code that takes a CSV of lat/lon points and
intersects them with polygons of the municipalities of Mexico, generating a
new version of the file with additional attributes.
Do you think it could be improved?
Thanks.
The Code:
import org.apache.spark.SparkContext
import
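The code above is cut off at the imports, but the core of the lat/lon-to-municipality step is a point-in-polygon test (in the original done through JTS geometry calls). As a self-contained sketch of what that test computes, here is the standard ray-casting algorithm in plain Scala; the square polygon is invented for the example:

```scala
// Ray-casting point-in-polygon test: cast a horizontal ray from the point
// and count how many polygon edges it crosses; an odd count means inside.
// This is the predicate a JTS intersects/contains call decides for you.
object PointInPolygon {
  type Pt = (Double, Double)

  def contains(polygon: Seq[Pt], p: Pt): Boolean = {
    val (px, py) = p
    var inside = false
    var j = polygon.length - 1
    for (i <- polygon.indices) {
      val (xi, yi) = polygon(i)
      val (xj, yj) = polygon(j)
      // Does edge (j -> i) straddle the ray's y, and does it cross to the
      // right of the point?
      val crosses = (yi > py) != (yj > py) &&
        px < (xj - xi) * (py - yi) / (yj - yi) + xi
      if (crosses) inside = !inside
      j = i
    }
    inside
  }

  def main(args: Array[String]): Unit = {
    val square: Seq[Pt] = Seq((0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0))
    println(contains(square, (2.0, 2.0))) // prints true  (inside)
    println(contains(square, (5.0, 2.0))) // prints false (outside)
  }
}
```

In the real job each point would first be narrowed to a few candidate municipalities via a spatial index, and only those candidates would get this exact test.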