Hi,
I was working on a simple task (running locally): reading a file (35 MB)
with about 200 features and building a random forest with 5 trees of depth 5.
The problem occurs while saving the output with:
predictions.select("VisitNumber", "probability")
  .write.format("json") // tried different formats
Any suggestions, anyone?
Using Spark version 1.5.1.
Regards
Ankush Khanna
On Nov 10, 2015, at 11:37 AM, Ankush Khanna wrote:
Hi
I get exactly the same problem here. Have you found a solution?
Thanks
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/NoSuchElementException-key-not-found-when-changing-the-window-lenght-and-interval-in-Spark-Streaming-tp9010p9283.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/NoSuchElementException-key-not-found-tp6743p7157.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
I think I know what is going on! This is probably a race condition in the
DAGScheduler. I have added a JIRA for this. The fix is not trivial, though.
https://issues.apache.org/jira/browse/SPARK-2002
A not-so-good workaround for now would be to avoid coalesced RDDs, which
sidesteps the race condition.
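To make the failure mode concrete, here is a toy, purely sequential model of the kind of race described above. All names here are invented for illustration and this is not the actual DAGScheduler code: one piece of bookkeeping removes an entry from a shared map while another part still expects it, and the plain `Map` lookup then throws the `key not found` exception.

```scala
import scala.collection.mutable

object RaceModel {
  // Invented stand-in for scheduler bookkeeping keyed by an integer id.
  private val pending = mutable.Map(32855 -> "stage-info")

  def cleanup(id: Int): Unit = pending -= id // in the real bug, this wins the race
  def lookup(id: Int): String = pending(id)  // Map.apply throws on a missing key

  def main(args: Array[String]): Unit = {
    cleanup(32855) // the "other thread" removed the entry first
    try lookup(32855)
    catch {
      case e: java.util.NoSuchElementException =>
        println(e.getMessage) // key not found: 32855
    }
  }
}
```

In the real scheduler the two steps run on different threads, so the failure is intermittent; the model above simply forces the bad interleaving to happen every time.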
Hi Tathagata,
Thanks for your help! By not using coalesced RDDs, do you mean not
repartitioning my DStream?
Thanks,
Mike
On Tue, Jun 3, 2014 at 12:03 PM, Tathagata Das tathagata.das1...@gmail.com
wrote:
I am not sure what DStream operations you are using, but some operation is
internally creating CoalescedRDDs. That is causing the race condition. I
might be able to help if you can tell me which DStream operations you are using.
TD
On Tue, Jun 3, 2014 at 4:54 PM, Michael Chang m...@tellapart.com wrote:
Hi all,
Seeing a random exception kill my Spark Streaming job. Here's a stack
trace:
java.util.NoSuchElementException: key not found: 32855
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:58)
at
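For what it's worth, the `key not found` text in the trace comes straight from Scala's collection library: `MapLike.default` throws `java.util.NoSuchElementException` whenever `Map.apply` is called with a key that is absent. A minimal standalone snippet (plain Scala, nothing Spark-specific; the example data is invented) showing the behavior, along with the usual defensive lookups:

```scala
object MapDefaultDemo {
  def main(args: Array[String]): Unit = {
    val partitions = Map(1 -> "a", 2 -> "b") // invented example data

    // apply on a missing key delegates to MapLike.default, which throws
    try partitions(32855)
    catch {
      case e: java.util.NoSuchElementException =>
        println(e.getMessage) // key not found: 32855
    }

    // Defensive alternatives that never throw:
    println(partitions.get(32855))                  // None
    println(partitions.getOrElse(32855, "missing")) // missing
  }
}
```

This does not explain *why* the scheduler loses the key, but it narrows the exception down to an unchecked `Map` lookup somewhere in the code path shown in the trace.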
Do you have info-level logs of the application? Can you grep for the
value 32855 to find any references to it? Also, what version of Spark are you
using (so that I can match the stack trace; it does not seem to match Spark
1.0)?
TD
On Mon, Jun 2, 2014 at 3:27 PM, Michael Chang wrote: