Now I am getting a different error, as below:
com.datastax.spark.connector.types.TypeConversionException: Cannot convert
object [] of type class
org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema to
com.datastax.driver.core.LocalDate.
at com.datastax.spark.connector.types.TypeConverter$$
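For what it's worth, the error itself says a whole Row (GenericRowWithSchema) is being
handed to the connector where a Cassandra date value is expected. A minimal sketch of
fixing that on the Spark side, assuming df is the DataFrame being written to Cassandra;
the keyspace, table, and column names below are made up:

import org.apache.spark.sql.functions.col

// Pull the date out of the nested struct and make it a DateType column
// ("event.date" and "event_date" are hypothetical names).
val flattened = df.withColumn("event_date", col("event.date").cast("date"))

flattened.write
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "ks", "table" -> "events")) // placeholders
  .mode("append")
  .save()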
What are you trying to do? It looks like you are mixing multiple
SparkContexts together.
On Fri, Nov 4, 2016 at 5:15 PM, Lev Tsentsiper
wrote:
> My code throws an exception when I am trying to create a new Dataset from
> within a StreamWriter sink
>
> Simplified version of the code
>
> val df = sp
My code throws an exception when I am trying to create a new Dataset from within
a StreamWriter sink.
Simplified version of the code
val df = sparkSession.readStream
  .format("json")
  .option("nullValue", " ")
  .option("headerFlag", "true")
  .option("spark.sql.shuffle.partitions", 1)
rify.platform.pipeline.TableWriter$$anonfun$close$5.apply(TableWriter.scala:109)
This code works when run locally, but fails in cluster deployment.
Can anyone suggest a better way to handle creation and processing of a Dataset
within ForeachWriter?
Thank you
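One pattern that avoids this: the ForeachWriter callbacks (open/process/close) run on the
executors, so they cannot create new Datasets; that needs the driver-side SparkSession,
which is likely why the job only fails in cluster mode. A rough sketch that keeps the
per-row work plain-JVM, assuming df is the streaming DataFrame produced by the readStream
above (the external connection handling is left hypothetical):

import org.apache.spark.sql.{ForeachWriter, Row}

val writer = new ForeachWriter[Row] {
  override def open(partitionId: Long, version: Long): Boolean = {
    // open a connection to the external system here (hypothetical)
    true
  }
  override def process(row: Row): Unit = {
    // work on the row directly; do NOT call sparkSession.createDataset here
  }
  override def close(errorOrNull: Throwable): Unit = {
    // release the connection
  }
}

val query = df.writeStream.foreach(writer).start()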
Hi All,
I'm running into an error that's not making a lot of sense to me, and I
couldn't find sufficient info on the web to answer it myself.
BTW, you can also reply on Stack Overflow:
http://stackoverflow.com/questions/36254005/nosuchelementexception-in-chisqselector-fit-method-version-1-6-
Hi,
Well, I finally was able to figure it out. I was using VectorIndexer with max
categories 2 (the minimum value is also 2) for my features; with an increased
dimension of the features vector, I ran into the no-such-element problem in
VectorIndexer.
It sounds a bit straightforward now, but I wa
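For anyone hitting the same thing, a minimal sketch of the knob in question (the column
names and the trainingData DataFrame are made up; the point is just setMaxCategories):

import org.apache.spark.ml.feature.VectorIndexer

val indexer = new VectorIndexer()
  .setInputCol("features")
  .setOutputCol("indexedFeatures")
  .setMaxCategories(2) // features with more than 2 distinct values are treated as continuous

val indexerModel = indexer.fit(trainingData)       // trainingData has a "features" vector column
val indexed = indexerModel.transform(trainingData)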
Any suggestions, anyone?
Using version 1.5.1.
Regards
Ankush Khanna
On Nov 10, 2015, at 11:37 AM, Ankush Khanna wrote:
Hi,
I was working with a simple task (running locally). Just reading a file (35 MB)
with about 200 features and making a random forest with 5 trees of depth 5.
While saving the file with:
predictions.select("VisitNumber", "probability")
  .write.format("json") // tried different formats
  .
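A rough end-to-end sketch of the task described above, assuming trainingData and testData
are DataFrames with "label" and "indexedFeatures" columns; the column names and the output
path are placeholders:

import org.apache.spark.ml.classification.RandomForestClassifier

val rf = new RandomForestClassifier()
  .setLabelCol("label")
  .setFeaturesCol("indexedFeatures")
  .setNumTrees(5)
  .setMaxDepth(5)

val model = rf.fit(trainingData)
val predictions = model.transform(testData)

predictions.select("VisitNumber", "probability")
  .write.format("json")     // several formats were tried, per the message above
  .save("/tmp/predictions") // placeholder path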
Hi,
I got a NoSuchElementException when I tried to iterate through a Map which
contains some elements (not null, not empty). When I debug my code (below),
it seems the first part of the code, which fills the Map, is executed after
the second part that iterates over the Map. The 1st part and
2nd part
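A minimal sketch of one way that ordering happens in Spark, assuming the Map is filled
inside a transformation (sc is an existing SparkContext and the data is made up).
Transformations are lazy, so nothing fills the Map until an action runs, and in cluster
mode the closure updates executor-side copies of the Map anyway:

import scala.collection.mutable

val cache = mutable.Map.empty[String, Int]

val words = sc.parallelize(Seq("a", "bb", "ccc"))
val tagged = words.map { w =>
  cache(w) = w.length // "1st part": fills the Map, but only when the RDD is evaluated
  w
}

println(cache("a"))   // "2nd part": runs first, Map is still empty -> NoSuchElementException
tagged.count()        // only now does the map() above actually execute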
Hi
I get exactly the same problem here. Have you found the problem?
Thanks
Hi all,
I'm writing a Spark Streaming program that uses reduceByKeyAndWindow(), and
when I change the window length or sliding interval I get the following
exceptions, running in local mode:
14/07/06 13:03:46 ERROR actor.OneForOneStrategy: key not found:
1404677026000 ms
java.util.NoSuchElementExc
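For reference, a minimal sketch of the kind of call in question (the source, batch
interval, and durations are made up). One thing worth checking: Spark Streaming requires
both the window length and the sliding interval to be multiples of the batch interval:

import org.apache.spark.streaming.{Seconds, StreamingContext}

val ssc = new StreamingContext(sc, Seconds(2))      // batch interval: 2 s
val lines = ssc.socketTextStream("localhost", 9999) // placeholder source
val counts = lines.map(w => (w, 1)).reduceByKeyAndWindow(
  (a: Int, b: Int) => a + b, // reduce function
  Seconds(30),               // window length (a multiple of the 2 s batch interval)
  Seconds(10))               // sliding interval (also a multiple of the batch interval)
counts.print()

ssc.start()
ssc.awaitTermination()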
I am not sure what DStream operations you are using, but some operation is
internally creating CoalescedRDDs. That is causing the race condition. I
might be able to help if you can tell me what DStream operations you are using.
TD
On Tue, Jun 3, 2014 at 4:54 PM, Michael Chang wrote:
> Hi Tathagat
Hi Tathagata,
Thanks for your help! By not using coalesced RDD, do you mean not
repartitioning my DStream?
Thanks,
Mike
On Tue, Jun 3, 2014 at 12:03 PM, Tathagata Das
wrote:
> I think I know what is going on! This is probably a race condition in the
> DAGScheduler. I have added a JIRA for thi
I think I know what is going on! This is probably a race condition in the
DAGScheduler. I have added a JIRA for this. The fix is not trivial, though.
https://issues.apache.org/jira/browse/SPARK-2002
A "not-so-good" workaround for now would be to not use coalesced RDDs, which
avoids the race condition.
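In DStream terms, the thing the workaround says to avoid looks roughly like this (a
sketch; the stream and the partition count are placeholders):

// Coalescing the RDDs behind a DStream creates a CoalescedRDD per batch,
// which is what the workaround above suggests steering clear of for now.
val narrowed = someDStream.transform(_.coalesce(4))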
I only had the warning-level logs, unfortunately. There were no other
references to 32855 (except a repeated stack trace, I believe). I'm using
Spark 0.9.1
On Mon, Jun 2, 2014 at 5:50 PM, Tathagata Das
wrote:
> Do you have the info level logs of the application? Can you grep the value
> "3285
Do you have the info-level logs of the application? Can you grep for the
value "32855" to find any references to it? Also, what version of Spark are
you using (so that I can match the stack trace; it does not seem to match
Spark 1.0)?
TD
On Mon, Jun 2, 2014 at 3:27 PM, Michael Chang wrote:
>
Hi all,
Seeing a random exception kill my Spark Streaming job. Here's a stack
trace:
java.util.NoSuchElementException: key not found: 32855
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:58)
at scala.collectio