Hi All,
My Spark configuration is the following:

spark = SparkSession.builder.master(mesos_ip) \
    .config('spark.executor.cores', '3') \
    .config('spark.executor.memory', '8g') \
    .config('spark.es.scroll.size', '1') \
    .config('spark.network.timeout', '600s') \
    .getOrCreate()
This is running in YARN cluster mode. It was restarted automatically and
continued fine.
I was trying to see what went wrong. AFAIK there were no task failures.
Nothing in the executor logs. The log I gave is from the driver.
After some digging, I did see in the Kafka logs that there was a rebalance
around this time.

Does restarting after a few minutes solve the problem? It could be a
transient issue that lasts long enough for Spark's task-level retries to all
fail.
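For reference, a minimal sketch of the settings involved in that failure mode; the values below are illustrative assumptions, not tuned recommendations:

```python
# Retry-related settings relevant to transient outages (values are
# illustrative assumptions, not recommendations).
retry_conf = {
    # How many times a single task may fail before the whole stage is
    # aborted (Spark's default is 4).
    'spark.task.maxFailures': '8',
    # Matches the timeout already set in the builder config above.
    'spark.network.timeout': '600s',
}

# These would be applied via SparkSession.builder.config(k, v) per pair.
for key, value in retry_conf.items():
    print(key, '=', value)
```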
On Tue, Feb 7, 2017 at 4:34 PM, Srikanth wrote:
Hello,
I had a spark streaming app that reads from kafka running for a few hours
after which it failed with error
*17/02/07 20:04:10 ERROR JobScheduler: Error generating jobs for time
148649785 ms
java.lang.IllegalStateException: No current assignment for partition mt_event-5
Hi,
I am running a simple job on Spark 1.6 in which I am trying to leftOuterJoin a
big RDD with a smaller one. I am not broadcasting the smaller RDD yet, but I
am still running into FetchFailed errors, and eventually the job gets killed.
I have already partitioned the data into 5000 partitions.
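Since the smaller side fits in memory, a broadcast (map-side) join avoids the shuffle that produces FetchFailed. A minimal plain-Python sketch of the leftOuterJoin semantics this preserves (no Spark needed; the data is made up):

```python
# The small side becomes an in-memory dict, which is what sc.broadcast
# would ship to every executor; each big-side row is then joined locally,
# with no shuffle of the big RDD.
small = {'a': 1, 'b': 2}                 # stands in for the small RDD
big = [('a', 10), ('b', 20), ('c', 30)]  # stands in for the big RDD

# leftOuterJoin semantics: every big-side row survives, and keys missing
# from the small side get None on the right.
joined = [(k, (v, small.get(k))) for k, v in big]
print(joined)  # [('a', (10, 1)), ('b', (20, 2)), ('c', (30, None))]
```

In Spark the same idea would be roughly `small_map = sc.broadcast(dict(small_rdd.collect()))` followed by a plain `map` over the big RDD instead of the join.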
I was trying to enable SSL in Spark 1.6.2 and got this exception.
Not sure if I'm missing something or my keystore / truststore files went bad,
although keytool showed that both files are fine...
*16/09/01 04:01:41 WARN NativeCodeLoader: Unable to load native-hadoop
library for your
Hi all,
I am doing some simple column transformations (e.g. trimming strings) on a
DataFrame using UDFs. This DataFrame is in Avro format and is being loaded off
HDFS. The job has about 16,000 parts/tasks.
About halfway through, the job fails with this message:
org.apache.spark.SparkException: Job
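For context, a minimal sketch of the kind of trimming transformation described; the column name is a placeholder, and the PySpark registration is shown as comments since it needs a live session:

```python
def trim_value(value):
    # Trim surrounding whitespace; pass nulls through unchanged so the
    # UDF does not blow up on missing fields in the Avro records.
    return value.strip() if value is not None else None

# Wrapped as a UDF this would look roughly like (column name is made up):
#   from pyspark.sql.functions import udf
#   from pyspark.sql.types import StringType
#   trim_udf = udf(trim_value, StringType())
#   df = df.withColumn('name', trim_udf(df['name']))

print(trim_value('  hello  '))  # hello
print(trim_value(None))         # None
```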
parkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Thanks
Sri
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Exception-in-Spark-sql-insertIntoJDBC-command-tp24655p25640.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Or is there any other way to do the same in version 1.4.1?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Exception-in-Spark-sql-insertIntoJDBC-command-tp24655.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
    wordCountPair.foreachRDD(rdd =>
      rdd.saveToCassandra("nexti", "direct_api_test", AllColumns))

    ssc.start()
    ssc.awaitTermination()
  }
  catch {
    case ex: Exception => {
      println(">>>>>>>> Exception UNKNOWN Only.")
    }
  }
}
I am sure I am missing out on something; please provide your inputs.
Hi all,
We got an exception like
“org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call
to dataType on unresolved object” when using some WHERE-condition queries.
I am using Spark 1.4.0, but the same query works perfectly in Hive. Is this
exception resolved in the latest Spark? Please refer to the following query.
Regards,
Ravi
Hi all,
I have a question about Spark accessing HBase in yarn-cluster mode on a
Kerberos-enabled YARN cluster. Is distributing the keytab to each NodeManager
the only way to enable Spark to access HBase?
It seems that Spark doesn't obtain a delegation token the way an MR job does;
am I right?
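As far as I know, on YARN spark-submit can ship the keytab itself so the driver logs in and refreshes credentials, rather than pre-distributing it to every NodeManager; a sketch (principal, paths, and app name are placeholders, and whether HBase delegation tokens are fetched automatically depends on the Spark version):

```shell
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --principal user@EXAMPLE.COM \
  --keytab /path/to/user.keytab \
  my_hbase_app.jar
```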
Any Ideas on this?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Weird-exception-in-Spark-job-tp22195p22204.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
... 4 more
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Weird-exception-in-Spark-job-tp22195.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
A workaround was found and put in the ticket
https://issues.apache.org/jira/browse/SPARK-4854. Hope this is useful.
Thank you.
Shenghua
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Custom-UDTF-with-Lateral-View-throws-ClassNotFound-exception-in-Spark-SQL-CLI-tp20689.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
Ah, makes sense - thanks Michael!
On Mon, Nov 17, 2014 at 6:08 PM, Michael Armbrust mich...@databricks.com
wrote:
You are perhaps hitting an issue that was fixed by #3248
https://github.com/apache/spark/pull/3248?
On Mon, Nov 17, 2014 at 9:58 AM, Sadhan Sood sadhan.s...@gmail.com
wrote:
While testing Spark SQL, we were running this group-by-with-expression query
and got an exception. The same query worked fine in Hive.
SELECT from_unixtime(floor(xyz.whenrequestreceived/1000.0 - 25200),
       '/MM/dd') as pst_date,
       count(*) as num_xyzs
FROM all_matched_abc
GROUP BY
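For what it's worth, the 25200 in that expression is a 7-hour shift (UTC to Pacific daylight time) applied before formatting. A plain-Python sketch of the same arithmetic, assuming whenrequestreceived is in milliseconds and that the truncated format string was meant to include the year:

```python
from datetime import datetime, timezone

def pst_date(when_ms):
    # whenrequestreceived/1000.0 - 25200: milliseconds -> seconds,
    # then shift back 7 hours (25200 s) before taking the date.
    secs = when_ms / 1000.0 - 25200
    return datetime.fromtimestamp(secs, tz=timezone.utc).strftime('%Y/%m/%d')

print(pst_date(1416000000000))  # 2014/11/14
```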
)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Please help resolve this.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/KryoSerializer-exception-in-Spark-Streaming-JAVA-tp15479.html
Hi All,
I wanted to get Spark on YARN up and running.
I did *SPARK_HADOOP_VERSION=2.3.0 SPARK_YARN=true ./sbt/sbt assembly*
Then I ran
*SPARK_JAR=./assembly/target/scala-2.9.3/spark-assembly-0.8.1-incubating-hadoop2.3.0.jar