While reading the Spark source code, I came across an abstract class,
AtomicType, defined like this:
protected[sql] abstract class AtomicType extends DataType {
  private[sql] type InternalType
  private[sql] val tag: TypeTag[InternalType]
  private[sql] val ordering: Ordering[InternalType]
  @transient
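The pattern here is an abstract type member plus a TypeTag, so that each concrete data type can pin down its JVM representation while generic code can still recover it despite erasure. A minimal, simplified sketch of the same pattern (stand-in names, not the actual Spark source):

```scala
import scala.reflect.runtime.universe._

// Simplified stand-ins for Spark's DataType hierarchy (hypothetical names)
abstract class DataType

abstract class AtomicType extends DataType {
  // Each concrete subclass fixes InternalType to its JVM representation
  type InternalType
  val tag: TypeTag[InternalType]
  val ordering: Ordering[InternalType]
}

// A concrete type fills in the abstract members, e.g. an integer type
object IntType extends AtomicType {
  type InternalType = Int
  val tag = typeTag[Int]
  val ordering = Ordering.Int
}
```

In Spark itself, concrete types such as IntegerType and StringType extend AtomicType this way; the TypeTag is what lets generic code get at the erased InternalType at runtime.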
Hi guys,
I wrote a Spark Streaming program which consumed 1000 messages from one
Kafka topic, did some transformations, and wrote the results back to
another topic, but I found only 988 messages in the second topic. I checked
the logs and confirmed that all messages were received by the receivers. But I
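For what it's worth, a common cause of this in receiver-based Kafka setups on Spark 1.x is messages buffered in a receiver being lost on failure before they are replicated; enabling the write-ahead log, or switching to the direct (receiver-less) API, is the usual fix. A hedged sketch, assuming the spark-streaming-kafka artifact is on the classpath and with hypothetical broker, topic, and checkpoint names:

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object DirectKafkaSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("kafka-roundtrip")
      // If you stay with receivers, the WAL guards against loss on failure
      .set("spark.streaming.receiver.writeAheadLog.enable", "true")
    val ssc = new StreamingContext(conf, Seconds(5))
    ssc.checkpoint("/tmp/checkpoint") // hypothetical path

    // The direct API tracks offsets itself, so no receiver can drop messages
    val kafkaParams = Map("metadata.broker.list" -> "broker:9092") // hypothetical
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("input-topic")) // hypothetical topic

    stream.map { case (_, value) => value.toUpperCase } // placeholder transformation
      .foreachRDD(rdd => rdd.foreach(msg => println(msg))) // replace with a Kafka producer

    ssc.start()
    ssc.awaitTermination()
  }
}
```

With the direct stream, output semantics then depend on how the write back to Kafka is done (idempotent or transactional writes are needed for exactly-once).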
Hi guys,
I'm testing Spark SQL 1.5.1 with hadoop-2.5.0-cdh5.3.2.
One SQL query that runs successfully under Hive failed when I ran it with
Spark SQL.
I got the following error:
I increased the executor memory from 6g to 10g, but it still failed with the
same error. Because of my company's security policy, I cannot post the SQL.
But I'm sure the error occurred in the compute method of HadoopRDD, on one
of the executors.
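For reference, on YARN the executor heap is not the only memory knob: the container also needs headroom beyond spark.executor.memory, and off-heap overhead is a frequent culprit when raising the heap alone doesn't help. A sketch of the relevant 1.x settings (the 10g value is from above; the overhead value is hypothetical):

```scala
import org.apache.spark.SparkConf

// Raising the heap alone may not help if the YARN container itself is
// being killed; memoryOverhead covers off-heap usage (value in MB)
val conf = new SparkConf()
  .set("spark.executor.memory", "10g")
  .set("spark.yarn.executor.memoryOverhead", "2048") // hypothetical value
```

Without the actual stack trace it's hard to say more; the NodeManager logs would show whether YARN killed the container for exceeding its memory limit.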
Hi guys,
A job hung for about 16 hours when I ran the random forest algorithm, and I
don't know why that happened.
I use Spark 1.4.0 on YARN, and here is the code:
http://apache-spark-user-list.1001560.n3.nabble.com/file/n24047/1.png
The following picture is from the Spark UI:
Hi guys,
When I ran the random forest algorithm, a job hung for 15.8h, and I cannot
figure out why that happened.
Here is the code:
http://apache-spark-user-list.1001560.n3.nabble.com/file/n24046/%E5%B1%8F%E5%B9%95%E5%BF%AB%E7%85%A7_2015-07-29_%E4%B8%8A%E5%8D%8810.png
And I use Spark 1.4.0 on YARN.
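Since the code in the linked images isn't visible here, a minimal spark.mllib random-forest call of the kind presumably involved, for Spark 1.4 (hypothetical input path and parameters — deep trees and many bins are a frequent cause of very long training jobs):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.tree.RandomForest
import org.apache.spark.mllib.util.MLUtils

object RandomForestSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("rf"))
    // Hypothetical LibSVM-format input
    val data = MLUtils.loadLibSVMFile(sc, "hdfs:///data/sample.libsvm")
    val model = RandomForest.trainClassifier(
      data,
      numClasses = 2,
      categoricalFeaturesInfo = Map[Int, Int](),
      numTrees = 100,
      featureSubsetStrategy = "auto",
      impurity = "gini",
      maxDepth = 10,  // training cost grows sharply with depth
      maxBins = 32)
    println(model.toDebugString)
    sc.stop()
  }
}
```

If your parameters are much larger than these (hundreds of trees, depth well beyond 10), the Spark UI stage view should show whether work is still progressing slowly or a task is genuinely stuck.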
Hi guys,
I want to add my own custom rules (whatever the rules may be) while the SQL
logical plan is being analyzed.
Is there a way to do that from Spark application code?
Thanks
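As far as I know, Spark 1.x exposes no public hook for analyzer rules from application code (sqlContext.experimental.extraStrategies targets the planner, not analysis). From Spark 2.2 on, SparkSessionExtensions supports exactly this; a sketch, with the rule body left as a placeholder and a hypothetical rule name:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule

// A do-nothing rule; replace apply() with your own plan rewrite
case class MyAnalysisRule(spark: SparkSession) extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = plan
}

object CustomRuleApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("custom-rules")
      // injectResolutionRule runs the rule during analysis (Spark 2.2+)
      .withExtensions(_.injectResolutionRule(MyAnalysisRule))
      .getOrCreate()
    spark.sql("SELECT 1").show()
    spark.stop()
  }
}
```

On 1.x the only options are internal (and unsupported), e.g. subclassing the analyzer, so upgrading is the clean path if custom analysis rules are a hard requirement.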