class MyRegistrator implements KryoRegistrator {
    public void registerClasses(Kryo kryo) {
        kryo.register(ImpressionFactsValue.class);
    }
}
Change this class to public and give it a try.
Is the class com.dataken.spark.examples.MyRegistrator public? If not, change
it to public and give it a try.
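For reference, here is a minimal sketch of what a working setup usually looks like. The class and package names are taken from the thread; the import path of ImpressionFactsValue and the wiring shown in the comments are assumptions, since the original conf-building code is not in the thread. Kryo instantiates the registrator reflectively from another package, which is why the class (and its implicit no-arg constructor) must be public.

```java
package com.dataken.spark.examples;

import com.esotericsoftware.kryo.Kryo;
import org.apache.spark.serializer.KryoRegistrator;

// Must be public: Spark loads this class by name via reflection.
public class MyRegistrator implements KryoRegistrator {
    @Override
    public void registerClasses(Kryo kryo) {
        // Assumes ImpressionFactsValue is on the application classpath.
        kryo.register(ImpressionFactsValue.class);
    }
}

// Typical wiring where the SparkConf is built (illustrative):
// SparkConf conf = new SparkConf()
//     .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
//     .set("spark.kryo.registrator", "com.dataken.spark.examples.MyRegistrator");
```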
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/KryoRegistrator-exception-and-Kryo-class-not-found-while-compiling-tp10396p20646.html
Hi Ted,
Here is the information about the regions:

Region Server                  Region Count
http://regionserver1:60030/    44
http://regionserver2:60030/    39
http://regionserver3:60030/    55
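Those counts matter for Spark parallelism: HBase's TableInputFormat produces one input split per region, so a full scan of this table arrives in Spark as one partition per region. A quick sanity check on the totals from the table above:

```java
public class RegionTotals {
    public static void main(String[] args) {
        // Region counts reported by the three region servers above
        int[] perServer = {44, 39, 55};
        int total = 0;
        for (int n : perServer) {
            total += n;
        }
        // TableInputFormat creates one split per region, so a full table
        // scan shows up in Spark as this many partitions.
        System.out.println("total regions = " + total); // total regions = 138
    }
}
```

With 138 regions across 3 servers, the scan parallelism is bounded by the region count, not by the number of Spark cores.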
Hi,
Here is the configuration of the cluster:
Workers: 2
For each worker,
Cores: 24 Total, 0 Used
Memory: 69.6 GB Total, 0.0 B Used
For spark.executor.memory, I didn't set it, so it should be the default
value of 512 MB.
How much space does one row consisting of only the 3 columns consume?
the s
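The per-row question above can be bounded with a back-of-the-envelope estimate. All byte sizes below are hypothetical placeholders, not measurements; the point is the shape of the arithmetic, and how quickly 6 million rows outgrow a 512 MB default executor heap.

```java
public class RowFootprintEstimate {
    public static void main(String[] args) {
        // Hypothetical sizes; the real ones depend on the table's schema.
        long keyBytes = 16;         // row key
        long perCellOverhead = 32;  // rough per-cell object overhead
        long avgValueBytes = 20;    // average value size per column
        int columns = 3;

        long perRow = keyBytes + columns * (perCellOverhead + avgValueBytes);
        long rows = 6_000_000L;     // row count from the earlier message
        long totalMB = perRow * rows / (1024 * 1024);

        System.out.println("bytes per row = " + perRow);  // 172
        System.out.println("total = " + totalMB + " MB"); // 984 MB
        // Under these assumptions the whole table would not fit in the
        // 512 MB default heap; caching it would require raising
        // spark.executor.memory.
    }
}
```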
I am trying to load a large HBase table into a Spark RDD to run a SparkSQL
query on the entity. For an entity with about 6 million rows, it takes
about 35 seconds to load it into the RDD. Is that expected? Is there any way
to shorten the loading process? I have been getting some tips from
http://hbase.ap
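For context, a common pattern for this kind of load is newAPIHadoopRDD over TableInputFormat. The sketch below is illustrative only: the table name is a parameter, the caching value is a placeholder, and nothing here is the poster's actual code.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class HBaseLoadSketch {
    public static JavaPairRDD<ImmutableBytesWritable, Result> load(
            JavaSparkContext sc, String tableName) {
        Configuration conf = HBaseConfiguration.create();
        conf.set(TableInputFormat.INPUT_TABLE, tableName);
        // Scan caching cuts RPC round-trips during the load; 500 is an
        // illustrative value, not a recommendation.
        conf.set(TableInputFormat.SCAN_CACHING, "500");
        return sc.newAPIHadoopRDD(conf, TableInputFormat.class,
                ImmutableBytesWritable.class, Result.class);
    }
}
```

If the same table backs several SparkSQL queries, calling .cache() on the resulting RDD avoids repeating the HBase scan for every query, which matters more than shaving seconds off a single load.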
Hi all,
I am new to Spark and currently I am trying to run a SparkSQL query on an
HBase entity. For an entity with about 4000 rows, the query takes about 12
seconds. Is that expected? Is there any way to shorten the query process?
Here is the code snippet:
SparkConf sparkConf = new SparkConf().setMaste