Hi Spark Users,
We just upgraded our Spark version from 1.0 to 1.1, and we are trying to
re-run all the written and tested projects we implemented on Spark 1.0.
However, when we try to execute the Spark Streaming project that streams
data from Kafka topics, it yields the following error message.
Yeah, I forgot to build the new jar file for spark 1.1...
And now the errors are gone.
Thank you very much!
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Kafka-Spark-Streaming-on-Spark-1-1-tp14597p14604.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hi TD,
I tried some different setups in Maven these days, and now I at least get
some output when running mvn test. However, it seems that scalatest cannot
find the test cases specified in the test suite.
Here is the output I get:
Hi Khanderao and TD,
Thank you very much for your reply and the new example. I have resolved the
problem. The ZooKeeper port I used wasn't right; the default port is not the
one I was supposed to use. So I set
hbase.zookeeper.property.clientPort to the correct port and everything
worked.
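For reference, a minimal sketch of the hbase-site.xml fragment involved; the
port value 2181 below is only the HBase default and is illustrative, not
necessarily the port your ZooKeeper quorum actually listens on:

```xml
<!-- hbase-site.xml sketch: 2181 is the HBase default, replace with your
     cluster's actual ZooKeeper client port -->
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```

The same property can also be set programmatically on the HBaseConfiguration
object before creating the table connection.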
Thank you TD,
I have worked around that problem and now the test compiles.
However, I don't actually see that test running: when I do mvn test, it
just says BUILD SUCCESS, without any test section on stdout.
Are we supposed to use mvn test to run the tests? Are there any other
methods we can use?
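In case it helps others hitting the same symptom: Maven's surefire plugin
only picks up JUnit-style tests by default, so ScalaTest suites can be
silently skipped and mvn test reports BUILD SUCCESS with nothing run. A
hedged pom.xml sketch wiring in scalatest-maven-plugin (the version number
is illustrative):

```xml
<!-- pom.xml sketch: make mvn test run ScalaTest suites;
     version shown is illustrative, check the current release -->
<plugin>
  <groupId>org.scalatest</groupId>
  <artifactId>scalatest-maven-plugin</artifactId>
  <version>1.0</version>
  <executions>
    <execution>
      <id>test</id>
      <goals>
        <goal>test</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```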
Hello Spark Users,
I have a Spark Streaming program that streams data from Kafka topics and
outputs it as Parquet files on HDFS.
Now I want to write a unit test for this program to make sure the output
data is correct (i.e. not missing any data from Kafka).
However, I have no idea about how to do this.
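One common approach, sketched below under assumptions not in the original
post (the record format and the name parseRecord are hypothetical): factor
the per-record transformation out of the streaming job into a pure function,
then unit-test that function directly, without needing Kafka or a running
StreamingContext.

```scala
// Hypothetical sketch: parseRecord and the "key,value" record format are
// assumptions, not from the original program. The streaming job would call
// parseRecord inside its DStream transformation; the unit test exercises it
// on plain strings.
def parseRecord(line: String): Option[(String, Long)] =
  line.split(",") match {
    case Array(key, value) =>
      // Reject records whose value field is not a number.
      scala.util.Try(value.trim.toLong).toOption.map(n => (key.trim, n))
    case _ => None
  }

// Plain assertions stand in for a ScalaTest suite here.
assert(parseRecord("user1, 42") == Some(("user1", 42L)))
assert(parseRecord("not-a-record") == None)
```

Checking end-to-end completeness (no data dropped from Kafka) is a separate
concern; one way is to compare counts of records consumed against records
written to HDFS over a bounded test input.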
This helps a lot!!
Thank you very much!
Jiajia
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Unit-Test-for-Spark-Streaming-tp11394p11396.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hi Cheng Hao,
Thank you very much for your reply.
Basically, the program runs on Spark 1.0.0 and Hive 0.12.0.
Some setup of the environment was done by running SPARK_HIVE=true sbt/sbt
assembly/assembly, including the resulting jar on all the workers, and
copying hive-site.xml to Spark's conf dir.
Hello Spark Users,
I am new to Spark SQL and am now trying to get the HiveFromSpark example
working first.
However, I got the following error when running HiveFromSpark.scala program.
May I get some help on this please?
ERROR MESSAGE:
org.apache.thrift.TApplicationException: Invalid method name:
Hi,
I am trying to write a program that takes input from Kafka topics, does
some processing, and writes the output to an HBase table.
I basically followed the MatricAggregatorHBase example TD created
(https://issues.apache.org/jira/browse/SPARK-944), but the problem is that I
always get