Is anyone else facing this problem of writing to HBase when
running Spark in YARN mode or standalone mode using this example?
If not, do I need to explicitly specify something in the classpath?
Regards,
Gaurav
On Wed, Jun 11, 2014 at 1:53 PM, Gaurav Dasgupta
wrote:
Hi Kanwaldeep,
I have tried your code but ran into a problem. The code works fine
in *local* mode, but if I run the same code in Spark standalone mode or
YARN mode, it keeps executing without saving anything in
the HBase table. I guess it is stopping data streaming once
Thanks Tobias for replying.
The problem was that I had to provide the dependency jars' paths to the
StreamingContext within the code. Providing all the jar paths resolved
my problem. Refer to the code snippet below:
*JavaStreamingContext ssc = new JavaStreamingContext(args[0],
"S
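For reference, a minimal sketch of passing dependency jar paths when constructing the context, assuming the Spark 0.9-era Java API; the application name, batch interval, and jar paths here are placeholders, not the original poster's values:

```java
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamingWithJars {
    public static void main(String[] args) {
        // Placeholder jar paths: replace with the actual dependency jars
        // (e.g. the HBase client jar) present on the driver machine.
        String[] jars = new String[] {
            "/path/to/hbase-client.jar",
            "/path/to/other-dependency.jar"
        };
        // args[0] is the master URL, as in the snippet above.
        JavaStreamingContext ssc = new JavaStreamingContext(
            args[0],                      // master (e.g. spark://host:7077)
            "StreamingToHBase",           // application name (placeholder)
            new Duration(2000),           // batch interval: 2 s (assumption)
            System.getenv("SPARK_HOME"),  // Spark installation on the driver
            jars);                        // jars shipped to the executors
        // ... set up the DStream and call ssc.start() here
    }
}
```

Passing the jars to the context ships them to the executors, which is why the job can work in local mode but silently do nothing on a cluster when they are missing.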
Hi,
I am unable to understand how to write data directly to an HBase table from a
Spark (pyspark) Python program. Is this possible in the current Spark
releases? If so, can someone provide an example code snippet to do this?
Thanks in advance.
Regards,
Gaurav
Hi Tathagata,
I am very new to Spark streaming and I have never used the pipe() function
yet.
I have written a Spark streaming program (JAVA API) which is receiving data
from Kafka and simply printing now.
*JavaStreamingContext ssc = new JavaStreamingContext(args[0],
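For context, a receive-from-Kafka-and-print pipeline like the one described might look roughly like this, assuming the Spark 0.9 `KafkaUtils` API; the ZooKeeper quorum, consumer group, and topic name are placeholders:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class KafkaPrint {
    public static void main(String[] args) {
        // args[0] is the master URL, as in the snippet above.
        JavaStreamingContext ssc = new JavaStreamingContext(
            args[0], "KafkaPrint", new Duration(2000));
        // Topic name and receiver thread count are assumptions.
        Map<String, Integer> topics = new HashMap<String, Integer>();
        topics.put("test-topic", 1);
        JavaPairDStream<String, String> messages =
            KafkaUtils.createStream(ssc, "zkhost:2181", "my-group", topics);
        messages.print(); // simply print the received (key, value) pairs
        ssc.start();
        ssc.awaitTermination();
    }
}
```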
A few more details I would like to provide (sorry, I should have provided
these with the previous post):
*- Spark Version = 0.9.1 (using pre-built spark-0.9.1-bin-hadoop2)
- Hadoop Version = 2.4.0 (Hortonworks)
- I am trying to execute a Spark Streaming program*
Because I am using Hortonworks Hadoo
Hi,
I am also encountering the same problem, with exactly the same console logs,
while running custom Spark programs on YARN. I have checked all the
information provided elsewhere and confirmed the same on my system:
*- Set HADOOP_CONF_DIR=/etc/hadoop/conf
- Set YARN_CONF_DIR=/etc/hadoop/conf
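The two settings above can be applied as shell exports, using the paths from the list:

```shell
# Point Spark at the cluster's Hadoop/YARN configuration
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf
```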