Hi everybody.
I’m totally new to Spark and there’s one thing I haven’t managed to find
out. I have a full Ambari install with HBase, Hadoop and Spark. My code
reads and writes to HDFS via HBase, so, as I understand it, all the stored
data sits in HDFS in HBase’s byte-oriented on-disk format. Now, I know it’s
possible to query HDFS directly from Spark, but I don’t know whether Spark
can make sense of the data that HBase has stored there.

I know that it’s possible to access HBase from Spark, but I want to query
HDFS directly.
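
To be concrete, by “access HBase from Spark” I mean going through the HBase
client layer, roughly like the sketch below (Scala, run from spark-shell;
the table name "my_table" is just a placeholder, and it assumes the
hbase-client and hbase-mapreduce jars are on the Spark classpath):

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat

// Point the input format at an HBase table (placeholder name).
val conf = HBaseConfiguration.create()
conf.set(TableInputFormat.INPUT_TABLE, "my_table")

// Read the table through the HBase client layer as an RDD of
// (row key, Result) pairs; sc is the spark-shell's SparkContext.
val rdd = sc.newAPIHadoopRDD(
  conf,
  classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result])
```

What I’d like instead is to skip that layer and read HBase’s files in HDFS
directly from Spark.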

Could you confirm whether this is possible and, if so, tell me how to do it?
Regards,




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/hbase-spark-hdfs-tp28661.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
