Hi everybody.

I'm totally new to Spark and there is one thing I haven't managed to find out.
I have a full Ambari install with HBase, Hadoop and Spark. My code reads and
writes to HDFS via HBase, so as I understand it, all the stored data sit in
HDFS in bytes format. Now, I know that it's possible to query HDFS directly
via Spark, but I don't know whether Spark will understand the format of the
data that HBase stored there.
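To illustrate what I mean by "bytes format": as far as I understand, HBase serializes cell values with its Bytes utility (big-endian integers, UTF-8 strings), so anything read back raw from HDFS is an opaque byte array that has to be decoded by hand. A minimal Python sketch of that assumption (the `encode_*`/`decode_*` helpers are mine, just mimicking what I believe `Bytes.toBytes`/`Bytes.toInt` do):

```python
import struct

# Assumption: HBase's Bytes.toBytes(int) writes a 4-byte big-endian integer,
# and Bytes.toBytes(String) writes plain UTF-8. Raw cell values pulled from
# HDFS therefore look like opaque bytes until decoded the same way.

def encode_int(value):
    """Mimic HBase Bytes.toBytes(int): 4-byte big-endian."""
    return struct.pack(">i", value)

def decode_int(raw):
    """Mimic HBase Bytes.toInt(byte[]): decode 4-byte big-endian."""
    return struct.unpack(">i", raw)[0]

def encode_str(value):
    """Mimic HBase Bytes.toBytes(String): UTF-8."""
    return value.encode("utf-8")

# A value written through HBase...
raw = encode_int(42)
print(raw)              # b'\x00\x00\x00*'
# ...comes back from HDFS as bytes that Spark cannot interpret on its own:
print(decode_int(raw))  # 42
print(encode_str("row1"))
```

So my worry is whether Spark, reading those files directly, can make sense of bytes like these without going through HBase.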

 

I know that it's possible to manage HBase from Spark, but I want to query
HDFS directly.

 

Thanks for confirming whether this is possible and for telling me how to do it.

Regards,

 

Mathieu FERLAY

R&D Engineer

GNUBILA/MAAT France
174, Imp. Pres d'en Bas
74370 Argonay (France)


www.gnubila.fr
mfer...@gnubila.fr



 

 






