Hey Mich,
Are you setting the column family / qualifier values in the config?
e.g.
config.set(TableInputFormat.SCAN_COLUMN_FAMILY, "cf")        // column family
config.set(TableInputFormat.SCAN_COLUMNS, "cf1:cq1 cf1:cq2") // column qualifiers
That way the scan is restricted to those columns, so you already have the results you need when you use newAPIHadoopRDD.
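For reference, a minimal sketch of the full setup (the table name "test_table", family "cf1", and qualifiers "cq1"/"cq2" are placeholders, and `sc` is assumed to be an existing SparkContext; this needs a running HBase cluster, so treat it as a sketch rather than a tested program):

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes

val conf = HBaseConfiguration.create()
conf.set(TableInputFormat.INPUT_TABLE, "test_table")         // placeholder table name
conf.set(TableInputFormat.SCAN_COLUMN_FAMILY, "cf1")         // restrict the scan to one family
conf.set(TableInputFormat.SCAN_COLUMNS, "cf1:cq1 cf1:cq2")   // restrict to specific qualifiers

val hBaseRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result])

// Within each Result, HBase stores cells sorted by family and then by
// qualifier in byte order, so cq1 is always returned before cq2
// regardless of insertion order. Looking columns up by name avoids
// depending on position at all:
val pairs = hBaseRDD.map { case (_, result) =>
  val cq1 = Bytes.toString(result.getValue(Bytes.toBytes("cf1"), Bytes.toBytes("cq1")))
  val cq2 = Bytes.toString(result.getValue(Bytes.toBytes("cf1"), Bytes.toBytes("cq2")))
  (cq1, cq2)
}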
Hi,
I have a routine in Spark that iterates through HBase rows and tries to
read columns.
My question is: how can I read the columns in the correct order?
example
val hBaseRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
  classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable],
  classOf[org.apache.hadoop.hbase.client.Result])