Hi, I am trying to integrate HBase, Solr, and Spark. Solr indexes all the documents from HBase through hbase-indexer, and through Spark I manipulate all the datasets. The thing is, after getting the SolrDocuments from the Solr query, each document already contains the rowkey and the row values, so I directly get the rowkeys and their corresponding values.
My question is: is it really necessary to scan the HBase table once again with a Get for each rowkey taken from the SolrDocuments? Example code:

```java
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrDocumentList;

HTable table = new HTable(conf, "");
List<Get> list = new ArrayList<Get>();

String url = " ";
SolrServer server = new HttpSolrServer(url);
SolrQuery query = new SolrQuery(" ");
query.setStart(0);
query.setRows(10);

QueryResponse response = server.query(query);
SolrDocumentList docs = response.getResults();

// Build one Get per rowkey returned by Solr
for (SolrDocument doc : docs) {
    Get get = new Get(Bytes.toBytes((String) doc.getFieldValue("rowkey")));
    list.add(get);
}

// *Is this really needed? Because it takes extra time to scan, right?*
Result[] res = table.get(list);
```

I got this piece of code from http://www.programering.com/a/MTM5kDMwATI.html. Please correct me if anything is wrong :)

Thanks,
Beesh
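For what it's worth, if the indexer stores the needed column values in Solr alongside the rowkey, the values can be read straight out of each SolrDocument, with no second HBase round trip. Below is a minimal sketch of that idea; the field name `rowvalue` is a hypothetical stored field, and plain `Map`s stand in for SolrDocuments so the sketch runs without a Solr server:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DirectReadSketch {
    // Pull rowkey -> rowvalue pairs directly out of the query results,
    // instead of issuing a multi-Get back to HBase. Each map plays the
    // role of one SolrDocument with stored fields "rowkey" and "rowvalue".
    static Map<String, String> extractValues(List<Map<String, Object>> docs) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map<String, Object> doc : docs) {
            out.put((String) doc.get("rowkey"), (String) doc.get("rowvalue"));
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("rowkey", "row1");
        doc.put("rowvalue", "value1");
        System.out.println(extractValues(List.of(doc)));
    }
}
```

The trade-off is that this only works for fields marked `stored="true"` in the Solr schema; if a field is indexed but not stored, the extra HBase Get is the only way to recover its value.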