Re: Hbase Mapreduce API - Reduce to a file is not working properly.

2014-08-02 Thread Parkirat
(); } context.write(key, new IntWritable(sum)); } } = Regards, Parkirat Bagga. -- View this message in context: http://apache-hbase.679495.n3.nabble.com/Hbase-Mapreduce-API-Reduce-to-a-file-is-not-working-properly
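The snippet above shows only the tail of the reducer. A minimal sketch of a summing reducer with that shape, assuming Text keys and IntWritable counts (the class and type names here are illustrative, not taken from the original post):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Illustrative summing reducer: collapses all values for a key into one record.
public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable value : values) {
      sum += value.get();
    }
    // One output record per key, regardless of how many times the mapper emitted it.
    context.write(key, new IntWritable(sum));
  }
}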

Re: Hbase Mapreduce API - Reduce to a file is not working properly.

2014-08-01 Thread Parkirat
or 1 so 1 test 1 test 1 this 1 to 1 to 1 works 1 Regards, Parkirat Bagga -- View this message in context: http://apache-hbase.679495.n3.nabble.com/Hbase-Mapreduce-API-Reduce-to-a-file-is-not-working-properly
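(Sample job output quoted above: duplicate keys such as "test" and "to" still appear twice, which is what a reducer of the shape shown in the later reply would normally collapse into a single line per key.)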

Hbase Mapreduce API - Reduce to a file is not working properly.

2014-07-31 Thread Parkirat
and value as NullWritable, but it seems *the HBase MapReduce API doesn't consider the reducer* and outputs both the key and the value as text. Moreover, if the same key comes twice, it goes to the file twice, even though my reducer wants to log it only once. Could anybody help me with this problem? Regards, Parkirat Singh
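One plausible cause of the behaviour described above (duplicate keys in the output and the reducer apparently ignored) is that only the map side was wired up via TableMapReduceUtil and the reducer was never registered on the Job. A hedged driver sketch, assuming a table named "mytable", an illustrative row-counting mapper, and the SumReducer sketched in the reply above (none of these names come from the original post):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class HBaseToFileDriver {

  // Illustrative mapper: emits (row key, 1) for every row scanned from the table.
  static class RowCountMapper extends TableMapper<Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
        throws IOException, InterruptedException {
      context.write(new Text(Bytes.toString(row.get())), ONE);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "hbase-to-file");
    job.setJarByClass(HBaseToFileDriver.class);

    Scan scan = new Scan();
    scan.setCaching(500);        // fetch rows in larger batches per RPC
    scan.setCacheBlocks(false);  // MapReduce scans should not churn the block cache

    // Wires up only the map side: input table, scan, mapper and its output types.
    TableMapReduceUtil.initTableMapperJob(
        "mytable", scan, RowCountMapper.class, Text.class, IntWritable.class, job);

    // The reduce side must still be configured explicitly; if it is not, the raw
    // map output (including duplicate keys) goes straight into the output file.
    job.setReducerClass(SumReducer.class);   // the summing reducer sketched above
    job.setNumReduceTasks(1);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileOutputFormat.setOutputPath(job, new Path("/tmp/hbase-mr-output"));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}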

HBase Disabled Table Resource Consumption?

2014-06-27 Thread Parkirat
. Will it affect the HBase region server block cache or HBase write/read performance in any way? Or will it matter if regions from other tables increase over time and, with the regions of this disabled table added in, the count goes beyond 200 per region server? Regards, Parkirat Singh Bagga. -- View this message in context
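For context, a disabled table's regions are closed and unassigned, so they should not occupy block cache or memstore; only the store files remain on HDFS. The disabled state itself can be confirmed from the client API; a minimal sketch (the table name "mytable" is a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class DisabledTableCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      // The regions of a disabled table are closed and unassigned, so they hold
      // no memstore and serve no reads; the data simply stays in HDFS store files.
      System.out.println("mytable disabled: " + admin.isTableDisabled("mytable"));
    } finally {
      admin.close();
    }
  }
}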

Re: HBase Disabled Table Resource Consumption?

2014-06-27 Thread Parkirat
Thanks Ted, for a super fast reply... cheers..!! -- View this message in context: http://apache-hbase.679495.n3.nabble.com/HBase-Disabled-Table-Resource-Consumption-tp4060823p4060826.html Sent from the HBase User mailing list archive at Nabble.com.

Bulk Delete not deleting the data in storefile

2014-04-07 Thread Parkirat
should have been reduced. Even a major compaction has not reduced the size of the file. How can I reduce the size of the table? Regards, Parkirat Singh Bagga. -- View this message in context: http://apache-hbase.679495.n3.nabble.com/Bulk-Delete-not-deleting-the-data-in-storefile-tp4057937.html
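To verify whether the store files actually shrink after the delete, the on-disk size of the table directory can be checked directly on HDFS. A small sketch, assuming a pre-namespace layout where the table lives under /hbase/mytable (both the path layout and the table name are assumptions; newer versions keep tables under /hbase/data/<namespace>/<table>):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class TableSizeCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    // Assumed location of the table data; adjust to your hbase.rootdir and version.
    Path tableDir = new Path("/hbase/mytable");
    long bytes = fs.getContentSummary(tableDir).getLength();
    System.out.println("Store files under " + tableDir + ": " + bytes + " bytes");
  }
}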

Re: Bulk Delete not deleting the data in storefile

2014-04-07 Thread Parkirat
cleaned up during major compaction, or do I need to run it manually? Regards, Parkirat Singh Bagga. -- View this message in context: http://apache-hbase.679495.n3.nabble.com/Bulk-Delete-not-deleting-the-data-in-storefile-tp4057937p4057943.html Sent from the HBase User mailing list archive at Nabble.com.

Re: Bulk Delete not deleting the data in storefile

2014-04-07 Thread Parkirat
Hi, I got the solution to the problem. I was not flushing before running major_compact. After flushing before the major_compact, the size of the store file was reduced. Regards, Parkirat Singh Bagga -- View this message in context: http://apache-hbase.679495.n3.nabble.com/Bulk-Delete
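The same flush-then-major-compact sequence can also be issued from the Java client, which may be handy if the cleanup has to be scripted; a minimal sketch (the table name is a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class FlushThenCompact {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      // First persist the in-memory delete markers (memstore) into store files...
      admin.flush("mytable");
      // ...then rewrite the store files so deleted cells and markers are dropped.
      // Note that majorCompact only queues the work; it finishes asynchronously.
      admin.majorCompact("mytable");
    } finally {
      admin.close();
    }
  }
}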

HBase Rowkey Scan Taking more than 10 minutes.

2014-03-08 Thread Parkirat
=1392135639003, value=[] 15 row(s) in 880.4260 seconds == Regards, Parkirat -- View this message in context: http://apache-hbase.679495.n3.nabble.com/HBase-Rowkey-Scan-Taking-more-than-10-minutes-tp4056816.html Sent from the HBase User mailing

Re: HBase Rowkey Scan Taking more than 10 minutes.

2014-03-08 Thread Parkirat
Key, to get the results, as HBase stores the data lexicographically. But it seems it does not optimise there. Thanks for the reply. :) Regards, Parkirat -- View this message in context: http://apache-hbase.679495.n3.nabble.com/HBase-Rowkey-Scan-Taking-more-than-10-minutes-tp4056816p4056819
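The lexicographic ordering is exactly what makes a bounded scan fast: instead of filtering every row, the scan can be limited to the relevant row-key range so the region servers seek straight to it. A small sketch, assuming the keys of interest start with the prefix U_mykey (the table name and prefix are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixRangeScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");
    try {
      Scan scan = new Scan();
      // Rows are sorted lexicographically, so bounding the scan lets the region
      // servers seek directly to the range instead of reading the whole table.
      scan.setStartRow(Bytes.toBytes("U_mykey"));
      scan.setStopRow(Bytes.toBytes("U_mykez"));  // exclusive bound just past the prefix
      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result result : scanner) {
          System.out.println(Bytes.toString(result.getRow()));
        }
      } finally {
        scanner.close();
      }
    } finally {
      table.close();
    }
  }
}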

Re: HBase Rowkey Scan Taking more than 10 minutes.

2014-03-08 Thread Parkirat
=1393510696484, value=953xx5 U_mykey|AABrCwADAAABP/IQXolH.token column=datafile:value, timestamp=1392135639003, value=[] 15 row(s) in 0.0540 seconds = Regards, Parkirat -- View this message in context: http://apache