    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
Regards,
Parkirat Bagga.
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/Hbase-Mapreduce-API-Reduce-to-a-file-is-not-working-properly
or 1
so 1
test 1
test 1
this 1
to 1
to 1
works 1
Regards,
Parkirat Bagga
and the value as NullWritable, but it seems the *HBase MapReduce API does
not honour my reducer*, and outputs both key and value as Text.
Moreover, if the same key comes twice, it goes to the file twice, even though
my reducer wants to log it only once.
Could anybody help me with this problem?
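A minimal driver sketch of the usual fix, assuming a word-count style job (the table name "mytable", the class names MyDriver/MyMapper/MyReducer, and the output path are placeholders, not taken from this thread): TableMapReduceUtil.initTableMapperJob() only wires up the mapper, so if job.setReducerClass() is never called the job runs map-only and the raw map output (key and value as Text) lands straight in the output file, which would also explain the duplicated keys.

```java
// Sketch only -- MyDriver/MyMapper/MyReducer, "mytable" and the output
// path are hypothetical; the Hadoop/HBase calls themselves are standard.
Configuration conf = HBaseConfiguration.create();
Job job = Job.getInstance(conf, "hbase-wordcount");
job.setJarByClass(MyDriver.class);

Scan scan = new Scan();
scan.setCaching(500);          // larger scanner caching for MR scans
scan.setCacheBlocks(false);    // don't pollute the region server block cache

// Wires the table as input and registers the mapper + its output types.
TableMapReduceUtil.initTableMapperJob("mytable", scan,
        MyMapper.class, Text.class, IntWritable.class, job);

// Without this line the job is map-only and the reducer never runs.
job.setReducerClass(MyReducer.class);
job.setNumReduceTasks(1);

job.setOutputFormatClass(TextOutputFormat.class);
FileOutputFormat.setOutputPath(job, new Path("/tmp/wordcount-out"));
job.waitForCompletion(true);
```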
Regards,
Parkirat Singh
Will it affect the HBase region server block cache or HBase write/read
performance in any way?
Or will it matter if regions from other tables increase over time, and the
regions from this disabled table add up to push the total beyond 200 per
region server?
Regards,
Parkirat Singh Bagga.
Thanks Ted, for a super fast reply... cheers..!!
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/HBase-Disabled-Table-Resource-Consumption-tp4060823p4060826.html
Sent from the HBase User mailing list archive at Nabble.com.
should have been reduced.
Even running a major compaction has not reduced the size of the file.
How can I reduce the size of the table?
Regards,
Parkirat Singh Bagga.
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/Bulk-Delete-not-deleting-the-data-in-storefile-tp4057937.html
cleaned up in a major compaction, or do I need to
run it manually?
Regards,
Parkirat Singh Bagga.
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/Bulk-Delete-not-deleting-the-data-in-storefile-tp4057937p4057943.html
Hi,
Got the solution to the problem.
I was not doing a flush before running the major_compact.
After doing a flush before the major_compact, the size of the store file got
reduced.
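For anyone who hits the same thing, the shell sequence is just the following (the table name 'mytable' is a placeholder). Deleted cells only disappear from disk once the memstore holding the delete markers is flushed and a major compaction rewrites the store files:

```
hbase> flush 'mytable'
hbase> major_compact 'mytable'
```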
Regards,
Parkirat Singh Bagga
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/Bulk-Delete
=1392135639003, value=[]
15 row(s) in 880.4260 seconds
Regards,
Parkirat
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/HBase-Rowkey-Scan-Taking-more-than-10-minutes-tp4056816.html
Key, to get
the results, as HBase stores the data lexicographically.
But it seems it does not optimise there.
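What usually makes a rowkey-prefix lookup fast is bounding the scan with an explicit start row and an exclusive stop row (the prefix with its last byte incremented), rather than relying on a filter that still touches every row. A small sketch of deriving the stop row — the prefix "U_mykey|" is taken from the scan output quoted in this thread, and the helper name is made up:

```java
import java.util.Arrays;

public class PrefixStop {
    // Derive an exclusive stop row from a rowkey prefix by incrementing
    // its last byte, so a scan covers exactly the keys with that prefix.
    // (Simplification: does not handle a trailing 0xFF byte.)
    static byte[] stopRowForPrefix(byte[] prefix) {
        byte[] stop = Arrays.copyOf(prefix, prefix.length);
        stop[stop.length - 1]++;
        return stop;
    }

    public static void main(String[] args) {
        // '|' (0x7C) increments to '}' (0x7D)
        System.out.println(new String(stopRowForPrefix("U_mykey|".getBytes())));
        // -> U_mykey}
    }
}
```

On the HBase side this would be scan.setStartRow(prefix) and scan.setStopRow(stopRowForPrefix(prefix)), which lets the region server seek directly to the prefix range instead of scanning the whole table.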
Thanks for the reply. :)
Regards,
Parkirat
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/HBase-Rowkey-Scan-Taking-more-than-10-minutes-tp4056816p4056819
=1393510696484, value=953xx5
U_mykey|AABrCwADAAABP/IQXolH.token
column=datafile:value, timestamp=1392135639003, value=[]
15 row(s) in 0.0540 seconds
Regards,
Parkirat
--
View this message in context:
http://apache