Manjeet:
Have you looked at HDFS-10540 ?
Not sure if the distribution you use has the fix.
FYI
On Wed, Dec 14, 2016 at 9:34 PM, Manjeet Singh wrote:
Once I read somewhere that one should not run the HDFS balancer, otherwise it
will spoil the meta of HBase.
I am getting the below error when I add a new node:
Dec 15, 8:23:14.549 AM ERROR
org.apache.hadoop.hdfs.server.datanode.DiskBalancer
Disk Balancer is not enabled.
Can anyone suggest what I should do now?
Thanks
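For reference, the "Disk Balancer is not enabled" message usually means the HDFS intra-DataNode disk balancer feature (HDFS-1312) is switched off. A sketch of the hdfs-site.xml property that controls it, assuming a Hadoop release that ships the feature:

```xml
<!-- hdfs-site.xml: enable the intra-DataNode disk balancer (HDFS-1312) -->
<property>
  <name>dfs.disk.balancer.enabled</name>
  <value>true</value>
</property>
```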
Unfortunately the difference in available space per node won't be
considered by the HBase balancer, but it shouldn't matter since you are
adding more space. Worst case scenario is that you will start filling up
the other disks first before the one with the 6TB.
cheers,
esteban.
--
Cloudera, Inc
Question > What if I have uneven storage?
I have 6 HDDs of 300 GB each.
Now I want to add a single node with a 6 TB HDD, so what will be the impact,
and what things do I need to take care of?
Thanks
Manjeet
On Tue, Dec 6, 2016 at 11:14 PM, Esteban Gutierrez wrote:
Thanks Esteban for your reply :)
On 6 Dec 2016 23:14, "Esteban Gutierrez" wrote:
Hi Manjeet,
HBase will perform the balancing for you automatically as long as the HBase
balancer hasn't been disabled. You can check for the state of the balancer
using the hbase shell:
$ hbase shell
hbase(main):008:0> balancer_enabled
true
Assuming the balancer hasn't been turned off, that shou
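If `balancer_enabled` comes back false, the balancer can be switched back on and a run triggered from the same shell (a sketch; `balance_switch` prints the previous state):

```
hbase(main):009:0> balance_switch true
hbase(main):010:0> balancer
```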
Hi All,
Can anyone help me: if I want to add a new node of the same configuration to
the cluster, what steps do I need to perform to distribute the data via the
load balancer?
Thanks
Manjeet
On Mon, Jul 30, 2012 at 11:58 AM, Alex Baranau wrote:
Glad to hear that answers & suggestions helped you!
The format you are seeing is the output of
org.apache.hadoop.hbase.util.Bytes.toStringBinary(..) method [1]. As you
can see below, for "printable characters" it outputs the character itself,
while for "non-printable" characters it outputs data in
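A rough re-implementation sketch of that rule in plain Java (an approximation for illustration, not the actual HBase source): printable ASCII characters come through literally, everything else becomes a \xNN hex escape.

```java
// Sketch of what Bytes.toStringBinary roughly does; not the exact HBase code.
public class ToStringBinarySketch {
    static String toStringBinary(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte value : b) {
            int ch = value & 0xFF;
            // Printable ASCII (space..tilde, except backslash) is kept literal
            if (ch >= ' ' && ch <= '~' && ch != '\\') {
                sb.append((char) ch);
            } else {
                // Everything else is rendered as a \xNN hex escape
                sb.append(String.format("\\x%02X", ch));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toStringBinary(new byte[] {0, 'a', 'b'})); // \x00ab
    }
}
```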
On Fri, Jul 27, 2012 at 6:03 PM, Alex Baranau wrote:
I made the required changes and it seems to be load balancing pretty well. I
do have a follow up que
You can also do an online merge to merge the regions together and then
resplit it ... https://issues.apache.org/jira/browse/HBASE-1621
--S
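For reference, later HBase releases expose online region merging directly in the shell as `merge_region`; a sketch with placeholder encoded region names (the exact command and its availability depend on the HBase version):

```
hbase(main):001:0> merge_region 'ENCODED_NAME_OF_REGION_A', 'ENCODED_NAME_OF_REGION_B'
```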
On Sat, Jul 28, 2012 at 11:07 AM, Mohit Anchlia wrote:
On Fri, Jul 27, 2012 at 6:03 PM, Alex Baranau wrote:
Thanks for checking! I'll make the required changes to my split. Is it
possible to alter splits or o
Yeah, your row keys start with \x00 which is = (byte) 0. This is not the
same as "0" (which is = (byte) 48). You know what to fix now ;)
Alex Baranau
--
Sematext :: http://blog.sematext.com/ :: Hadoop - HBase - ElasticSearch -
Solr
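That byte-level difference is easy to check in plain Java (a minimal sketch; no HBase dependency needed, since Bytes.toBytes("0") just encodes the string's characters):

```java
import java.nio.charset.StandardCharsets;

public class ZeroBytesDemo {
    public static void main(String[] args) {
        // Bytes.toBytes("0") UTF-8-encodes the string: one byte with value 48
        byte[] asciiZero = "0".getBytes(StandardCharsets.UTF_8);
        // A literal zero byte, as seen in row keys starting with \x00
        byte[] nullByte = new byte[] {(byte) 0};
        System.out.println(asciiZero[0]); // 48
        System.out.println(nullByte[0]);  // 0
    }
}
```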
On Fri, Jul 27, 2012 at 8:43 PM, Mohit Anchlia wrote:
On Fri, Jul 27, 2012 at 4:51 PM, Alex Baranau wrote:
Can you scan your table and show one record?
I guess you might be confusing Bytes.toBytes("0") vs byte[] {(byte) 0} that
I mentioned in the other thread. I.e. looks like first region holds records
which key starts with any byte up to "0", which is (byte) 48. Hence, if you
set first byte of your ke
On Fri, Jul 27, 2012 at 11:48 AM, Alex Baranau wrote:
You can read metrics [0] from JMX directly [1] or use Ganglia [2] or other
third-party tools like [3] (I'm a little biased here;)).
[0] http://hbase.apache.org/book.html#hbase_metrics
[1] http://hbase.apache.org/metrics.html
[2] http://wiki.apache.org/hadoop/GangliaMetrics
[3] http://sematext.com/
Thank you so much for your valuable information. I have not yet used any
monitoring tool. Can you please suggest a good monitoring tool?
Syed Abdul kather
send from Samsung S3
On Jul 27, 2012 11:37 PM, "Alex Baranau" wrote:
-rwxr-xr-x 3 root root 1993369 2012-07-26 13:59
/hbase/SESSION_TIMELINE1/0a5f6fadd0435898c6f4cf11daa9895a/S_T_MTX/1566523617482885717
"1993369" is the size. Oh sorry, it is 2MB, not 2GB. Yeah, that doesn't
tell us much. Looks like all the data is in the Memstore. As I said, you
should try flushing the table
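The flush suggested above can be issued from the hbase shell; a sketch using the table name from the HDFS path in this thread:

```
$ hbase shell
hbase(main):001:0> flush 'SESSION_TIMELINE1'
```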
Alex Baranau,
Can you please tell how you found that
"0a5f6fadd0435898c6f4cf11daa9895a" has 2GB of data? I am quite interested to know.
Thanks and Regards,
S SYED ABDUL KATHER
On Fri, Jul 27, 2012 at 7:51 PM, Alex Baranau wrote:
From what you posted above, I guess one of the regions
(0a5f6fadd0435898c6f4cf11daa9895a,
note that it has 2 files of 2GB each [1], while the other regions are "empty")
is getting hit with writes. You may want to run the "flush 'mytable'" command
from the hbase shell before looking at HDFS - this way you make s
Hi,
by node, do you mean a RegionServer node?
If you are referring to a RegionServer node: you can go to the HBase master web
interface master:65510/master.jsp to see the load for each RegionServer. That's
the overall load. If you want to see load per node per table, you will need
to query the .META. table (c
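A sketch of such a .META. query from the shell, assuming a 0.94-era layout where region rows are keyed by table name ('mytable' here is a placeholder):

```
hbase(main):001:0> scan '.META.', {COLUMNS => 'info:server', STARTROW => 'mytable,'}
```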
Is there a way to see how much data each node has per HBase table?
On Thu, Jul 26, 2012 at 5:53 PM, syed kather wrote:
First check whether the data in HBase is consistent; check this by
running hbck (bin/hbase hbck). If all the regions are consistent,
then check the number of splits in localhost:60010 for the table mentioned.
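The two checks above as commands, assuming a default install layout (hbck output details vary by version):

```
$ bin/hbase hbck   # reports whether tables/regions are consistent
$ # then browse to http://localhost:60010/ and open the table page to count splits
```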
On Jul 27, 2012 4:02 AM, "Mohit Anchlia" wrote:
> I added new regions and the performance didn'