Hi,
I'm testing HBase to choose the right hardware configuration for a heavy-write
use case. I'm testing with YCSB.
The cluster consists of 2 masters and 5 regionservers (4 cores, 14 GB RAM,
4x512 GB SSD).
I've created a new table in HBase and presplit it into 50 regions. I'm running 3
clients
Hi,
I modified the default logging settings in HBase by editing
log4j.properties in the $HBASE_HOME/conf directory.
Here are the settings:
...
# Rolling File Appender properties
hbase.log.maxfilesize=256MB
hbase.log.maxbackupindex=15
# Rolling File Appender
Also, is there any information about the logging settings that is written to the log
file when an HBase daemon starts?
Thank you
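For context, those two properties are consumed by the Rolling File Appender definition further down in the stock log4j.properties. A sketch of how they fit together (appender wiring as in the default HBase conf file; exact layout may vary by version):

```properties
# Rolling File Appender properties
hbase.log.maxfilesize=256MB
hbase.log.maxbackupindex=15

# Rolling File Appender
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
log4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}
```

With these values, each log file rolls at 256 MB and up to 15 rolled files are kept before the oldest is deleted.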
On Jan 28, 2016, at 11:44 AM, Akmal Abbasov <akmal.abba...@icloud.com> wrote:
Yu <yuzhih...@gmail.com> wrote:
>
> The description covers 0.98.x to 1.1.y as well.
>
> Rolling upgrade is preferred because there is no downtime.
>
> Cheers
>
>> On Nov 9, 2015, at 2:53 AM, Akmal Abbasov <akmal.abba...@icloud.com> wrote:
>>
>> H
09:38, Ted Yu <yuzhih...@gmail.com> wrote:
>
> Please take a look at
> http://hbase.apache.org/book.html#_upgrade_paths
>
>> On Nov 8, 2015, at 11:47 PM, Akmal Abbasov <akmal.abba...@icloud.com> wrote:
>>
>> Hi all,
>> I’m planing to upgra
Hi all,
I’m planning to upgrade my HBase. Currently it’s hbase-0.98.7-hadoop2. I was
wondering whether I can upgrade directly to the current stable version, which is 1.1.2?
Moreover, I can tolerate downtime, so in this case the upgrade process will include:
a. Installation of the new version
b. Distribution of
>
>> Afaik, it should be done through ZooKeeper, but through which API would it
> be more convenient?
> no,no,no
> use the hdfs-site.xml configuration.
> You need to add configuration for the remote NN HA, and your local HDFS client
> will correctly resolve the active NN.
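For reference, the client-side hdfs-site.xml entries for a remote HA nameservice look roughly like this (the nameservice name `remotecluster` and the hostnames are made-up examples; the property names are the standard HDFS HA client settings):

```xml
<!-- Sketch: client-side config so the HDFS client resolves the active NN
     of a remote HA pair. Names and hosts below are illustrative. -->
<property>
  <name>dfs.nameservices</name>
  <value>remotecluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.remotecluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.remotecluster.nn1</name>
  <value>nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.remotecluster.nn2</name>
  <value>nn2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.remotecluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With this in place you can address the remote cluster as hdfs://remotecluster/... and failover to the active NN is handled for you.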
>
> 2015-09
Hi all,
I would like to know the best practice when exporting a snapshot to a remote
HBase cluster with an HA configuration.
My assumption is:
1. find out which of the HDFS namenodes is active
2. export the snapshot to the active namenode
Since I need to do this programmatically (and for a big number of snapshots),
what is the best way to know which namenode is active?
Thanks!
> On 15 Sep 2015, at 00:13, Akmal Abbasov <akmal.abba...@icloud.com> wrote:
>
> Yes, there are a lot of following messages
> 2015-09-14 22:03:14,930 WARN [FifoRpcScheduler.handler1-thread-7]
> ipc.RpcServer: RpcServer.respondercallId: 12
Hi all,
I’m having problems accessing the HBase WebUI on the active master node, while
I can still access the WebUI of the standby master.
I’ve checked the state of HBase using hbase hbck, and it is in a consistent
state.
I checked the node which holds the hbase:meta table; the snippet from its logs
was: null
Thanks.
> On 15 Sep 2015, at 00:04, Ted Yu <yuzhih...@gmail.com> wrote:
>
> Have you checked the log on 192.168.0.54 ?
>
> Cheers
>
> On Mon, Sep 14, 2015 at 3:02 PM, Akmal Abbasov <akmal.abba...@icloud.com>
> wrote:
>
>> Yes, hbase she
s what it could be?
Thanks.
> On 14 Sep 2015, at 15:09, Ted Yu <yuzhih...@gmail.com> wrote:
>
> Can you check master log for the period when you accessed master web UI ?
> Does hbase shell function properly ?
>
> Thanks
>
>
>
>> On Sep 14, 2015, at 4:36 AM, A
Hi,
I would like to know the pros and cons of small region sizes.
Currently I have a cluster with 5 nodes, which serves 5 tables, but there are ~80
regions per node, while the actual data (total size of all HStores) is ~50 GB.
Isn’t that overhead, since there is a table which is ~30 MB and has 96
The if is correct, my fault.
On 13 Jul 2015, at 14:58, Akmal Abbasov akmal.abba...@icloud.com wrote:
In applyCompactionPolicy method of ExploringCompactionPolicy
http://hbase.apache.org/xref/org/apache/hadoop/hbase/regionserver/compactions/ExploringCompactionPolicy.html
class:
return selection.size() > bestSelection.size()
    || (selection.size() == bestSelection.size() && size < bestSize);
}
which compares two selections. What I see here is that when mightBeStuck =
false, the selection with more files will
always be preferred.
Correct me if I am wrong.
-Vlad
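To illustrate the comparison the quoted return statement performs, here is a small Python sketch (a simplification; the real method also has a mightBeStuck branch that is skipped here):

```python
def is_better_selection(best_selection, best_size, selection, size):
    """Sketch of the mightBeStuck == False case of
    ExploringCompactionPolicy.isBetterSelection: a candidate selection
    wins if it has more files, or the same number of files but a
    smaller total size in bytes."""
    return (len(selection) > len(best_selection)
            or (len(selection) == len(best_selection) and size < best_size))

# A selection with more files wins even if it is larger in bytes:
print(is_better_selection([1, 2], 100, [1, 2, 3], 500))  # True
```

This matches the observation above: with mightBeStuck = false, file count is compared first, and total size only breaks ties.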
On Thu, May 28, 2015 at 8:00 AM, Akmal Abbasov
/compactions/ExploringCompactionPolicy.html#182
}
I am using the default setting, so ratio is 1.2
Is there something I am doing wrong here?
Thank you.
Akmal
On Jul 7, 2015, at 12:05 AM, Akmal Abbasov akmal.abba...@icloud.com wrote:
Have you run the following command in hbase shell ?
balance_switch true
I’ve tried, and this did the trick. Thank you.
One more thing that is not clear to me is what I can do with the ~4000 znodes in
/hadoop-ha/testhbase1
; will the system try to complete
all of these applications?
Thank you.
On 07 Jul 2015, at 00:16, Ted Yu yuzhih...@gmail.com wrote:
Have you run the following command in hbase shell ?
balance_switch true
Cheers
On Mon, Jul 6, 2015 at 12:16 PM, Akmal Abbasov akmal.abba...@icloud.com
something similar to the following ?
master.HMaster: Not running balancer because 1 region(s) in transition
You can search backwards for balancer / assignment related logs.
Cheers
On Mon, Jul 6, 2015 at 8:49 AM, Akmal Abbasov akmal.abba...@icloud.com
wrote:
What error(s) did you get when
recreated the rs with the disks of the
previous one, the cluster started working.
But now only 3 rs host the regions; the other 2 have 0 regions.
I’ve tried to start the balancer manually, but it returned false.
Any idea?
I am using hbase hbase-0.98.7-hadoop2.
Thank you.
Kind regards,
Akmal Abbasov
this time ? If there was region in
transition, balancer wouldn't balance.
Cheers
On Mon, Jul 6, 2015 at 8:29 AM, Akmal Abbasov akmal.abba...@icloud.com
wrote:
Hi all,
I have a strange behaviour in my HBase cluster. I have 5 rs and 2 masters.
One of the rs stopped working, restart didn’t
Hi, is there a metric to monitor HBase snapshotting?
Thank you.
:
700b34f5d2a3aa10804eff35906fd6d8 - meta: Initiating major compaction (all
files)
Cheers
On Tue, May 12, 2015 at 2:06 PM, Akmal Abbasov akmal.abba...@icloud.com
wrote:
Hi Ted,
Thank you for reply.
I am running with the default settings.
Sent from my iPhone
On 12 May 2015, at 22:02, Ted Yu yuzhih...@gmail.com
.
But as you can see from the logs, the files selected have: 4.7K, 5.1K, 3.8K and
10.8M.
Why is it including the 10.8M file?
Which setting should be tuned to avoid this?
Thank you.
Kind regards,
Akmal Abbasov
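The ratio test in question can be sketched as follows (a simplification of the exploring policy's size-ratio check, not the full selection algorithm; with the default ratio of 1.2, each file must be no larger than 1.2x the combined size of the other files in the candidate selection):

```python
def files_in_ratio(sizes, ratio=1.2):
    """Sketch of the size-ratio check: every file in the candidate
    selection must satisfy size <= ratio * (sum of the other files)."""
    total = sum(sizes)
    return all(s <= ratio * (total - s) for s in sizes)

kb, mb = 1024, 1024 * 1024
# The 10.8M file dwarfs the combined ~13.6K of the others, so the
# four-file selection fails the ratio test:
print(files_in_ratio([4.7 * kb, 5.1 * kb, 3.8 * kb, 10.8 * mb]))  # False
print(files_in_ratio([4.7 * kb, 5.1 * kb, 3.8 * kb]))             # True
```

So on the numbers quoted above, the 10.8M file should not pass this particular check; other selection paths (e.g. the stuck fallback) would have to explain its inclusion.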
On 28 May 2015, at 16:54, Ted Yu yuzhih...@gmail.com wrote:
bq. Completed major
/b8b6210d9e6d4ec2344238c6e9c17ddf
As I understood, these files were copied to the archive folder after compaction.
The part I didn’t understand is why the file with 638 K was also selected for
compaction.
Any ideas?
Thank you.
Kind regards,
Akmal Abbasov
Any suggestions?
Thank you.
Regards,
Akmal Abbasov
On Tue, May 12, 2015 at 9:52 AM, Akmal Abbasov akmal.abba...@icloud.com
wrote:
Hi,
I am using HBase 0.98.7.
I am using HBase snapshots to back up data. I create a snapshot of each table
every hour.
Each snapshot creation will cause a flush of the memstore and the
creation of hfiles.
When
in case I have memory
pressure (hbase.regionserver.global.memstore.lowerLimit),
and trigger periodic flushes every 24 hours.
Is this an approach that will work, or am I doing things wrong?
Are there any best practices for my case?
Thank you.
Kind regards,
Akmal Abbasov
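The two knobs mentioned above would be set in hbase-site.xml roughly like this (a sketch; `hbase.regionserver.optionalcacheflushinterval` is my assumption for the periodic-flush setting in this HBase line, and the lowerLimit value shown is just a typical default, so check the defaults of your version):

```xml
<!-- Sketch: flush tuning discussed above; values are illustrative. -->
<property>
  <name>hbase.regionserver.global.memstore.lowerLimit</name>
  <value>0.35</value>
</property>
<property>
  <!-- Maximum time between memstore flushes when there are no writes;
       86400000 ms = 24 h, per the plan described above. -->
  <name>hbase.regionserver.optionalcacheflushinterval</name>
  <value>86400000</value>
</property>
```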
On 08 May 2015, at 17:16, lars
and regionservers.
What could be a reason for this?
Thank you.
Regards,
Akmal Abbasov
Hi Sean,
Thank you for a quick reply.
I am not sure about next upgrade time for now.
So the only solution in that case would be to parse the output of the HBase
shell command?
Thank you.
Regards,
Akmal Abbasov
On 05 May 2015, at 15:25, Sean Busbey bus...@cloudera.com wrote:
Hi Akmal
of the HBase shell command in version
I am using?
Thank you.
Regards,
Akmal Abbasov
Hi David,
I have HDFS HA. I was supposing that maybe the namenode with the active HBase
master is not active. But no, both the namenode and the HBase master are active on hb1m.
So is it supposed to work that way, or is it a problem with my
configuration?
Thank you.
Regards,
Akmal Abbasov
On 04 May
Hi Ted,
I am using hadoop-2.5.1 and hbase-0.98.7-hadoop2.
The command for snapshot export is:
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot snappy -copy-to
hdfs://hb1m/hbase -overwrite
Thank you
Regards,
Akmal Abbasov
On 03 May 2015, at 16:57, Ted Yu yuzhih...@gmail.com wrote
)
at
org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:995)
But when I try with the standby HBase master, everything works.
Is this the expected behaviour?
Thank you.
Regards,
Akmal Abbasov
Hi Ted,
Yes, I can confirm that hb1m is active
I tested it using ./hbase-jruby get-active-master.rb which is in HBASE_DIR/bin/
And also I can see that hb1m is an active one from my dashboard.
Thank you.
Regards,
Akmal Abbasov
On 03 May 2015, at 17:17, Ted Yu yuzhih...@gmail.com wrote:
bq
Hi all,
Is it possible to specify the output HDFS directory for an HBase snapshot?
It would be very helpful for keeping snapshots sorted by date.
Thank you.
Regards,
Akmal Abbasov
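One way to get date-sorted snapshots: the ExportSnapshot tool's -copy-to flag (used elsewhere in this thread) takes an arbitrary HDFS URI, so you can export into a date-stamped directory. A sketch (snapshot name and paths are made-up; such an export is for archival, and the snapshot would need to be copied back under the HBase root dir before restoring):

```shell
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot my-snapshot \
  -copy-to hdfs://backupcluster/snapshots/2015-05-03
```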
Hi Matteo,
Thank you, for a quick reply.
In this case, after exporting a snapshot to the location I want on my local
machine,
can I delete the same snapshot from the default location,
/hbase/.hbase-snapshot ?
Thank you.
Regards,
Akmal Abbasov
On 30 Apr 2015, at 00:17, Matteo Bertozzi
Ok, thank you.
Regards,
Akmal Abbasov
On 30 Apr 2015, at 00:31, Matteo Bertozzi theo.berto...@gmail.com wrote:
you should not access/modify the /hbase directory directly.
but after export you can use the delete_snapshot command or
Admin.deleteSnapshot() API to delete it.
Matteo
Hi, I have 2 clusters running HBase, and I want to export a snapshot from
cluster A to cluster B.
When I run exportSnapshot I get java.io.FileNotFoundException,
because it searches for a jar file in HDFS, not in my local storage.
Any ideas how this could be solved?
Here is an
release are you using ?
Thanks
On Mar 17, 2015, at 4:38 AM, Akmal Abbasov akmal.abba...@icloud.com wrote:
Hi, I am new to Hadoop and HBase. I have an HBase cluster in one datacenter, and I
need to create a backup in a second one. Currently the second HBase cluster
is ready, and I would like to import data from the first cluster.
I would like to use the exportSnapshot tool for this; I’ve tried it on my test