For question #1, which release(s) are you using / interested in ?
Cheers
On Fri, Dec 18, 2015 at 9:21 AM, Dominic KUMAR wrote:
> Hi HBase,
>
> Is there an End of Product Life Cycle date / release for HBase ? What is
> the roadmap for HBase ?
>
>
>
> Regards,
>
> Dominic Vivek SHANTHA KUMAR
>
>
es when the Table object gets
> created? Or is it right back when the connection is established?
>
> --
> Chris
>
>
>
> On 18 December 2015 at 17:18, Ted Yu wrote:
>
> > Have you polled Ranger community with this question ?
> >
> > http://ranger.apache.
Have you polled Ranger community with this question ?
http://ranger.apache.org/mail-lists.html
Cheers
On Fri, Dec 18, 2015 at 9:04 AM, Chris Gent <
chris.g...@bigdatapartnership.com> wrote:
> Hi,
>
> We have a webservice that performs reads/writes on HBase tables and have a
> requirement to aut
at sun.tools.jstack.JStack.main(JStack.java:106)
> Caused by: sun.jvm.hotspot.debugger.DebuggerException: cannot open binary file
> at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.attach0(Native Method)
> at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.access$100(LinuxDebuggerLocal.java:62)
> at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal$1AttachTask.doit(LinuxDebuggerLocal.java:269)
> at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal$LinuxDebuggerLocalWorkerThread.run(LinuxDebuggerLocal.java:138)
>
>
>
> On 17/12/2015 3:01 PM, Ted Yu wrote:
>
>> ps aux | grep aster
>>
>
>
Cheers
>
>
> On 17/12/2015 2:53 PM, Ted Yu wrote:
>
>> I noticed Phoenix config parameters. Are Phoenix jars in place ?
>>
>> Can you capture jstack of the master when this happens ?
>>
>> Cheers
>>
>> On Dec 16, 2015, at 7:46 PM, F21 wrote:
>
I noticed Phoenix config parameters. Are Phoenix jars in place ?
Can you capture jstack of the master when this happens ?
Cheers
> On Dec 16, 2015, at 7:46 PM, F21 wrote:
>
> Background:
>
> I am prototyping an HBase cluster using Docker. Docker is 1.9.1 and is running
> in an Ubuntu 15.10 64-
wrote:
> Thanks for your advice.
>
> For option three, I think major compaction on a large region will affect
> performance of the region server. So the downtime would be downtime for
> all the tables on that RS, am I right?
>
>
>
>
> On 12/16/15, 5:12 AM, "Te
w.r.t. option #1, also consider
http://hbase.apache.org/book.html#arch.bulk.load
FYI
On Tue, Dec 15, 2015 at 12:17 PM, Frank Luo wrote:
> I am in a very similar situation.
>
> I guess you can try one of the options.
>
> Option one: avoid online insert by preparing data off-line. Do something
>
Colin:
You may want to take a look at HDFS-8298 where the posted stack trace looks
similar to what you described.
Cheers
On Mon, Dec 14, 2015 at 5:17 PM, Colin Kincaid Williams
wrote:
> We had a namenode go down due to timeout with the hdfs ha qjm journal:
>
>
>
> 2015-12-09 04:10:42,723 WARN
>
the entire row, all column families, being wiped. Is that
> > expected?
> >
> > On Sun, Dec 13, 2015 at 6:30 PM, Ted Yu wrote:
> >
> > > The Maximum Number of Versions for a Column Family applies to the row.
> > > In your case, subsequent wri
overwrite just the affected cells or affect
> everything in the column family or even the entire row?
>
> Thanks,
>
> Mike
>
> On Sun, Dec 13, 2015 at 5:14 PM, Ted Yu wrote:
>
> > The put for q4, q5, q6 and q7 wouldn't overwrite existing rows.
> >
> > W
The put for q4, q5, q6 and q7 wouldn't overwrite existing rows.
When were the columns q1 to q3 written ?
What is the TTL for your table ?
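To illustrate the first point, here is a minimal sketch (table and qualifier
names are illustrative; assumes an existing Table handle named table): a put
that touches q4 leaves the existing q1 - q3 cells intact.

    Put put = new Put(Bytes.toBytes("row1"));
    put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q4"), Bytes.toBytes("v4"));
    table.put(put);
    // q1 - q3 are untouched: a Put only writes new versions of the cells
    // it names; it never deletes sibling qualifiers in the same row.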
Thanks
On Sun, Dec 13, 2015 at 12:36 PM, Mike Thomsen
wrote:
> I noticed that our test data set is suddenly missing a lot of data, and I
> am wondering if i
Interesting.
Which exact 0.98 release are you using ?
Can you inspect logs to see when the duplicate HFiles were introduced
(during one bulk load run or multiple bulk load runs) ?
bq. Will a compaction eventually take care of this?
I think so.
Thanks
On Wed, Dec 9, 2015 at 7:18 AM, Anthony Ng
bq. Would they eventually be taken care of during a compaction and
converted over?
Yes. Compaction would produce v3 HFiles.
On Mon, Dec 7, 2015 at 9:48 PM, Anthony Nguyen
wrote:
> Hi all,
>
> I believe I have successfully done a rolling upgrade to a small test
> cluster that I've stood up to te
I think you can.
See the following:
http://hbase.apache.org/book.html#_upgrade_paths
It is advisable to use 1.1.2 client so that you get the full feature set
from 1.1.2
Cheers
On Fri, Dec 4, 2015 at 9:36 PM, Li Li wrote:
> I want to set up a hbase cluster. I found the latest stable release is
Looks like the row key prefix has a fixed length (40 characters).
Please take a look at FuzzyRowFilter
Example can be found in:
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowFilter.java
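A minimal sketch of the idea (the row key layout and names are illustrative,
not from the original question): in the fuzzy mask, 0 means the byte must
match and 1 means the byte may be anything, so the 40-character prefix can be
skipped.

    import java.util.Arrays;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.hbase.util.Pair;

    // match rows whose first 40 bytes are arbitrary and whose suffix is "app1"
    byte[] key = new byte[44];
    System.arraycopy(Bytes.toBytes("app1"), 0, key, 40, 4);
    byte[] mask = new byte[44];
    Arrays.fill(mask, 0, 40, (byte) 1);  // first 40 bytes: don't care
    Arrays.fill(mask, 40, 44, (byte) 0); // last 4 bytes: must equal "app1"
    Scan scan = new Scan();
    scan.setFilter(new FuzzyRowFilter(Arrays.asList(new Pair<byte[], byte[]>(key, mask))));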
Cheers
On Fri, Dec 4, 2015 at 1:10 PM, Arun Patel wrote:
> I am storing multiple ap
the check.
>
> Best regards,
>
> On Fri, Dec 4, 2015 at 4:35 PM, Ted Yu wrote:
>
> > hasFamily() just checks the in-memory Map:
> >
> > public boolean hasFamily(final byte [] familyName) {
> >
> > return families.containsKey(familyName);
hasFamily() just checks the in-memory Map:

    public boolean hasFamily(final byte [] familyName) {
      return families.containsKey(familyName);
    }
bq. try to create it I will have an exception stating that the CF is
already existing.
In this case you can catch the exception and proceed, right ?
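For example, a minimal sketch (hypothetical table and family names; the exact
exception type may vary by release, so verify against your version):

    import org.apache.hadoop.hbase.*;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;

    TableName tn = TableName.valueOf("t1");
    byte[] family = Bytes.toBytes("cf");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      if (!admin.getTableDescriptor(tn).hasFamily(family)) {
        try {
          admin.addColumn(tn, new HColumnDescriptor(family));
        } catch (InvalidFamilyOperationException e) {
          // another client added the family between the check and the
          // create; the family exists now, so proceed
        }
      }
    }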
Cheers
Created HBASE-14928 and attached patch there.
FYI
On Thu, Dec 3, 2015 at 9:05 PM, Ted Yu wrote:
> Thanks for the response, Jerry.
>
> I created a patch:
>
> http://pastebin.com/xisGVHt8
>
> All REST tests passed.
>
> I know Ben logged a JIRA on this subject already
Have you looked at HBASE-6721 ?
> On Dec 4, 2015, at 12:08 AM, manohar mc wrote:
>
> Hi All,
> We are using HBase to store data for different customers. As part of the
> design, one of the key goals is to segregate each customer's data.
> I came across namespaces but it looks like namespace do
Thanks for the response, Jerry.
I created a patch:
http://pastebin.com/xisGVHt8
All REST tests passed.
I know Ben logged a JIRA on this subject already.
Not sure if that should be re-opened or a new JIRA should be created.
Once we have an open JIRA, I will attach my patch there.
Cheers
On T
> I found in my practice, it is always needed.
>
> 2015-12-04 4:48 GMT+08:00 Ted Yu :
>
> > There is get_splits command but it only shows the splits.
> >
> > status 'detailed' would show you enough information
> > e.g.
> >
> > "t1,30
There is a get_splits command but it only shows the splits.
status 'detailed' would show you enough information
e.g.
"t1,30,1449175546660.da5f3853f6e59d1ada0a8554f12885ab."
numberOfStores=1, numberOfStorefiles=0,
storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0,
sto
at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5475)
> ... 10 more
> Caused by: java.lang.ClassNotFoundException: Class
> org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion not found
> at org.apache.hadoop.conf.Configu
Do you mind pastebinning a snippet of the region server log from the server
where the region stuck in transition was hosted ?
This would give us some clue.
Cheers
On Wed, Dec 2, 2015 at 12:14 PM, Amanda Moran
wrote:
> Hi there All-
>
> I apologize if this issue has been raised before... I have done a lot of
> googling o
bq. current MR implementation may OOME if there are too many columns
This is related:
HBASE-14696 Support setting allowPartialResults in mapreduce Mappers
but it is not in any hbase release yet.
FYI
On Tue, Dec 1, 2015 at 7:16 AM, Jean-Marc Spaggiari wrote:
> I can not say if you are crazy or n
Have you read http://hbase.apache.org/book.html#rowkey.design ?
bq. we can store more than one row for a row-key value.
Can you clarify your intention / use case ? If row key is the same, key
values would be in the same row.
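In other words (a minimal sketch; the names are illustrative and a Table
handle named table is assumed), two puts that share a row key merge into a
single logical row:

    Put p1 = new Put(Bytes.toBytes("user1"));
    p1.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Raj"));
    Put p2 = new Put(Bytes.toBytes("user1"));
    p2.addColumn(Bytes.toBytes("f"), Bytes.toBytes("city"), Bytes.toBytes("Chennai"));
    table.put(p1);
    table.put(p2);
    // a Get of "user1" now returns one row holding both f:name and f:city;
    // there is no second physical row for the duplicate key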
On Mon, Nov 30, 2015 at 8:30 AM, Rajeshkumar J
wrote:
> Hi,
>
> I a
bq. duplicate data to two different tables, one with (salt-productId-timestamp)
and other with (salt-productId-place) keys
I suggest thinking twice about the above schema. It may become tricky to keep
the data in the two tables in sync.
Meaning, when the update to table1 succeeds but the update to table2 fails,
others?
>
> On Thu, Nov 26, 2015 at 8:32 PM, Ted Yu wrote:
>
> > Excerpt from hbase-shell//src/main/ruby/shell/commands/major_compact.rb :
> >
> > Examples:
> > Compact all regions in a table:
> > hbase> major_compact 't1
Excerpt from hbase-shell//src/main/ruby/shell/commands/major_compact.rb :
Examples:
Compact all regions in a table:
hbase> major_compact 't1'
Cheers
On Wed, Nov 25, 2015 at 10:00 PM, Rajeshkumar J wrote:
> Hi Ted Yu,
>
> No I have n
Please take a look at:
http://hbase.apache.org/book.html#_endpoint_example
The Endpoint Coprocessor runs server-side, so it should be very efficient.
Cheers
On Wed, Nov 25, 2015 at 6:03 AM, Arul wrote:
> Hi,
>
> I am new to Hbase and doing an POC. We have a detail table in which rows
> are
> c
After loading the data, have you major compacted the table ?
You can include STARTROW, STOPROW and TIMERANGE for your scan to narrow the
scope.
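A minimal sketch of a narrowed scan (the row range and timestamps are
illustrative; a Table handle named table is assumed):

    Scan scan = new Scan();
    scan.setStartRow(Bytes.toBytes("row-000100"));     // inclusive
    scan.setStopRow(Bytes.toBytes("row-000200"));      // exclusive
    scan.setTimeRange(1448000000000L, 1448100000000L); // [min, max)
    try (ResultScanner scanner = table.getScanner(scan)) {
      for (Result r : scanner) {
        // process only rows inside the key range and time window
      }
    }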
FYI
On Wed, Nov 25, 2015 at 2:36 AM, Rajeshkumar J
wrote:
> Hi,
>
>
> I am new to Apache Hbase and I am using hbase-0.98.13 and I have created a
> tab
Can you trace this region through master / region server log to see if there is
some clue ?
Cheers
> On Nov 21, 2015, at 2:56 AM, Pankaj kr wrote:
>
> Hi Folks,
>
> We met a very weird scenario.
> We are running PE tool, during testing we found all regions are in transition
> in state FAILED
regionservers will reject client connection requests if there is an RPC
>>> version mismatch.
>>>
>>> 1.x and 0.98 client and servers have been tested to be rolling upgrade
>>> compatible (meaning that older clients can work with newer server ver
ws up the
> error) and HMaster machines in this particular case are not time-synced. I
> notice a day's gap but I assume that NTP time-sync is only a requirement
> for Hbase master/ region servers and not also for their clients.
>
> Thanks,
> Sumit
>
>
> > >
> > > > Hi
> > > >
> > > > I have already replaced the hbase version with "hbase95.version=1.1.2"
> > > > in libraries.properties file and compiled it, but I am getting the same
development is the use of the wrong client; I am thinking about
> how to avoid that. For example, even if we upgrade to 1.0, they may use a
> 2.0 client.
>
>
>
>
> ------ Original message ------
> From: "Ted Yu"
> Sent: Wednesday, November 18, 2015, 7:37 PM
> To:
See http://hbase.apache.org/book.html#hbase.rolling.upgrade
For example, the section "Rolling upgrade from 0.98.x to HBase 1.0.0" states
that it is possible to do a rolling upgrade between hbase-0.98.x and hbase-1.0.0.
Cheers
On Wed, Nov 18, 2015 at 12:22 AM, 聪聪 <175998...@qq.com> wrote:
> We recently
any advice on whether I can somehow avoid it in the first place?
>
> Thanks,
> Sumit
>
> --
> From: Ted Yu
> To: Sumit Nigam
> Cc: "user@hbase.apache.org"
> Sent: Sunday, November 15, 2015 3:34 PM
> Subject: Re: About exceptions
Caller.java:114)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:833)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:810)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:842)
> at com.thinkaurelius.titan.diskstorage.hbase.HBaseKeyColumnValueStore.getHelper(HBaseKeyColumnVal
bq. TableNotEnabledException / TableNotFoundException / IOException
Can you show log snippets where these exceptions occurred ?
Which release of hbase are you using ?
Have you run hbck to repair the inconsistencies ?
See http://hbase.apache.org/book.html#hbck.in.depth
Cheers
On Sat, Nov 14, 2015 at
As far as I can tell, the hybrid approach is amenable to better exception
handling on the client side.
Cheers
On Fri, Nov 13, 2015 at 8:24 PM, Andrew Mains
wrote:
> Hi all,
>
> I'm developing an endpoint coprocessor at the moment for one of our
> applications, and I'm trying to figure out the best
Can you show the code snippet which retrieves the binary column and saves
it to a file ?
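For reference, a minimal sketch of the kind of retrieval code in question
(hypothetical table, column, and path names; a Table handle named table and
a String fileId are assumed):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    Get get = new Get(Bytes.toBytes(fileId));
    get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("binary"));
    Result result = table.get(get);
    byte[] data = result.getValue(Bytes.toBytes("f"), Bytes.toBytes("binary"));
    Files.write(Paths.get("/tmp/" + fileId), data); // write the bytes verbatim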
Cheers
On Sat, Nov 14, 2015 at 12:53 AM, hbaseuser wrote:
> I have an HBase table with two columns: id and binary (for storing
> files).
>
> I have a HTML page , with links to files , when each link l
ing my lease to other nodes rather quickly
> logs here
> http://pastebin.com/4EgsXCDd
>
>
> On Wed, Nov 11, 2015 at 6:20 AM, Ted Yu wrote:
>
>> Please note that IncreasingToUpperBoundRegionSplitPolicy was in effect.
>>
>> See http://hbase.apache.org/book.html#_
Please note that IncreasingToUpperBoundRegionSplitPolicy was in effect.
See http://hbase.apache.org/book.html#_custom_split_policies for related
information (there is link to
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/IncreasingToUpperBoundRegionSplitPolicy.html
).
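For reference, a sketch of the 1.x default logic (assuming initialSize
defaults to 2 * the memstore flush size; see the javadoc above for the
authoritative version):

    // threshold at which a region of this table is considered for split
    long getSizeToCheck(int tableRegionsCount, long flushSize, long maxFileSize) {
      long initialSize = 2 * flushSize;
      return (tableRegionsCount == 0 || tableRegionsCount > 100)
          ? maxFileSize
          : Math.min(maxFileSize,
              initialSize * tableRegionsCount * tableRegionsCount * tableRegionsCount);
    }
    // e.g. flushSize=128MB, maxFileSize=10GB:
    // 1 region -> 256MB, 2 -> 2GB, 3 -> 6.75GB, 4+ -> capped at 10GB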
In
bq. The region exits are forcing the data to start transferring to other
regions,
Can you elaborate a bit more on the above ? Did you mean some regions were
transferred to other region server(s) ?
Can you determine the effective RegionSplitPolicy ?
Snippets of server logs would help illustrate
daemons
> I’ve checked hbck and the state is consistent. Is it a valid way of
> upgrading HBase? I know there is downtime, but I can tolerate this.
>
> Thank you.
>
>
> > On 09 Nov 2015, at 15:23, Ted Yu wrote:
> >
> > The description covers 0.98.x to 1.1.
but does
> it apply for 0.98.x to 1.1.x as well?
> Moreover, I found that only rolling upgrade is discussed; what about
> upgrading with the cluster turned off?
> Thank you.
>
>
>> On 09 Nov 2015, at 09:38, Ted Yu wrote:
>>
>> Please take a look at
Please take a look at
http://hbase.apache.org/book.html#_upgrade_paths
> On Nov 8, 2015, at 11:47 PM, Akmal Abbasov wrote:
>
> Hi all,
> I’m planning to upgrade my HBase. Currently it’s hbase-0.98.7-hadoop2. I was
> wondering whether I can upgrade directly to the current stable version, which is 1.1.2?
> Mo
Looks like there is more to be done to make the build against hbase 1.x
succeed.
See PIG-4728
On Wed, Nov 4, 2015 at 9:59 AM, Daniel Dai wrote:
> Will need to change ivy/libraries.properties, specify the right hbase
> version and compile again.
>
> On Wed, Nov 4, 2015 at 6:31 AM, T
hbase.client.Scan.setCacheBlocks(Z)V
> at org.apache.pig.backend.hadoop.hbase.HBaseStorage.initScan(HBaseStorage.java:405)
> at org.apache.pig.backend.hadoop.hbase.HBaseStorage.<init>(HBaseStorage.java:346)
> at org.apache.pig.backend.hadoop.hbase.HBaseStorage.<init>(HBas
Naresh:
Can you pastebin the full error ?
It should be in pig_.log
Cheers
> On Nov 3, 2015, at 9:07 PM, Naresh Reddy
> wrote:
>
> Hi
>
> I am getting the below error while loading bulk data from pig to hbase
> through HBaseStorage. Please help me to resolve this issue. Thanks in advance.
>
Have you set the following config ?
hbase.master.keytab.file
hbase.master.kerberos.principal
hbase.regionserver.keytab.file
hbase.regionserver.kerberos.principal
Refer to http://hbase.apache.org/book.html for their meaning / sample value.
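For example, a minimal hbase-site.xml sketch (the principal and keytab
values are illustrative):

    <property>
      <name>hbase.master.kerberos.principal</name>
      <value>hbase/_HOST@EXAMPLE.COM</value>
    </property>
    <property>
      <name>hbase.master.keytab.file</name>
      <value>/etc/security/keytabs/hbase.service.keytab</value>
    </property>
    <property>
      <name>hbase.regionserver.kerberos.principal</name>
      <value>hbase/_HOST@EXAMPLE.COM</value>
    </property>
    <property>
      <name>hbase.regionserver.keytab.file</name>
      <value>/etc/security/keytabs/hbase.service.keytab</value>
    </property>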
Please show the stack trace of the exception you got.
Cheers
Can you give a bit more detail:
release of hbase you're building
OS
version of Java
snippet of error
On Mon, Nov 2, 2015 at 9:48 AM, Marimuthu wrote:
> I am getting error like "Can not find symbol" for most part in the code.
> Can you suggest whether I am missing any part of the packages or lib
e a question here: so far we don't provide code for HBase 1.0, as
> we see it has some gaps with 1.1. From your perspective, do you see many
> HBase 1.0 deployments that Kylin needs to support as well, or is directly
> supporting 1.1 good enough?
>
> Thank you!
>
> 2015-10-27 10:54
You will need to implement a custom filter.
Cheers
On Fri, Oct 30, 2015 at 8:35 AM, Eric Owhadi wrote:
> Hi Hbasers,
>
> I am investigating improving predicate pushdown for Trafodion. Looking at
> the various filters available, I am not seeing one that would behave like
> the SingleColumnValueF
Which release of hbase are you using ?
Can you pastebin master log snippet related to this table ?
Thanks
On Thu, Oct 29, 2015 at 11:32 AM, Sam William wrote:
> Hi,
> I have been trying to clean up of cluster clearing out a lot of unused
> tables. There is this one table that had grown too
Please take a look at:
https://issues.apache.org/jira/browse/HBASE-8751
On Thu, Oct 29, 2015 at 11:33 AM, anil gupta wrote:
> Hi,
>
> We have a requirement in which we want to replicate only one CF of a table
> whereas that table has 2 CF.
>
> I believe it's possible because replication_scope is
ache.hadoop.hbase.ipc.RpcClient$Connection.readResponse(RpcClient.java:1101)
> ~[hbase-client-0.96.1.1-hadoop2.jar:0.96.1.1-hadoop2]
>
> at
> org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:721)
> ~[hbase-client-0.96.1.1-hadoop2.jar:0.96.1.1-hadoop2]
>
>
r.doRead(RpcServer.java:770)
> at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:563)
> at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:538)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Thread
ee anything unusual there.
>>> Is there something specific I should look for? Things owned by hbase user
>>> or hdfs or yarn? Hm, here, I don't really see anything interesting
>>
>>
>>
>>> Thanks,
>>> Otis
>>> --
>>> Mon
Currently such parameters are only read from conf
FYI
On Wed, Oct 28, 2015 at 8:42 AM, Nicolae Marasoiu <
nicolae.maras...@adswizz.com> wrote:
> Hi,
>
>
> To pass some TableInputFormat params to hbase Export command, we either
> need to edit/duplicate an existing hbase-site.conf and give its loc
up if we want to run the
> command “hbase org.apache.hadoop.hbase.mapreduce.Import”
>
>
> thanks again, best wishes.
>
>
>
>
> > On Oct 27, 2015, at 11:42 AM, Ted Yu wrote:
> >
> > Please note that the scheme was hdfs.
> >
> > Normally htrace-core-3.1.0-
ion to false.
>
>
>
> > On Oct 27, 2015, at 11:13 AM, Ted Yu wrote:
> >
> > Can you give us a bit more information:
> >
> > Is the cluster secure ?
> > Have you checked permission for hdfs://mgfscluster/home/
> > hadoop/hbase-1.1.2/lib/htrace-core-3.
Can you give us a bit more information:
Is the cluster secure ?
Have you checked permission for
hdfs://mgfscluster/home/hadoop/hbase-1.1.2/lib/htrace-core-3.1.0-incubating.jar
(accessible by the user running Import) ?
Cheers
On Mon, Oct 26, 2015 at 7:58 PM, panghaoyuan wrote:
> hi,all
>
> our
When I use 1.1.2 for hbase version, I got the following:
http://pastebin.com/fY3mnz9L
Any plan to support 1.1 release in the future ?
Thanks
On Mon, Oct 26, 2015 at 7:45 PM, ShaoFeng Shi
wrote:
> The Apache Kylin team is pleased to announce the immediate availability
> of the 1.1 release. The
192.168.39.22:60292: output error
> 2015-10-23 17:49:45,513 WARN [RpcServer.handler=6,port=6] ipc.RpcServer:
> RpcServer.respondercallId: 130945 service: MasterService methodName:
> ListTableDescriptorsByNamespace size: 48 connection: 192.168.39.22:60286:
> output error
> 2015-10-23
Which specific release of 0.98 are you using ?
Have you used lsof to see which files were being held onto ?
Thanks
On Fri, Oct 23, 2015 at 7:21 PM, Otis Gospodnetić <
otis.gospodne...@gmail.com> wrote:
> Hello,
>
> Is/was there a known issue with HBase 0.98 "holding onto" files?
>
> We noticed
Were other region servers functioning normally around 17:33 ?
Which hbase release are you using ?
Can you pastebin more of the region server log ?
Thanks
On Fri, Oct 23, 2015 at 8:28 AM, 聪聪 <175998...@qq.com> wrote:
> hi,all:
>
>
> This afternoon, the whole HBase cluster suddenly became unable to r
Can you give some detail on why the 3 children's names need to be in the same
cell (instead of under different columns) ?
I assume the combination of children names varies. If you want to query
data for specific child (e.g. Child1Name), you may read unnecessary data
which is discarded after parsing.
Cheers
The hbase-spark module is currently only in the master branch.
The feature can be backported to branch-1 in the future.
Cheers
> On Oct 20, 2015, at 11:26 PM, Amit Hora wrote:
>
> I included Cloudera's Spark on HBase and after resolving dependencies it is
> working fine
>
> I believe hbase-spar
yjo...@cloudera.com> wrote:
>
> > My fault, working on it. Sorry about that!
> >
> > On Wed, Oct 21, 2015 at 12:50 AM, Ted Yu wrote:
> >
> >> Hi,
> >> I couldn't access the following URL (404):
> >> http://hbase.apache.org/book.html
> >&
Hi,
I couldn't access the following URL (404):
http://hbase.apache.org/book.html
The above is linked from http://hbase.apache.org
Where can I find the refguide ?
I can access http://hbase.apache.org/apache_hbase_reference_guide.pdf BTW
Thanks
It is feasible to launch multiple region servers on the same node.
You should set hbase.regionserver.info.port.auto to true so that the
instances don't collide on the same port.
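A minimal hbase-site.xml sketch for the co-located instances:

    <property>
      <name>hbase.regionserver.info.port.auto</name>
      <value>true</value>
    </property>

Each extra instance also needs a distinct hbase.regionserver.port. Recent
releases ship bin/local-regionservers.sh for starting several region servers
on one box, though you should verify that against the release you run.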
FYI
On Mon, Oct 19, 2015 at 3:16 PM, rahul malviya
wrote:
> Hi,
>
> Is there a way to start multiple region servers o
RS were on Kafka brokers with two disks shared between
> Kafka logs and HDFS. HBase 1.1.1. The new RS servers are 3 disks each, with
> RS, DN and NodeManagers on them.
> On Oct 17, 2015 11:36 AM, "Ted Yu" wrote:
>
> bq. once I added more regionservers
>
> Were any of
bq. once I added more regionservers
Were any of the new regionservers on the Kafka broker nodes ?
Which release of hbase are you using ? Looks like 1.x since the log was
added by HBASE-11240
Thanks
On Sat, Oct 17, 2015 at 8:22 AM, Artem Ervits wrote:
> Hello all, trying to address a sudden ch
Please read http://hbase.apache.org/book.html#_hotspotting , if you haven't.
Cheers
On Thu, Oct 15, 2015 at 3:15 PM, Ted Yu wrote:
> Here're a few metrics (per server) to consider for finding hot spot:
>
> read request count
> write request count
> compaction q
Here're a few metrics (per server) to consider for finding hot spot:
read request count
write request count
compaction queue size
memstore size
Cheers
On Thu, Oct 15, 2015 at 2:51 PM, Gevorg Hari wrote:
> Hello,
>
> I'm afraid that my cluster is suffering of a bit of hotspotting, what's the
>
230fa68> (a java.lang.Object)
> at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:138)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadP
Can you give a bit more detail on why block eviction was the cause of the slow
region movement?
Did you happen to take stack traces ?
Thanks
> On Oct 15, 2015, at 10:32 AM, Randy Fox wrote:
>
> Hi,
>
> We just upgraded from 0.94 to 1.0.0 and have noticed that region moves are
> super slow (ord
See recent thread: http://search-hadoop.com/m/YGbbQfg0W1Onv5j
On Thu, Oct 15, 2015 at 3:42 AM, whodarewin2006
wrote:
> sorry, the subject is wrong, we want to transfer data from hbase0.98.6 to
> hbase 1.0.1.1
>
>
>
>
>
>
>
>
> At 2015-10-15 18:34:17, "whodarewin2006" wrote:
> >hi,
> >We upgr
Looks like you are using the per-cell TTL feature.
Which hbase release are you using ?
Can you formulate your description as either a sequence of shell commands
or a unit test ?
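For reference, a minimal sketch of the per-cell TTL API (names are
illustrative; a Table handle named table is assumed, and Mutation#setTTL
takes milliseconds):

    Put p = new Put(Bytes.toBytes("r1"));
    p.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    p.setTTL(5000L); // this cell expires 5 seconds after its timestamp
    table.put(p);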
Thanks
On Tue, Oct 13, 2015 at 8:13 PM, Colak, Emre
wrote:
> Hi,
>
> I have an HBase table with the following descripti
y deploying 1.1.2 next.
>
> Thanks
> Suresh
>
>
>
> On Mon, Oct 12, 2015 at 3:46 PM, Ted Yu wrote:
>
> > bq. cluster enabled for secure HBase with kerberos
> >
> > I assume your hdfs cluster has also been kerberized.
> >
> > Please pasteb
> hbase.coprocessor.master.classes
> org.apache.hadoop.hbase.security.visibility.VisibilityController,org.apache.hadoop.hbase.security.access.AccessController
>
> hbase.coprocessor.region.classes
> org.apache.hadoop.hbase.security.visibility.V
Have you checked master to see if region assignment went okay ?
Cheers
> On Oct 12, 2015, at 7:56 AM, Jurian Broertjes
> wrote:
>
> Hi all,
>
> I'm using hbase (1.1.2) with phoenix (4.5.2-HBase-1.1) and had some (minor)
> HDFS issues. The HDFS issues are resolved and when I try to bring HBa
Please take a look at:
http://hbase.apache.org/book.html#_compaction
http://hbase.apache.org/book.html#exploringcompaction.policy
http://hbase.apache.org/book.html#compaction.ratiobasedcompactionpolicy.algorithm
FYI
On Sat, Oct 10, 2015 at 6:53 PM, Liren Ding
wrote:
> Hi,
>
> I am trying to de
n 0.0160 seconds
>
> hbase(main):006:0> scan 'visibilityTest', {RAW=>TRUE}
> ROW COLUMN+CELL
> r1 column=f1:, timestamp=1444530064056, type=DeleteFamily
> r1 column=f1:c1, timesta
I tried the sequence of commands from your example on a secure 1.1.2
cluster with the following config:
hbase.coprocessor.master.classes =
org.apache.hadoop.hbase.security.access.AccessController,org.apache.hadoop.hbase.security.visibility.VisibilityController
hbase.copr
The exception was due to un-protobuf'ed data in the peer state znode.
Which release of hbase are you using ?
Consider posting the question on ngdata forum.
Cheers
> On Oct 10, 2015, at 3:24 AM, beeshma r wrote:
>
> Hi
>
> i created Solr index using *HBase-indexer(NGDATA/hbase-indexer*)
>
qiang0...@163.com"
> wrote:
>
> the exception the client gets will sometimes be MasterNotRunningException
> and the master will print log:
> responseTooSlow
>
>
>
> wangyongqiang0...@163.com
>
> From: Ted Yu
> Date: 2015-10-10 16:31
> To: user@hbase.apache.
Did this exception happen repeatedly or intermittently ?
Does your cluster run secure hbase ?
Cheers
On Sat, Oct 10, 2015 at 1:23 AM, wangyongqiang0...@163.com <
wangyongqiang0...@163.com> wrote:
> we use hbase 0.98.10, see the exception as follows:
>
>
> Caused by: java.lang.reflect.UndeclaredThr
I agree with Anil w.r.t. upgrade.
Nicolae:
If you can afford a few hours for the table to be offline, it seems
upgrading to 1.x first would be more beneficial.
On Fri, Oct 9, 2015 at 6:46 AM, Anil Gupta wrote:
> Hi Nicolas,
>
> For a table with 5k regions, it should not take more than 10 min fo
When triggering full GC, better to do it in rolling fashion so that region
servers don't incur pauses at the same time.
FYI
On Wed, Oct 7, 2015 at 8:24 PM, Vladimir Rodionov
wrote:
> Can you trigger full GC across the cluster? If slow queries will go down
> after that - you get the answer.
> GC s-t-w
it does not have
> that fix. Easy enough to put in, though I tried to build hbase and ran
> into many issues.
> Is there a doc somewhere on the prereqs for building hbase (1.0.0) in
> particular and the mvn flags?
>
> Thanks,
>
> Randy
>
>
>
> On 10/7/15, 1:34
bq. which seems of limited use to me
Agree.
Mind logging an improvement JIRA ?
Cheers
On Wed, Oct 7, 2015 at 12:00 PM, James Hartshorn
wrote:
> Since we started using the HBase bucket cache I've noticed the region
> server status pages not loading completely in Chrome, and causing
> significa
Have you seen the following code ?

    ThreadPoolExecutor pool =
        (selectNow && s.throttleCompaction(compaction.getRequest().getSize()))
            ? longCompactions : shortCompactions;

Looks like throttleCompaction() returned false.
Please see the throttleCompaction() method in RatioBasedCompactionPolicy.
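A sketch of that method as of the 1.x line (the throttle point comes from
hbase.regionserver.thread.compaction.throttle; verify against the release
in use):

    public boolean throttleCompaction(long compactionSize) {
      return compactionSize > comConf.getThrottlePoint();
    }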
it's flushing as soon as it
> reaches 1MB.
> INFO [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of
> ~1.53 MB/1604616, currentsize=2.75 MB/2880080 for region
> t2,,1444121020375.a117935c77004424afa1a9cf47b2d9f3. in 129ms,
> sequenceid=86681, compaction requested=true
>
bq. We use hbase: 0.94.2
0.94.2 was released 3 years ago. Any reason you haven't upgraded to a newer
release ?
Can you pastebin the output from 'describe "t1"' (without the single quotes) ?
Cheers
On Thu, Oct 1, 2015 at 1:17 AM, Hansi Klose wrote:
> Hi,
>
> I have the problem that we have key in
Mind giving us a bit more detail:
release of hbase you're using
the actual challenge you faced (with stack trace, log snippet, etc)
Cheers
On Thu, Oct 1, 2015 at 2:21 PM, Siva wrote:
> Hi Everyone,
>
> Has anyone used Qlik Sense reporting to read from HBase?
>
> We have some challenges reading