> at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
> at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
> ...
>
>
> I actually tried this before, but my conclusion was that all nodes that
> are running a YARN No
bq. hbase.zookeeper.property.server.7
I searched the 1.2 codebase but didn't find a config parameter in the above form.
http://hbase.apache.org/book.html didn't mention it either.
May I ask where you obtained such config ?
For hbase.zookeeper.quorum, do you have zookeeper running on the 12 nodes
Looks like there is no such support at the moment.
Logged HBASE-17523
FYI
On Tue, Jan 24, 2017 at 1:42 PM, jeff saremi wrote:
> We are enabling reader replicas. We're also using Thrift endpoints for our
> HBase. How could we enable Consistency.Timeline for Thrift
Currently there're a few tasks (such as HBASE-16179) in the pipeline for
hbase-spark module.
There is no hbase release with hbase-spark module yet.
FYI
On Tue, Jan 24, 2017 at 1:17 PM, Chetan Khatri
wrote:
> Hello HBase Folks,
>
> Currently I am using HBase 1.2.4
doing manual splitting how do I choose the threshold?
>
> Thanks,
> Pradheep
>
>
>
>
> On 1/20/17, 5:41 PM, "Ted Yu" <yuzhih...@gmail.com> wrote:
>
> >For #1, you can consider plugging in ConstantSizeRegionSplitPolicy
> >for hbase
For #1, you can consider plugging in ConstantSizeRegionSplitPolicy
for hbase.regionserver.region.split.policy
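As an illustration (a sketch only; verify the property name and class against your release's hbase-default.xml), the policy can be plugged in via hbase-site.xml:

```xml
<property>
  <name>hbase.regionserver.region.split.policy</name>
  <value>org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy</value>
</property>
```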
For #2, regions are spread across servers. There is no centralized control
for the underlying table that prevents region splits from happening at the
same time.
For #3,
and I just enabled all the settings mentioned in the above
>> url only missing part is
>>
>> HColumnDescriptor hcd = new HColumnDescriptor("f");
>> hcd.setMobEnabled(true);
>> hcd.setMobThreshold(102400L);
>>
>> please can anybody tell me if it's ok?
>>
>> T
You can use the following method from HBaseAdmin:
public ClusterStatus getClusterStatus() throws IOException {
where ClusterStatus has getter for retrieving live server count:
public int getServersSize() {
and getter for retrieving dead server count:
public int getDeadServers() {
FYI
Currently ExportSnapshot utility doesn't support incremental export.
Here is the help message for overwrite:
static final Option OVERWRITE = new Option(null, "overwrite", false,
"Rewrite the snapshot manifest if already exists.");
Managing dependencies across snapshots may not be
On a 5 node hbase 1.1.2 cluster, I created a table with 2000 regions.
Then I issued the following command in shell:
alter 'user', NAME => 'cf', VERSIONS => 5
Here was the output:
http://pastebin.com/Ph06M8eX
My cluster was not loaded.
YMMV
On Tue, Jan 17, 2017 at 2:04 AM, nh kim
The off peak parameters apply to major compaction.
Please take a look at SortedCompactionPolicy#selectCompaction() and related
code.
On Sat, Jan 14, 2017 at 3:41 AM, spats wrote:
> i thought offpeak setting will affect only minor compactions and not major
>
ect results.
>
>
>
>> On Fri, Jan 13, 2017 at 11:34 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>>
>> According to your description, MUST_PASS_ONE should not be used.
>>
>> Please use MUST_PASS_ALL.
>>
>> Cheers
>>
>> On Fri, J
Please see bullet #7 in
http://hbase.apache.org/book.html#compaction.ratiobasedcompactionpolicy.algorithm
Search for 'hbase.hstore.compaction.ratio.offpeak' and you will see related
config parameters.
On Fri, Jan 13, 2017 at 8:43 PM, spats wrote:
> Thanks Ted,
>
> Yes
For #1, see the following config:
hbase.hregion.majorcompaction.jitter
0.50
A multiplier applied to hbase.hregion.majorcompaction to
cause compaction to occur
a given amount of time either side of hbase.hregion.majorcompaction.
The smaller the number,
the closer the
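To make the jitter concrete, here is a small self-contained sketch of the window arithmetic (illustrative only; HBase's actual scheduling code differs in detail):

```java
// With hbase.hregion.majorcompaction = 7 days and jitter = 0.50, the next
// major compaction lands somewhere in
// [period - period*jitter, period + period*jitter].
public class JitterWindow {
    static long[] window(long periodMs, float jitterPct) {
        long delta = (long) (periodMs * jitterPct);
        return new long[] { periodMs - delta, periodMs + delta };
    }

    public static void main(String[] args) {
        long week = 7L * 24 * 60 * 60 * 1000;     // default 7-day period
        long[] w = window(week, 0.50f);
        // with jitter 0.50 the window spans 3.5 days .. 10.5 days
        System.out.println(w[0] + " .. " + w[1]);
    }
}
```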
ession type is "Snappy" and durability is SKIP_WAL.
>
>
> Regards,
> Pankaj
>
>
> -Original Message-
> From: Ted Yu [mailto:yuzhih...@gmail.com]
> Sent: Friday, January 13, 2017 10:30 PM
> To: d...@hbase.apache.org
> Cc: user@hbase.apache.org
C#: OutofMemoryException
>
> Thanks Ted.
>
> I looked at this. We didn't know that a multiplexing protocol existed until
> you mentioned it to us.
> We're using a stock thrift server that is shipped with hbase.
> If you perhaps point us to where we should be checking I'd be app
ou don't mind, I would like to take a stab
> at the JIRA you've created.
>
> For #1, any idea if this is the desired behavior?
>
> Thanks,
> Tim
>
> On Fri, Jan 13, 2017 at 10:27 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > Logged HBASE-17462 for #2.
> public interface Iface {
>
>
>
>
> From: jeff saremi <jeffsar...@hotmail.com>
> Sent: Friday, January 13, 2017 10:39 AM
> To: user@hbase.apache.org
> Subject: Re: HBase Thrift Client for C#: OutofMemoryException
>
>
> oh i see.
ExecutionContext.Run(ExecutionContext
> executionContext, ContextCallback callback, Object state, Boolean
> preserveSyncCtx)
>at System.Threading.ExecutionContext.Run(ExecutionContext
> executionContext, ContextCallback callback, Object state)
>at System.Threading.ThreadHelper.ThreadStart()
>
Logged HBASE-17462 for #2.
FYI
On Thu, Jan 12, 2017 at 8:49 AM, Ted Yu <yuzhih...@gmail.com> wrote:
> For #2, I think MemstoreSizeCostFunction belongs to the same category if
> we are to adopt moving average.
>
> Some factors to consider:
>
> The data structure used b
Which thrift version did you use to generate the C# code ?
hbase uses 0.9.3
Can you pastebin the whole stack trace for the exception ?
I assume you run your code on 64-bit machine.
Cheers
On Fri, Jan 13, 2017 at 9:53 AM, jeff saremi wrote:
> I have cloned the latest
> returning columns that were not passed in the QualifierFilter.
>
> Thanks,
> Prahalad
>
>
>
> On Fri, Jan 13, 2017 at 8:33 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > Can you illustrate how the two filters were combined (I assume through
> > FilterList
Can you illustrate how the two filters were combined (I assume through
FilterList) ?
I think the order of applying the filters should be RowFilter followed by
QualifierFilter.
Cheers
On Fri, Jan 13, 2017 at 6:55 AM, Prahalad kothwal
wrote:
> Hi ,
>
> Can I pass both
In the second case, the error happened when writing hfile. Can you track down
the path of the new file so that further investigation can be done ?
Does the table use any encoding ?
Thanks
> On Jan 13, 2017, at 2:47 AM, Pankaj kr wrote:
>
> Hi,
>
> We met a weird issue
How big are your video files expected to be ?
I assume you have read:
http://hbase.apache.org/book.html#hbase_mob
Is the example there not enough ?
Cheers
On Thu, Jan 12, 2017 at 9:33 AM, Manjeet Singh
wrote:
> Hi All,
>
> can anybody help me to know how to
On Wed, Jan 11, 2017 at 5:51 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> For #2, I think it makes sense to try out using request rates for cost
> calculation.
>
> If the experiment result turns out to be better, we can consider using
> such measure.
>
> Thanks
>
>
Can you describe the 5 requests in more detail (were they gets / puts, how
many rows were involved) ?
Can you take jstack of the busy region server and pastebin it ?
Log snippet from the busy region server would also help.
Cheers
On Thu, Jan 12, 2017 at 1:23 AM, hongphong1805
For #2, I think it makes sense to try out using request rates for cost
calculation.
If the experiment result turns out to be better, we can consider using such
measure.
Thanks
On Wed, Jan 11, 2017 at 5:34 PM, Timothy Brown wrote:
> Hi,
>
> I have a couple of questions
As the refguide states, hbase.client.scanner.caching works
with hbase.client.scanner.max.result.size to try and use the network
efficiently.
Make sure the release you use is 1.1.0+ which had important bug fixes
w.r.t. max result size.
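As an illustration (the values below are hypothetical, not recommendations), the two parameters can be tuned together in hbase-site.xml:

```xml
<property>
  <name>hbase.client.scanner.caching</name>
  <value>100</value>
</property>
<property>
  <name>hbase.client.scanner.max.result.size</name>
  <!-- 2 MB cap per RPC; whichever limit is hit first wins -->
  <value>2097152</value>
</property>
```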
On Wed, Jan 11, 2017 at 9:46 AM, Josh Elser
Question #1 seems better suited on the Ambari mailing list.
Have you checked whether hdfs balancer (not hbase balancer) was active from
the restart to observation of locality drop ?
For StochasticLoadBalancer, there is this cost factor:
private static final String LOCALITY_COST_KEY =
Can you provide a bit more detail ?
version of hbase (hadoop)
tail of master log when the auto-stop happened (use pastebin - attachment
may not go through)
Cheers
On Thu, Jan 5, 2017 at 6:02 PM, QI Congyun
wrote:
> Hi,
>
> Here is a micro full distributed
SNMP trap goes through UDP port.
To my knowledge, hbase doesn't support SNMP natively.
Suggest polling vendor's mailing list.
On Thu, Jan 5, 2017 at 3:22 AM, Manjeet Singh
wrote:
> Hi All
>
> We are using Cloudera Enterprise edition which is not supporting SNMP
>
Please take a look at http://hbase.apache.org/book.html#hbase_mob
On Tue, Jan 3, 2017 at 9:14 PM, Manjeet Singh
wrote:
> Hi All,
>
> I have question regarding, does hbase support to store content like Media
> files (audio, video, pictures etc)
> as I have read one
There is hbase.client.keyvalue.maxsize which defaults to 10485760
You can find its explanation
in hbase-common/src/main/resources/hbase-default.xml
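For reference, a sketch of overriding it in hbase-site.xml (the 10485760 default is 10 MB; the value below is just an example):

```xml
<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <!-- raise the per-KeyValue cap to 20 MB -->
  <value>20971520</value>
</property>
```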
On Tue, Jan 3, 2017 at 9:09 PM, Manjeet Singh
wrote:
> Hi All,
>
> I have question regarding does Hbase impose any
" is too many storeFiles. I
> saw my monitor and found storeFileCount is 33K, but ulimit is 65535. The
> reason why there are so many storeFiles seems to be that compaction is not working.
>
>
>
> But what confused me is why the RS did not exit.
>
>
> 2017-01-03 23:05 GMT+08:00 Ted Yu <
Switching to user@
What's the version of hbase / hadoop you're using ?
Before issuing, "kill -9", did you capture stack trace of the region server
process ?
Have you read 'Limits on Number of Files and Processes' under
http://hbase.apache.org/book.html#basic.prerequisites ?
On Tue, Jan 3, 2017
ing...@hotmail.com> wrote:
> Thanks for your response Ted.
>
>
> I did the change, unfortunately it doesn't make any difference.
>
>
> Best,
>
> ________
> From: Ted Yu <yuzhih...@gmail.com>
> Sent: Monday, January 2, 2017 07:58 p.
> Surname: Jobs
>
>
> And I want to query for Rows that doesn't have Name 'Bill'
>
> NOT (Name='Bill')
>
>
> What I get as result from Hbase with this NotFilter is
>
> Row 2
>
> Surname: Jobs
>
>
> I suppose it's related to the cell "Name: Steve" skipped in the first place
>
reversing only filterRow() method of FilterList, but I got
> the same behaviour (the original cell/value missing).
>
>
> Best,
>
>
>
> From: Ted Yu <yuzhih...@gmail.com>
> Sent: Friday, December 30, 2016 12:56 p.m.
> To: user@hb
ec 30, 2016 at 8:19 AM, Enrico Olivelli <eolive...@gmail.com>
wrote:
> Hi Ted,
>
> 2016-12-30 17:11 GMT+01:00 Ted Yu <yuzhih...@gmail.com>:
>
> > For scanHBase094Table():
> >
> > baseDefaults.addResource("hbase094_hbase_default.xml");
For scanHBase094Table():
baseDefaults.addResource("hbase094_hbase_default.xml");
where hbase.rootdir points to file://
How is the user supposed to plug in hbase-site.xml for the 0.94 cluster ?
For TableMigrationManager, I don't see where setBatchSize() is called. Does
this mean that
be harder.
>
>
> Best,
>
> ____________
> From: Ted Yu <yuzhih...@gmail.com>
> Sent: Thursday, December 29, 2016 06:10 p.m.
> To: user@hbase.apache.org
> Subject: Re: Is it possible to implement a NOT filter in Hbase?
>
> You can try negating the ReturnCode from filterKey
Last line should have read:
(a != '123') OR (b != '456')
On Thu, Dec 29, 2016 at 1:10 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> You can try negating the ReturnCode from filterKeyValue() (at the root of
> FilterList):
>
> abstract public ReturnCode filterKeyValue(fi
You can try negating the ReturnCode from filterKeyValue() (at the root of
FilterList):
abstract public ReturnCode filterKeyValue(final Cell v) throws
IOException;
INCLUDE -> SKIP
SKIP -> INCLUDE
Alternatively, you can use De Morgan's law to transform the condition:
NOT (a = '123' AND b =
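The De Morgan rewrite can be checked with a tiny self-contained snippet (plain Java booleans standing in for filter conditions; this is not the HBase API):

```java
// NOT (a = '123' AND b = '456')  <=>  (a != '123') OR (b != '456')
public class DeMorgan {
    static boolean negatedConjunction(String a, String b) {
        return !(a.equals("123") && b.equals("456"));
    }

    static boolean disjunctionOfNegations(String a, String b) {
        return !a.equals("123") || !b.equals("456");
    }

    public static void main(String[] args) {
        String[][] rows = { {"123", "456"}, {"123", "999"}, {"000", "456"}, {"000", "999"} };
        for (String[] r : rows) {
            // the two forms agree on every row
            System.out.println(r[0] + "," + r[1] + " -> " + negatedConjunction(r[0], r[1]));
        }
    }
}
```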
Can you show your code involving usage of hbase.regionserver.impl ?
Please also show the full stack trace of NPE.
A quick check across 0.98, branch-1 and master doesn't reveal difference
around this config.
Cheers
On Thu, Dec 29, 2016 at 7:51 AM, George Forman
the control will switch to?
>
> On Wed, Dec 28, 2016 at 6:04 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > You can start from http://hbase.apache.org/book.html#hregion.scans
> >
> > To get to know internals, you should look at the code - in IDE such as
> > Eclips
You can start from http://hbase.apache.org/book.html#hregion.scans
To get to know internals, you should look at the code - in IDE such as
Eclipse.
Start from StoreScanner and read the classes which reference it.
Cheers
On Wed, Dec 28, 2016 at 12:59 AM, Rajeshkumar J
fixforversion/12334929>,
> 1.2.2 <https://issues.apache.org/jira/browse/HBASE/fixforversion/12335440>
> , 0.98.20
> <https://issues.apache.org/jira/browse/HBASE/fixforversion/12335472>.
>
> Is it a bug in Hbase 1.2.0?
>
> Regards,
> San
>
> On Tue, Dec 27, 2016 at 2
ersion 1.2.0-cdh5.8.2
>
> Regards,
> San
>
>
>
> On Tue, Dec 27, 2016 at 11:15 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>
>> Which release are you using ?
>>
>> Have you taken a look at
>> https://issues.apache.org/jira/browse/HBASE-14818
>>
Which release are you using ?
Have you taken a look at
https://issues.apache.org/jira/browse/HBASE-14818
> On Dec 26, 2016, at 9:33 PM, sandeep vura wrote:
>
> Hi Team,
>
> I have given "R" permission from hbase shell to namespace "d1" to user
> (svura) and group
Which hbase release are you using ?
There is heartbeat support when scanning.
Looks like the version you use doesn't have this support.
Cheers
> On Dec 21, 2016, at 4:02 AM, Rajeshkumar J
> wrote:
>
> Hi,
>
> Thanks for the reply. I have properties as below
Have you checked region server logs around this time to see if there was some
clue ?
Which hbase release are you using ?
You may turn on DEBUG log if current log level is INFO.
Cheers
> On Dec 18, 2016, at 11:37 PM, 郭伟(GUOWEI) wrote:
>
> Hi all,
>
> Spark streaming
atter? if you are truly able to
> answer, you should understand what other people want to ask;
> if not, you can find millions of mistakes.
>
> -Manjeet
>
> On Fri, Dec 16, 2016 at 9:57 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > impetus-n519 is not accessible to oth
impetus-n519 is not accessible to other people on the mailing list.
Can you clarify what you wanted to ask about these UIs ?
Thanks
On Fri, Dec 16, 2016 at 12:32 AM, Manjeet Singh
wrote:
> Hi All,
>
>
>
> I have one basic question regarding Hbase log
>
> I want to
Was dev1 hosting hbase:meta table (before the stop) ?
Looks like you embedded some image which didn't go through.
Consider using third party site if text is not enough to convey the message.
On Thu, Dec 15, 2016 at 7:02 PM, lk_hbase wrote:
> hi,all:
> I'm using hbase 1.2.3
bq. preReplicateLogEntries and postReplicateLogEntries gets called on the
slave cluster region server
This is by design.
These two hooks are around ReplicationSinkService#replicateLogEntries().
ReplicationSinkService represents the sink.
Can you tell us what you need to know on the source
Manjeet:
Have you looked at HDFS-10540 ?
Not sure if the distribution you use has the fix.
FYI
On Wed, Dec 14, 2016 at 9:34 PM, Manjeet Singh
wrote:
> Once I read somewhere that one should not run the HDFS balancer, otherwise it
> will spoil the meta of HBase
>
> I am getting
.. how do
> i send them this post ?
>
> regds,
> Karan Alang
>
> On Wed, Dec 14, 2016 at 5:21 PM, Ted Yu-3 [via Apache HBase] <
> ml-node+s679495n4085137...@n3.nabble.com> wrote:
>
> > I took a look at opentsdb.log which shows asynchbase logs.
> >
>
I took a look at opentsdb.log which shows asynchbase logs.
Have you considered polling asynchbase mailing list ?
On Wed, Dec 14, 2016 at 4:59 PM, karanalang wrote:
> I've kerberized HDP 2.4 (HBase version - 1.1.2.2.4.0.0-169, openTSDB
> version
> - 2.2.1)
> On starting
This is clearly specified here:
http://hbase.apache.org/book.html#recommended_configurations.zk
On Wed, Dec 14, 2016 at 2:58 AM, kiran <kiran.sarvabho...@gmail.com> wrote:
> Sure ??
>
> ZK is not managed by hbase. Can someone confirm ??
>
> On Wed, Dec 14, 2016 at 3:
You should add it in hbase-site.xml
Note: it is bounded by 20 times tickTime
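A self-contained sketch of that bound (ZooKeeper's default maxSessionTimeout is 20 * tickTime and minSessionTimeout is 2 * tickTime, so a requested zookeeper.session.timeout gets clamped; the actual negotiation is done by the ZooKeeper server):

```java
public class SessionTimeoutBound {
    // ZooKeeper clamps a requested session timeout into
    // [2 * tickTime, 20 * tickTime] by default.
    static int negotiated(int requestedMs, int tickTimeMs) {
        return Math.min(Math.max(requestedMs, 2 * tickTimeMs), 20 * tickTimeMs);
    }

    public static void main(String[] args) {
        int tickTime = 2000;                              // zoo.cfg default
        System.out.println(negotiated(90000, tickTime));  // prints 40000
    }
}
```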
Cheers
> On Dec 14, 2016, at 12:47 AM, Sandeep Reddy wrote:
>
> Hi,
>
>
> We are using hbase-0.98.7 with zookeeper-3.4.6 where our zookeeper is not
> managed by HBase.
>
> We set
y rows with too many columns without actually reading them all?
>
> Thanks.
>
> ----
> Saad
>
>
> On Sat, Dec 3, 2016 at 3:20 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > I took a look at the stack trace.
> >
> > Region server log would give us more deta
Congratulations, Josh.
> On Dec 10, 2016, at 11:47 AM, Nick Dimiduk wrote:
>
> On behalf of the Apache HBase PMC, I am pleased to announce that Josh Elser
> has accepted the PMC's invitation to become a committer on the project. We
> appreciate all of Josh's generous
>> WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error,
>> closing socket connection and attempting reconnect
>>
>> zookeeper.ClientCnxn: Opening socket connection to server localhost/
>> 127.0.0.1:2181. Will not attempt to authenticate using SASL (u
Please check the zookeeper server logs to see what might be happening.
bq. fontana-04,fontana-03
Why only two servers in the quorum ? Can you add one more zookeeper server ?
Cheers
On Thu, Dec 8, 2016 at 1:18 AM, Vincent Fontana <74f...@gmail.com> wrote:
> now i have this...
>
> 16/12/08
Please take a look at:
hbase-endpoint/src/main/java/org/apache/hadoop/hbase/coprocessor/AggregateImplementation.java
Can you tell us more about how the scan object is formed (I assume you have
set start / stop rows) ?
Cheers
On Tue, Dec 6, 2016 at 10:07 AM, Paramesh Nc
bq. to prevent a single row being put(wrote) columns over much?
Can you clarify the above ?
Don't you have control over the schema ?
Thanks
On Mon, Dec 5, 2016 at 7:52 PM, 聪聪 <175998...@qq.com> wrote:
> Recently, I have a problem that confused me a long time. The problem is
> that as we all
I was in China the past 10 days where I didn't have access to gmail.
bq. repeat this sequence a thousand times
You mean proceeding with the next parameter ?
bq. use hashing mechanism to transform this long string
How is the hash generated ?
The hash prefix should presumably evenly distribute
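One common way to get an evenly distributing prefix is to prepend a few hex characters of a hash of the key. A hedged, self-contained sketch (MD5 and a 4-hex-char prefix are arbitrary choices here, not a prescription):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class SaltedKey {
    // Prefix the row key with the first 4 hex chars of its MD5 digest so that
    // lexicographically adjacent keys land in different regions.
    static String salt(String rowKey) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(rowKey.getBytes(StandardCharsets.UTF_8));
            return String.format("%02x%02x-%s", d[0], d[1], rowKey);
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // adjacent keys typically get unrelated prefixes
        System.out.println(salt("user-000001"));
        System.out.println(salt("user-000002"));
    }
}
```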
> On Sat, Dec 3, 2016 at 6:20 AM Saad Mufti <saad.mu...@gmail.com> wrote:
>
> > No.
> >
> >
> > Saad
> >
> >
> > On Fri, Dec 2, 2016 at 3:27 PM, Ted Yu <ted...@yahoo.com.invalid> wrote:
> >
> > > Somehow I couldn
for the stack trace from the region server that I obtained
> via
> > VisualVM:
> >
> > http://pastebin.com/qbXPPrXk
> >
> > Would really appreciate any insight you or anyone else can provide.
> >
> > Thanks.
> >
> >
> > Saad
> >
From #2 in the initial email, the hbase:meta might not be the cause for the
hotspot.
Saad:
Can you pastebin stack trace of the hot region server when this happens again ?
Thanks
> On Dec 2, 2016, at 4:48 AM, Saad Mufti wrote:
>
> We used a pre-split into 1024 regions
For #1, there is more than one way. Normally you can count the number of
regions per server - the count should be roughly the same across servers.
For #2, you can take a look at StochasticLoadBalancer to see what factors
affect region movement.
Cheers
On Thursday, December 1, 2016 2:08 AM,
on this. I hope I'm wrong on this one :)
Our challenge has been to understand what's HBase doing under various
scenarios. We monitor call queue lengths, sizes and latencies as the primary
alerting mechanism to tell us something is going on with HBase.
Thanks! -neelesh
On Wed, Nov 30, 2016 at 1:15
Neelesh: Can you share more details about the sluggish cluster performance (such
as version of hbase / phoenix, your schema, region server log snippet, stack
traces, etc) ?
As hbase / phoenix evolve, I hope the performance keeps getting better for your
use case.
Cheers
On Wednesday,
Have you looked at RowFilter ?
There is also FuzzyRowFilter which is versatile.
On Wednesday, November 30, 2016 1:16 AM, Devender Yadav
wrote:
Hi All,
HBase version: 1.2.2 (both server and Java API)
I am using SingleColumnValueFilter.
public
Did you copy the start key verbatim ?
Please take a look at ./hbase-shell/src/main/ruby/shell.rb to see example of
proper escaping.
Cheers
On Tuesday, November 29, 2016 1:58 AM, Ravi Kumar Bommada
wrote:
Hi,
I'm trying to delete a row from 'hbase:meta' by
Congratulations, Phil.
On Tuesday, November 29, 2016 1:49 AM, Duo Zhang
wrote:
On behalf of the Apache HBase PMC, I am pleased to announce that Phil Yang
has accepted the PMC's invitation to become a committer on the project. We
appreciate all of Phil's generous
Congratulations, Lijin.
On Tuesday, November 29, 2016 3:01 AM, Anoop John
wrote:
Congrats and welcome Binlijin.
-Anoop-
On Tue, Nov 29, 2016 at 3:18 PM, Duo Zhang wrote:
> On behalf of the Apache HBase PMC, I am pleased to announce that
Does the region server hosting hbase:meta have roughly the same number of
regions as the other servers ?
Did you find anything interesting in the server log (where hbase:meta is
hosted) ?
Have you tried major compacting the hbase:meta table ?
In 1.2, DEFAULT_HBASE_META_VERSIONS is still 10. See
This means that the data you ingested only spread across these two regions,
instead of 11 regions.
It was likely caused by the distribution of the row keys of ingested data.
Please examine the distribution and plan for different region boundaries if the
distribution is typical for future data.
Please see HBASE-17009 and HBASE-16713 which are related to connection pooling.
Hi Manjeet,
I wrote about a connection pool I implemented at
https://richardstartin.com/2016/11/05/hbase-connection-management/
Cheers,
Richard
Mich:
Even though related rows are on the same region server, there is no intrinsic
transaction support.
For #1 under design considerations, multi column family is one possibility. You
should consider how the queries from RDBMS access the related data.
You can also evaluate Phoenix /
bq. it calls the persistence method asynchronously
Assuming the persistence method is still executing when the next threshold
value is reached, do you have other threads to do persistence ?
If so, how many threads can potentially run at the same time ?
How many regions does the table have ?
-Original Message-
> From: Sen [mailto:besent...@gmail.com]
> Sent: Tuesday, November 22, 2016 11:55 PM
> To: user@hbase.apache.org
> Subject: Re: problem in launching HBase
>
> Did you ensure your etc/hosts file has the IP addresses of the Hbase
> server?
>
> On Tue,
et : RE: Table is disabled and no way to get it back online
>
> Hello,
>
> Sadly I could not use the webui, it killed my firefox (probably way too
> much time). Here is the debug log... (11Mb uncompressed for maybe two
> minutes running !!)
>
> Best regards, Adam.
>
internet, the
> similar issue happened very rarely.
> I'm very bewildered; could you help to find the reasons?
>
> Thanks.
>
>
> -Original Message-
> From: Ted Yu [mailto:yuzhih...@gmail.com]
> Sent: Wednesday, November 16, 2016 11:13 AM
> To: user@hbase.apache.org
&g
Master log contained entries in the following form:
2016-11-22 13:13:41,836 INFO [ProcedureExecutor-3]
procedure2.ProcedureExecutor: Rolledback procedure DisableTableProcedure
(table=sentinel-meta) id=43538 owner=hbase state=ROLLEDBACK
exec-time=242hrs, 10mins, 28.896sec
orting the data using `hbase
> org.apache.hadoop.hbase.mapreduce.Import 'table.name' /path/to/backup`
> (The data was exported from an HBase instance on another cluster using
> `hbase org.apache.hadoop.hbase.mapreduce.Export` and then distcp'd between
> the clusters).
>
> On Mon
I did a quick search - there was no relevant JIRA or discussion thread at
first glance.
Which hbase release are you using ?
How do you import the data ?
More details would be helpful.
Thanks
On Mon, Nov 21, 2016 at 2:48 PM, Julian Jaffe
wrote:
> When importing data
Manjeet:
With 3 regions (actually 4, considering the region with empty start key)
for the table, data wouldn't be distributed onto 100 nodes - there are not
enough regions to spread across all the region servers.
Assuming the table would receive much data, you can split the table so that
the
try the inline images:
>>>>>>
>>>>>> Performance w/o offheap:
>>>>>>
>>>>>>
>>>>>> Performance w/ offheap:
>>>>>>
>>>>>>
>>>>>> Peak Get QPS of one single RS during Singles' Day (11/11):
>
Can you tell us the version of hbase you are using and the new version
which you plan to upgrade to ?
A bit more detail on your coprocessor would also help narrow the scope of
search.
Cheers
On Fri, Nov 18, 2016 at 4:28 PM, Albert Shau
wrote:
> Hi all,
> I'm
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#getTableDescriptor-org.apache.hadoop.hbase.TableName-
On Fri, Nov 18, 2016 at 10:44 AM, Ganesh Viswanathan
wrote:
> Hello,
>
> Is there a java API for HBase (using Admin or other libraries) to describe
Yu:
With positive results, more hbase users would be asking for the backport of
offheap read path patches.
Do you think you or your coworker has the bandwidth to publish backport for
branch-1 ?
Thanks
> On Nov 18, 2016, at 12:11 AM, Yu Li wrote:
>
> Dear all,
>
> We
In the JAAS config, have you tried adding the following ?
storeKey=true
On Thu, Nov 17, 2016 at 10:08 AM, Hugo Labra wrote:
> Hello,
>
> I am having a problem to connect to a Secure HBase cluster when using the
> JAAS config, I enabled Kerberos using the cloudera
By count you mean row count ?
Can you describe the incremental data in more detail ?
Thanks
On Tue, Nov 15, 2016 at 7:55 PM, 446463...@qq.com <446463...@qq.com> wrote:
>
> Hi:
> All ,I have a question :
> How to count incremental data in Hbase table?
>
>
> 446463...@qq.com
>
B) Cache
> > Remaining: 0 (0 B) Cache Used%: 100.00% Cache Remaining%: 0.00%
> > Xceivers: 1
> > Last contact: Tue Nov 15 13:17:01 CST 2016
> > .
> >
>
0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Tue Nov 15 13:17:01 CST 2016
> .........
>
> ..
>
>
>
> -Original Message-
> From: Ted Yu [mailto:yuzhih...@gmail.com]
>
sh", the master process was
> started at the beginning; afterwards it closed by itself, while the
> zookeeper quorum process keeps running until I kill it manually. I ran
> the command "JPS" to observe the processes.
>
> Thanks.
>
>
> -Original Me
2016-11-10 11:25:14,177 INFO [main-SendThread(localhost:2181)]
zookeeper.ClientCnxn: Opening socket connection to server localhost/
*127.0.0.1*:2181. Will not attempt to authenticate using SASL
(unknown error)
Was the zookeeper quorum running on the localhost ?
In the future, use