bq. We use Hbase 1.0.0
1.0.0 is quite old.
Can you try a more recent release such as 1.3.0 (the hbase-thrift module
should be more robust) ?
If your nodes have enough memory, have you thought of using bucket cache to
improve read performance ?
Cheers
On Fri, Feb 3, 2017 at 1:34 PM, Akshat Maha
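For reference, enabling the off-heap bucket cache comes down to two hbase-site.xml properties; a hedged sketch (the 4096 MB size is illustrative, not from the thread):

```xml
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>4096</value> <!-- illustrative: capacity in MB for the off-heap cache -->
</property>
```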
Have you looked at SingleColumnValueFilter ?
For SingleColumnValueFilter, there is a field:
protected boolean filterIfMissing = false;
You should call setFilterIfMissing(true) on the SingleColumnValueFilter
instance. The javadoc of the corresponding getter reads:
* Get whether entire row should be filtered if column is not found.
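A hedged sketch of the suggestion (classes from org.apache.hadoop.hbase.client / filter / util; family "cf", qualifier "q" and the value are illustrative, not from the thread):

```java
// Illustrative names; setFilterIfMissing(true) is the point here.
SingleColumnValueFilter filter = new SingleColumnValueFilter(
    Bytes.toBytes("cf"), Bytes.toBytes("q"),
    CompareFilter.CompareOp.EQUAL, Bytes.toBytes("value"));
filter.setFilterIfMissing(true);  // rows that lack column cf:q are filtered out
Scan scan = new Scan();
scan.setFilter(filter);
```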
Can you take a look at TestMasterCoprocessorExceptionWithRemove to see if
it covers your case ?
If not, can it be modified to exhibit the behavior you described ?
Cheers
On Wed, Feb 1, 2017 at 5:45 AM, Steen Manniche wrote:
> I'm trying to specify some sanity checks in my coprocessor's start()
>
> So this means I have no other way to load a specific version's value; the
> only way is to load all values on the client side and pick the version I want.
>
> Manjeet
> On 1 Feb 2017 00:25, "Ted Yu" wrote:
>
> > For #3, you need to retrieve multiple ve
The lag would come down after the port opens.
On Tue, Jan 31, 2017 at 2:53 PM, marjana wrote:
> Yes the status command was run on source cluster. These are my peers:
>
> PEER_ID CLUSTER_KEY STATE TABLE_CFS
> 3
> zookeeper1.adm01.com,zookeeper2.adm01.com,zookeeper3.adm01.com:2181:/hbase
> ENABLE
ve 2 replicas in target cluster after restore
> even though the config has 3 as replication factor.
> Since it is a file level copy I guess the WAL will not have the edits and
> hence cannot change the number of copies based on target config.
>
> Thanks,
> Pradheep
>
>
>
Yes. It should work.
On Tue, Jan 31, 2017 at 1:28 PM, Pradheep Shanmugam <
pradheep.shanmu...@infor.com> wrote:
> Hi,
>
> Can the HBase Snapshot work when I snapshot a table from a cluster with
> replication factor as 2 and restore it on a
> Cluster with replication factor as 3?
>
> Thanks,
> Pr
For #3, you need to retrieve multiple versions (to get to V2).
Take a look at
TestVisibilityLabelsWithDeletes#testDeleteColumnWithLatestTimeStampUsingMultipleVersions
around line 1368.
FYI
On Tue, Jan 31, 2017 at 9:58 AM, Manjeet Singh
wrote:
> Hi All
>
> can anyone tell me what is the Impact
I assume both clusters run hbase 1.2.0
How many servers are there in each cluster ?
Have you checked region server logs in the slave cluster to see if there is
some clue ?
Thanks
On Tue, Jan 31, 2017 at 9:14 AM, marjana wrote:
> It is 1.2.0 hbase version.
>
>
>
FYI
On Sat, Jan 28, 2017 at 8:29 AM, Ted Yu wrote:
> I haven't found the API you were looking for.
>
> Which release of hbase are you using ?
> I assume it supports tags.
>
> If you use tag to pass event-id, you can retrieve thru this method of
> WALEdit:
>
> pub
I haven't found the API you were looking for.
Which release of hbase are you using ?
I assume it supports tags.
If you use a tag to pass the event-id, you can retrieve it through this method
of WALEdit:
public ArrayList<Cell> getCells() {
From Cell, there are 3 methods for retrieving tags, starting with:
byte
Example -
> https://github.com/tmalaska/SparkOnHBase/blob/master/src/main/scala/org/apache/hadoop/hbase/spark/example/hbasecontext/HBaseBulkPutExampleFromFile.scala
>
>> On Sat, Jan 28, 2017 at 9:11 AM, Ted Yu wrote:
>>
>> Have you looked at hbase-spark module (currently i
Have you looked at hbase-spark module (currently in master branch) ?
See
hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/example/datasources/AvroSource.scala
and
hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/DefaultSourceSuite.scala
for examples.
There may be other options.
Daniel:
For the underlying column family, do you use any data block encoding /
compression ?
Which hbase release do you use ?
Thanks
On Thu, Jan 26, 2017 at 2:12 PM, Dave Birdsall
wrote:
> My guess (and it is only a guess) is that you are traversing much less of
> the call stack when you fetch
48532_0001_01_02/job.jar:/tmp/hadoop-hdadmin/
> nm-local-dir/usercache/idstest/appcache/application_
> 1485362948532_0001/container_1485362948532_0001_01_02/
> hbase-common-1.2.4.jar:/tmp/hadoop-hdadmin/nm-local-dir/
> usercache/idstest/appcache/application_1485362948532_
>
eeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
> at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
> ...
>
>
> I actually tried this before, but my conclusion was that all nodes that
> are running a YARN NodeManager n
bq. hbase.zookeeper.property.server.7
I searched 1.2 codebase but didn't find config parameter in the above form.
http://hbase.apache.org/book.html didn't mention it either.
May I ask where you obtained such config ?
For hbase.zookeeper.quorum, do you have zookeeper running on the 12 nodes ?
Looks like there is no such support at the moment.
Logged HBASE-17523
FYI
On Tue, Jan 24, 2017 at 1:42 PM, jeff saremi wrote:
> We are enabling reader replicas. We're also using Thrift endpoints for our
> HBase. How could we enable Consistency.Timeline for Thrift server?
> thanks
>
> Jeff
>
Currently there are a few tasks (such as HBASE-16179) in the pipeline for
hbase-spark module.
There is no hbase release with hbase-spark module yet.
FYI
On Tue, Jan 24, 2017 at 1:17 PM, Chetan Khatri
wrote:
> Hello HBase Folks,
>
> Currently I am using HBase 1.2.4 and Hive 1.2.1, I am looking f
doing manual splitting how do I choose the threshold?
>
> Thanks,
> Pradheep
>
>
>
>
> On 1/20/17, 5:41 PM, "Ted Yu" wrote:
>
> >For #1, you can consider plugging in ConstantSizeRegionSplitPolicy
> >for hbase.regionserver.region.split.policy
>
For #1, you can consider plugging in ConstantSizeRegionSplitPolicy
for hbase.regionserver.region.split.policy
For #2, regions are spread across servers. There is no centralized control
for the underlying table that prevents region splits from happening at the
same time.
For #3, KeyPrefixRegionSpl
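As a hedged sketch, plugging in the policy from #1 via hbase-site.xml might look like this (the 10 GB threshold is illustrative):

```xml
<property>
  <name>hbase.regionserver.region.split.policy</name>
  <value>org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy</value>
</property>
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>10737418240</value> <!-- 10 GB; illustrative split threshold -->
</property>
```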
e all settings mentioned in the above
>> url; the only missing part is
>>
>> HColumnDescriptor hcd = new HColumnDescriptor("f");
>> hcd.setMobEnabled(true);
>> hcd.setMobThreshold(102400L);
>>
>> could anybody tell me if this is ok?
>>
>> Thanks
>>
>>
You can use the following method from HBaseAdmin:
public ClusterStatus getClusterStatus() throws IOException {
where ClusterStatus has getter for retrieving live server count:
public int getServersSize() {
and getter for retrieving dead server count:
public int getDeadServers() {
FYI
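Putting those getters together, a hedged sketch against the pre-2.0 admin API (connection boilerplate is illustrative):

```java
// Classes from org.apache.hadoop.hbase / client; sketch, not a full program.
Configuration conf = HBaseConfiguration.create();
try (HBaseAdmin admin = new HBaseAdmin(conf)) {
  ClusterStatus status = admin.getClusterStatus();
  int live = status.getServersSize();   // live region server count
  int dead = status.getDeadServers();   // dead region server count
  System.out.println("live=" + live + ", dead=" + dead);
}
```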
Currently ExportSnapshot utility doesn't support incremental export.
Here is the help message for overwrite:
static final Option OVERWRITE = new Option(null, "overwrite", false,
"Rewrite the snapshot manifest if already exists.");
Managing dependencies across snapshots may not be tri
On a 5 node hbase 1.1.2 cluster, I created a table with 2000 regions.
Then I issued the following command in shell:
alter 'user', NAME => 'cf', VERSIONS => 5
Here was the output:
http://pastebin.com/Ph06M8eX
My cluster was not loaded.
YMMV
On Tue, Jan 17, 2017 at 2:04 AM, nh kim wrote:
>
Namespace support makes many tasks easy - security, quota, etc
Suppose your table is a.tsv, it would be stored under default namespace.
On hdfs, you would see something like:
/apps/hbase/data/data/default/a.tsv
When you put the table under namespace a (a:tsv), the layout would be:
/apps/hbase/d
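In hbase shell, the namespaced variant described above could be created like this (a minimal sketch following the a:tsv example; the column family name is illustrative):

```
create_namespace 'a'
create 'a:tsv', 'f'
# the table's data directory then sits under the namespace 'a'
# instead of 'default', following the layout pattern above
```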
The pictures didn't go through.
Dinesh:
Please put the pictures on third party site and post the links.
Have you checked region server log during the scan when 1.2.0 client was used ?
Taking a few stack traces in that period may also help provide some clue.
Thanks
> On Jan 16, 2017, at 3:55
like column family name
> and the colon (:).
>
> It's strange, how key is written in wrong format in HFile.
>
>
> Regards,
> Pankaj
>
> -Original Message-
> From: Ted Yu [mailto:yuzhih...@gmail.com]
> Sent: Saturday, January 14, 2017 9:41 AM
> To:
The off peak parameters apply to major compaction.
Please take a look at SortedCompactionPolicy#selectCompaction() and related
code.
On Sat, Jan 14, 2017 at 3:41 AM, spats wrote:
> i thought offpeak setting will affect only minor compactions and not major
> compactions? correct me if i am wrong
>
>
>
>> On Fri, Jan 13, 2017 at 11:34 PM, Ted Yu wrote:
>>
>> According to your description, MUST_PASS_ONE should not be used.
>>
>> Please use MUST_PASS_ALL.
>>
>> Cheers
>>
>> On Fri, Jan 13, 2017 at 10:02 AM, Prahalad
Please see bullet #7 in
http://hbase.apache.org/book.html#compaction.ratiobasedcompactionpolicy.algorithm
Search for 'hbase.hstore.compaction.ratio.offpeak' and you will see related
config parameters.
On Fri, Jan 13, 2017 at 8:43 PM, spats wrote:
> Thanks Ted,
>
> Yes reducing jitter value shou
For #1, see the following config:
hbase.hregion.majorcompaction.jitter (default 0.50): A multiplier applied to
hbase.hregion.majorcompaction to cause compaction to occur a given amount of
time either side of hbase.hregion.majorcompaction. The smaller the number,
the closer the compact
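A hedged hbase-site.xml sketch that tightens the jitter window (0.25 is an illustrative value, not a recommendation from the thread):

```xml
<property>
  <name>hbase.hregion.majorcompaction.jitter</name>
  <value>0.25</value> <!-- illustrative: narrower window around the major compaction period -->
</property>
```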
is "Snappy" and durability is SKIP_WAL.
>
>
> Regards,
> Pankaj
>
>
> -Original Message-
> From: Ted Yu [mailto:yuzhih...@gmail.com]
> Sent: Friday, January 13, 2017 10:30 PM
> To: d...@hbase.apache.org
> Cc: user@hbase.apache.org
> Subject: Re: R
d.
>
> I looked at this. We didn't know that a multiplexing protocol existed until
> you mentioned it to us.
> We're using a stock thrift server that is shipped with hbase.
> If you perhaps point us to where we should be checking I'd be appreciative.
>
>
>
>
ld like to take a stab
> at the JIRA you've created.
>
> For #1, any idea if this is the desired behavior?
>
> Thanks,
> Tim
>
> On Fri, Jan 13, 2017 at 10:27 AM, Ted Yu wrote:
>
> > Logged HBASE-17462 for #2.
> >
> > FYI
> >
> > On Th
{
>
>
>
>
> From: jeff saremi
> Sent: Friday, January 13, 2017 10:39 AM
> To: user@hbase.apache.org
> Subject: Re: HBase Thrift Client for C#: OutofMemoryException
>
>
> oh i see. sure i'll do that and report back.
>
>
>
tionContext, ContextCallback callback, Object state, Boolean
> preserveSyncCtx)
>at System.Threading.ExecutionContext.Run(ExecutionContext
> executionContext, ContextCallback callback, Object state)
>at System.Threading.ThreadHelper.ThreadStart()
>
>
> _
Logged HBASE-17462 for #2.
FYI
On Thu, Jan 12, 2017 at 8:49 AM, Ted Yu wrote:
> For #2, I think MemstoreSizeCostFunction belongs to the same category if
> we are to adopt moving average.
>
> Some factors to consider:
>
> The data structure used by StochasticLoadBalancer shou
Which thrift version did you use to generate the C# code ?
hbase uses 0.9.3
Can you pastebin the whole stack trace for the exception ?
I assume you run your code on 64-bit machine.
Cheers
On Fri, Jan 13, 2017 at 9:53 AM, jeff saremi wrote:
> I have cloned the latest thrift and hbase code. Used t
not passed in the Qualifierfilter.
>
> Thanks,
> Prahalad
>
>
>
> On Fri, Jan 13, 2017 at 8:33 PM, Ted Yu wrote:
>
> > Can you illustrate how the two filters were combined (I assume through
> > FilterList) ?
> >
> > I think the order of applying the filt
Can you illustrate how the two filters were combined (I assume through
FilterList) ?
I think the order of applying the filters should be RowFilter followed by
QualifierFilter.
Cheers
On Fri, Jan 13, 2017 at 6:55 AM, Prahalad kothwal
wrote:
> Hi ,
>
> Can I pass both RowFilter and QualifierFilt
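A hedged sketch of combining the two filters through FilterList (classes from org.apache.hadoop.hbase.client / filter / util; row key "row-1" and qualifier "q1" are illustrative):

```java
// MUST_PASS_ALL = logical AND of the member filters.
FilterList list = new FilterList(FilterList.Operator.MUST_PASS_ALL);
// RowFilter first, then QualifierFilter, per the ordering suggested above.
list.addFilter(new RowFilter(CompareFilter.CompareOp.EQUAL,
    new BinaryComparator(Bytes.toBytes("row-1"))));
list.addFilter(new QualifierFilter(CompareFilter.CompareOp.EQUAL,
    new BinaryComparator(Bytes.toBytes("q1"))));
Scan scan = new Scan();
scan.setFilter(list);
```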
Looks like you can modify ColumnPrefixFilter#filterKeyValue() where if the
family is cf1, ReturnCode.INCLUDE is returned.
Otherwise check column prefix.
Cheers
On Thu, Jan 12, 2017 at 11:56 AM, Tokayer, Jason M. <
jason.toka...@capitalone.com> wrote:
> I would like to know whether there is a way
In the second case, the error happened when writing hfile. Can you track down
the path of the new file so that further investigation can be done ?
Does the table use any encoding ?
Thanks
> On Jan 13, 2017, at 2:47 AM, Pankaj kr wrote:
>
> Hi,
>
> We met a weird issue in our production envir
How big are your video files expected to be ?
I assume you have read:
http://hbase.apache.org/book.html#hbase_mob
Is the example there not enough ?
Cheers
On Thu, Jan 12, 2017 at 9:33 AM, Manjeet Singh
wrote:
> Hi All,
>
> can anybody help me understand how to store video files and image file
Cheers
On Wed, Jan 11, 2017 at 5:51 PM, Ted Yu wrote:
> For #2, I think it makes sense to try out using request rates for cost
> calculation.
>
> If the experiment result turns out to be better, we can consider using
> such measure.
>
> Thanks
>
> On Wed, Jan 11, 2017 at
Can you describe the 5 requests in more detail (were they gets / puts, how
many rows were involved) ?
Can you take jstack of the busy region server and pastebin it ?
Log snippet from the busy region server would also help.
Cheers
On Thu, Jan 12, 2017 at 1:23 AM, hongphong1805
wrote:
> Hi all
For #2, I think it makes sense to try out using request rates for cost
calculation.
If the experiment result turns out to be better, we can consider using such
measure.
Thanks
On Wed, Jan 11, 2017 at 5:34 PM, Timothy Brown wrote:
> Hi,
>
> I have a couple of questions about the StochasticLoadB
As refguide states, hbase.client.scanner.caching works
with hbase.client.scanner.max.result.size to try and use the network
efficiently.
Make sure the release you use is 1.1.0+ which had important bug fixes
w.r.t. max result size.
On Wed, Jan 11, 2017 at 9:46 AM, Josh Elser wrote:
> Behind the
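A hedged hbase-site.xml sketch of the two cooperating parameters (both values are illustrative; 2 MB is the usual max-result-size default in 1.1+):

```xml
<property>
  <name>hbase.client.scanner.caching</name>
  <value>100</value> <!-- illustrative: row-count cap per scanner RPC -->
</property>
<property>
  <name>hbase.client.scanner.max.result.size</name>
  <value>2097152</value> <!-- 2 MB: byte cap per scanner RPC -->
</property>
```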
When HBASE-16179 is resolved, you would be able to query through Spark 2.0
The current hbase-spark module in master branch only supports Spark 1.6
FYI
On Mon, Jan 9, 2017 at 1:06 AM, Manjeet Singh
wrote:
> Hi All,
>
> I have to find which is the best way to query on Hbase will give best
> resu
ND_SEEK_NEXT_ROW nor INCLUDE_AND_NEXT_COL when debugging, so adding
> them was not going to make any difference.
>
>
> Nevertheless I implemented De Morgan's law per your second suggestion (it was
> easier than I expected) and it's working, so thanks again for that!
>
> Best,
>
>
Question #1 seems better suited on the Ambari mailing list.
Have you checked whether hdfs balancer (not hbase balancer) was active from
the restart to observation of locality drop ?
For StochasticLoadBalancer, there is this cost factor:
private static final String LOCALITY_COST_KEY =
"hbase.
Can you provide a bit more detail ?
version of hbase (hadoop)
tail of master log when the auto-stop happened (use pastebin - attachment
may not go through)
Cheers
On Thu, Jan 5, 2017 at 6:02 PM, QI Congyun
wrote:
> Hi,
>
> Here is a micro full distributed HBase running subsystem, total 3 virtu
SNMP trap goes through UDP port.
To my knowledge, hbase doesn't support SNMP natively.
Suggest polling vendor's mailing list.
On Thu, Jan 5, 2017 at 3:22 AM, Manjeet Singh
wrote:
> Hi All
>
> We are using Cloudera Enterprise edition which is not supporting SNMP
> support for Hbase and licence
Please take a look at http://hbase.apache.org/book.html#hbase_mob
On Tue, Jan 3, 2017 at 9:14 PM, Manjeet Singh
wrote:
> Hi All,
>
> I have a question regarding whether hbase supports storing content like media
> files (audio, video, pictures etc)
> as I have read in one blog I need to change hfile fo
There is hbase.client.keyvalue.maxsize which defaults to 10485760
You can find its explanation
in hbase-common/src/main/resources/hbase-default.xml
On Tue, Jan 3, 2017 at 9:09 PM, Manjeet Singh
wrote:
> Hi All,
>
> I have a question regarding whether HBase imposes any limit in terms of maximum
> size
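For reference, that parameter can be overridden in hbase-site.xml; a hedged sketch raising the cap (the 20 MB value is illustrative, the quoted 10485760 is the default):

```xml
<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <value>20971520</value> <!-- illustrative: 20 MB max per KeyValue, default is 10 MB -->
</property>
```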
storeFiles. I
> saw my monitor and found storeFileCount is 33K, but ulimit is 65535. The
> reason there are so many storeFiles seems to be that compaction is not working.
>
>
>
> But what confuses me is why the RS did not exit.
>
>
> 2017-01-03 23:05 GMT+08:00 Ted Yu :
>
>> Switch
Switching to user@
What's the version of hbase / hadoop you're using ?
Before issuing, "kill -9", did you capture stack trace of the region server
process ?
Have you read 'Limits on Number of Files and Processes' under
http://hbase.apache.org/book.html#basic.prerequisites ?
On Tue, Jan 3, 2017
> Thanks for your response Ted.
>
>
> I did the change, unfortunately it doesn't make any difference.
>
>
> Best,
>
> ____
> From: Ted Yu
> Sent: Monday, January 2, 2017 07:58 p.m.
> To: user@hbase.apache.org
> Subject:
ds that were not originally skipped.
>
>
> So for example, if i have two rows each one with two fields
>
> Row 1
>
> Name: Bill
>
> Surname: Gates
>
>
> Row 2
>
> Name: Steve
>
> Surname: Jobs
>
>
> And I want to query for Rows tha
method of FilterList, but I got
> the same behaviour (the original cell/value missing).
>
>
> Best,
>
>
>
> From: Ted Yu
> Sent: Friday, December 30, 2016 12:56 p.m.
> To: user@hbase.apache.org
> Subject: Re: Is it possible t
ri, Dec 30, 2016 at 8:19 AM, Enrico Olivelli
wrote:
> Hi Ted,
>
> 2016-12-30 17:11 GMT+01:00 Ted Yu :
>
> > For scanHBase094Table():
> >
> > baseDefaults.addResource("hbase094_hbase_default.xml");
> >
> > where hbase.rootdir points to file:/
For scanHBase094Table():
baseDefaults.addResource("hbase094_hbase_default.xml");
where hbase.rootdir points to file://
How is the user supposed to plug in hbase-site.xml for the 0.94 cluster ?
For TableMigrationManager, I don't see where setBatchSize() is called. Does
this mean that bat
ng at FilterList code if only INCLUDE/SKIP should be
> replaced, and which should be the correct replacement for
> INCLUDE_AND_NEXT_COL. What do you think? If not maybe i should try to
> implement DeMorgan's law but I think it would be harder.
>
>
> Best,
>
> ___
Last line should have read:
(a != '123') OR (b != '456')
On Thu, Dec 29, 2016 at 1:10 PM, Ted Yu wrote:
> You can try negating the ReturnCode from filterKeyValue() (at the root of
> FilterList):
>
> abstract public ReturnCode filterKeyValue(final Cell v) t
You can try negating the ReturnCode from filterKeyValue() (at the root of
FilterList):
abstract public ReturnCode filterKeyValue(final Cell v) throws
IOException;
INCLUDE -> SKIP
SKIP -> INCLUDE
Alternatively, you can use De Morgan's law to transfer the condition:
NOT (a = '123' AND b = '45
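The equivalence itself can be checked without any HBase classes; a minimal self-contained sketch of the rewrite NOT (a = '123' AND b = '456') into (a != '123') OR (b != '456'):

```java
// Plain-Java demo that the negated conjunction and the De Morgan
// rewrite agree for every input combination.
public class DeMorganDemo {
    public static boolean original(String a, String b) {
        return !(a.equals("123") && b.equals("456"));
    }
    public static boolean rewritten(String a, String b) {
        return !a.equals("123") || !b.equals("456");
    }
    public static void main(String[] args) {
        String[] vals = {"123", "456", "x"};
        for (String a : vals)
            for (String b : vals)
                if (original(a, b) != rewritten(a, b))
                    throw new AssertionError(a + "," + b);
        System.out.println("equivalent");
    }
}
```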
Can you show your code involving usage of hbase.regionserver.impl ?
Please also show the full stack trace of NPE.
A quick check across 0.98, branch-1 and master doesn't reveal difference
around this config.
Cheers
On Thu, Dec 29, 2016 at 7:51 AM, George Forman
wrote:
> Hi,
>
> I have upgraded
On Wed, Dec 28, 2016 at 6:04 PM, Ted Yu wrote:
>
> > You can start from http://hbase.apache.org/book.html#hregion.scans
> >
> > To get to know internals, you should look at the code - in IDE such as
> > Eclipse.
> > Start from StoreScanner and read the classes whic
You can start from http://hbase.apache.org/book.html#hregion.scans
To get to know internals, you should look at the code - in IDE such as
Eclipse.
Start from StoreScanner and read the classes which reference it.
Cheers
On Wed, Dec 28, 2016 at 12:59 AM, Rajeshkumar J wrote:
> Can anyone point m
1.2.2 <https://issues.apache.org/jira/browse/HBASE/fixforversion/12335440>
> , 0.98.20
> <https://issues.apache.org/jira/browse/HBASE/fixforversion/12335472>.
>
> Is it a bug in Hbase 1.2.0?
>
> Regards,
> San
>
> On Tue, Dec 27, 2016 at 2:48 PM, Ted Yu wrote:
>
> Regards,
> San
>
>
>
> On Tue, Dec 27, 2016 at 11:15 AM, Ted Yu wrote:
>
>> Which release are you using ?
>>
>> Have you taken a look at
>> https://issues.apache.org/jira/browse/HBASE-14818
>>
>> > On Dec 26, 2016, at 9:33 PM, sandeep v
Which release are you using ?
Have you taken a look at
https://issues.apache.org/jira/browse/HBASE-14818
> On Dec 26, 2016, at 9:33 PM, sandeep vura wrote:
>
> Hi Team,
>
> I have given "R" permission from hbase shell to namespace "d1" to user
> (svura) and group (hadoopdev).
>
> when i logge
Which hbase release are you using ?
There is heartbeat support when scanning.
Looks like the version you use doesn't have this support.
Cheers
> On Dec 21, 2016, at 4:02 AM, Rajeshkumar J
> wrote:
>
> Hi,
>
> Thanks for the reply. I have properties as below
>
>
>hbase.regionserver.
Have you checked region server logs around this time to see if there was some
clue ?
Which hbase release are you using ?
You may turn on DEBUG log if current log level is INFO.
Cheers
> On Dec 18, 2016, at 11:37 PM, 郭伟(GUOWEI) wrote:
>
> Hi all,
>
> Spark streaming program will connect Hba
answer you should understand what other people want to ask
> if not you can find millions of mistakes.
>
> -Manjeet
>
> On Fri, Dec 16, 2016 at 9:57 PM, Ted Yu wrote:
>
> > impetus-n519 is not accessible to other people on the mailing list.
> >
> > C
impetus-n519 is not accessible to other people on the mailing list.
Can you clarify what you wanted to ask about these UI's ?
Thanks
On Fri, Dec 16, 2016 at 12:32 AM, Manjeet Singh
wrote:
> Hi All,
>
>
>
> I have one basic question regarding Hbase log
>
> I want to see logs – (1) Hbase Master
Was dev1 hosting hbase:meta table (before the stop) ?
Looks like you embedded some image which didn't go through.
Consider using third party site if text is not enough to convey the message.
On Thu, Dec 15, 2016 at 7:02 PM, lk_hbase wrote:
> hi,all:
> I'm using hbase 1.2.3 zookeeper 3.4.9 hado
bq. preReplicateLogEntries and postReplicateLogEntries gets called on the
slave cluster region server
This is by design.
These two hooks are around ReplicationSinkService#replicateLogEntries().
ReplicationSinkService represents the sink.
Can you tell us what you need to know on the source cluster
Manjeet:
Have you looked at HDFS-10540 ?
Not sure if the distribution you use has the fix.
FYI
On Wed, Dec 14, 2016 at 9:34 PM, Manjeet Singh
wrote:
> Once I read somewhere that one should not run the HDFS balancer, otherwise it
> will spoil the meta of HBase
>
> I am getting the below error when I add a new node
s post ?
>
> regds,
> Karan Alang
>
> On Wed, Dec 14, 2016 at 5:21 PM, Ted Yu-3 [via Apache HBase] <
> ml-node+s679495n4085137...@n3.nabble.com> wrote:
>
> > I took a look at opentsdb.log which shows asynchbase logs.
> >
> > Have you considered polli
I took a look at opentsdb.log which shows asynchbase logs.
Have you considered polling asynchbase mailing list ?
On Wed, Dec 14, 2016 at 4:59 PM, karanalang wrote:
> I've kerberized HDP 2.4 (HBase version - 1.1.2.2.4.0.0-169, openTSDB
> version
> - 2.2.1)
> On starting OpenTSDB, I'm getting fo
This is clearly specified here:
http://hbase.apache.org/book.html#recommended_configurations.zk
On Wed, Dec 14, 2016 at 2:58 AM, kiran wrote:
> Sure ??
>
> ZK is not managed by hbase. Can some one confirm ??
>
> On Wed, Dec 14, 2016 at 3:02 PM, Ted Yu wrote:
>
> > Yo
You should add it in hbase-site.xml
Note: it is bounded by 20 times tickTime
Cheers
> On Dec 14, 2016, at 12:47 AM, Sandeep Reddy wrote:
>
> Hi,
>
>
> We are using hbase-0.98.7 with zookeeper-3.4.6 where our zookeeper is not
> managed by HBase.
>
> We set HBASE_MANAGES_ZK as false in hbase
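Assuming the parameter in question is zookeeper.session.timeout, a hedged sketch of the hbase-site.xml entry (90000 ms is illustrative; the effective value is capped at 20 times the ZooKeeper tickTime, as noted above):

```xml
<property>
  <name>zookeeper.session.timeout</name>
  <value>90000</value> <!-- illustrative; ZK caps this at 20 * tickTime -->
</property>
```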
s without actually reading them all?
>
> Thanks.
>
> ----
> Saad
>
>
> On Sat, Dec 3, 2016 at 3:20 PM, Ted Yu wrote:
>
> > I took a look at the stack trace.
> >
> > Region server log would give us more detail on the frequency and duration
> >
Congratulations , Josh.
> On Dec 10, 2016, at 11:47 AM, Nick Dimiduk wrote:
>
> On behalf of the Apache HBase PMC, I am pleased to announce that Josh Elser
> has accepted the PMC's invitation to become a committer on the project. We
> appreciate all of Josh's generous contributions thus far and
ntCnxn: Session 0x0 for server null, unexpected error,
>> closing socket connection and attempting reconnect
>>
>> zookeeper.ClientCnxn: Opening socket connection to server localhost/
>> 127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown
>> error)
>
Please check the zookeeper server logs to see what might be happening.
bq. fontana-04,fontana-03
Why only two servers in the quorum ? Can you add one more zookeeper server ?
Cheers
On Thu, Dec 8, 2016 at 1:18 AM, Vincent Fontana <74f...@gmail.com> wrote:
> now i have this...
>
> 16/12/08 10:08
Please take a look at:
hbase-endpoint/src/main/java/org/apache/hadoop/hbase/coprocessor/AggregateImplementation.java
Can you tell us more about how the scan object is formed (I assume you have
set start / stop rows) ?
Cheers
On Tue, Dec 6, 2016 at 10:07 AM, Paramesh Nc wrote:
> Hi All,
> What
bq. to prevent a single row being put(wrote) columns over much?
Can you clarify the above ?
Don't you have control over the schema ?
Thanks
On Mon, Dec 5, 2016 at 7:52 PM, 聪聪 <175998...@qq.com> wrote:
> Recently, I have a problem that confused me a long time. The problem is
> that as we all kn
I was in China the past 10 days where I didn't have access to gmail.
bq. repeat this sequence a thousand times
You mean proceeding with the next parameter ?
bq. use hashing mechanism to transform this long string
How is the hash generated ?
The hash prefix should presumably evenly distribute th
, 2016 at 6:20 AM Saad Mufti wrote:
>
> > No.
> >
> >
> > Saad
> >
> >
> > On Fri, Dec 2, 2016 at 3:27 PM, Ted Yu wrote:
> >
> > > Some how I couldn't access the pastebin (I am in China now).
> > > Did the region server
; John Leach
>
>
> > On Dec 2, 2016, at 1:20 PM, Saad Mufti wrote:
> >
> > Hi Ted,
> >
> > Finally we have another hotspot going on, same symptoms as before, here
> is
> > the pastebin for the stack trace from the region server that I obtained
> via
> > V
From #2 in the initial email, the hbase:meta might not be the cause for the
hotspot.
Saad:
Can you pastebin stack trace of the hot region server when this happens again ?
Thanks
> On Dec 2, 2016, at 4:48 AM, Saad Mufti wrote:
>
> We used a pre-split into 1024 regions at the start but we misc
For #1, there is more than one way. Normally you can count the number of
regions per server - the count should be roughly the same across servers.
For #2, you can take a look at StochasticLoadBalancer to see what factors
affect region movement.
Cheers
On Thursday, December 1, 2016 2:08 AM,
ong on this one :)
Our challenge has been to understand what's HBase doing under various
scenarios. We monitor call queue lengths, sizes and latencies as the primary
alerting mechanism to tell us something is going on with HBase.
Thanks! -neelesh
On Wed, Nov 30, 2016 at 1:15 PM, Ted Yu wro
Please take a look at RpcExecutor#startHandlers() for what the numbers mean in
RPC handler string:
String name = "RpcServer." + threadPrefix + ".handler=" + handlers.size()
+ ",queue=" + index + ",port=" + port;
port is the port which the RPC server listens on.
Which release of HBa
Neelesh: Can you share more details about the sluggish cluster performance (such
as version of hbase / phoenix, your schema, region server log snippet, stack
traces, etc) ?
As hbase / phoenix evolve, I hope the performance keeps getting better for your
use case.
Cheers
On Wednesday, Novembe
Have you looked at RowFilter ?
There is also FuzzyRowFilter which is versatile.
On Wednesday, November 30, 2016 1:16 AM, Devender Yadav
wrote:
Hi All,
HBase version: 1.2.2 (both server and Java API)
I am using SingleColumnValueFilter.
public SingleColumnValueFilter(byte[] family,
Did you copy the start key verbatim ?
Please take a look at ./hbase-shell/src/main/ruby/shell.rb to see example of
proper escaping.
Cheers
On Tuesday, November 29, 2016 1:58 AM, Ravi Kumar Bommada
wrote:
Hi,
I'm trying to delete a row from 'hbase:meta' by providing region name as belo
Congratulations, Phil.
On Tuesday, November 29, 2016 1:49 AM, Duo Zhang
wrote:
On behalf of the Apache HBase PMC, I am pleased to announce that Phil Yang
has accepted the PMC's invitation to become a committer on the project. We
appreciate all of Phil's generous contributions thus far a
Congratulations, Lijin.
On Tuesday, November 29, 2016 3:01 AM, Anoop John
wrote:
Congrats and welcome Binlijin.
-Anoop-
On Tue, Nov 29, 2016 at 3:18 PM, Duo Zhang wrote:
> On behalf of the Apache HBase PMC, I am pleased to announce that Lijin
> Bin(binlijin) has accepted the PMC's in