right? Cells are only interpreted server side and are not returned on the
> client side...
>
> 2015-08-31 15:52 GMT-04:00 Ted Yu <yuzhih...@gmail.com>:
>
> > From the help message of put command, you can see the following:
> >
> > hbase> put 't1', 'r1',
the data of a region on the
(distributed) FileSystem. It should only update HBase metadata.
Did you check diskio stats during region movement?
On Tue, Aug 25, 2015 at 10:40 AM, Ted Yu yuzhih...@gmail.com wrote:
Please see http://hbase.apache.org/book.html#regions.arch.assignment
shell without these nodes and a balanced cluster
(+- 3 regions per node), balancer ran very quickly, around 3 seconds.
On Thu, Aug 27, 2015 at 9:50 AM, Ted Yu yuzhih...@gmail.com wrote:
How balanced are the table regions in your cluster ?
Cheers
On Thu, Aug 27, 2015 at 6:15 AM, donmai
exactly what's going on?
On Wed, Aug 26, 2015 at 1:20 PM, donmai dood...@gmail.com wrote:
DEBUG is on, trying to look through the logs again. Thanks!
On Wed, Aug 26, 2015 at 12:59 PM, Ted Yu yuzhih...@gmail.com wrote:
Did you enable DEBUG logging ?
Can you find the logs for one such occurrence and pastebin the relevant
portion?
Thanks
On Wed, Aug 26, 2015 at 9:58 AM, donmai dood...@gmail.com wrote:
Hi,
Occasionally when I run restore_snapshot on HBase 0.98.10, it appears that
the table directory structure
Have you looked at http://hbase.apache.org/book.html#cp ?
Cheers
On Aug 26, 2015, at 12:30 AM, Buntu Dev buntu...@gmail.com wrote:
I'm planning on ingesting web page events via Flume to HBase and wanted to
know if there are any ways HBase related projects to define rules to
trigger an
You can refer to:
http://hbase.apache.org/book.html#arch.region.splits
Cheers
On Wed, Aug 26, 2015 at 6:38 PM, jackiehbaseuser jackiehbaseu...@126.com
wrote:
Hi
How many ways are there to pre-split an HBase table?
Thank you very much!
Best regards!
Jackie
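For reference, the shell supports a few documented pre-split forms; a minimal sketch with placeholder table/family names and split points:

```
# Explicit split points
hbase> create 't1', 'f1', SPLITS => ['10', '20', '30']

# Split points read from a file, one per line
hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt'

# Uniform splits computed by an algorithm
hbase> create 't1', 'f1', {NUMREGIONS => 16, SPLITALGO => 'HexStringSplit'}
```

Programmatic pre-splitting via Admin.createTable with split keys is also possible from Java.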
a region reassignment?
On Tue, Aug 25, 2015 at 12:40 PM, Ted Yu yuzhih...@gmail.com wrote:
Can you give a bit more information:
which filesystem you use
which hbase release you use
master log snippet for the long region assignment
Thanks
On Tue, Aug 25, 2015 at 9:30 AM, donmai dood...@gmail.com wrote:
Hi,
I'm curious about how exactly region movement works with regard to data
,
Chandrash3khar Kotekar
Mobile - +91 8600011455
On Mon, Aug 24, 2015 at 6:15 PM, Ted Yu yuzhih...@gmail.com wrote:
Which hbase release are you using ?
Which version of thrift do you use in your app ?
Thanks
On Aug 24, 2015, at 5:00 AM, Chandrashekhar Kotekar
shekhar.kote...@gmail.com wrote:
Hi,
I am trying to use following code to test HBase Thrift interface for
Node.js but it is not working.
getting ' ERROR thrift.ProcessFunction: Internal error processing get'
error in Thrift server logs. Shall I write separate thread about this error?
Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455
On Mon, Aug 24, 2015 at 6:45 PM, Ted Yu yuzhih...@gmail.com wrote:
Looking at pom.xml
When I clicked on http://pastebin.com/embed.php?i=r9uqr8iN , I didn't
see the 'Internal error' message.
I tried the above operation both at home and at work.
Can you double check your code, considering the 'Invalid method name'
message ?
Thanks
On Mon, Aug 24, 2015 at 7:32 AM, Chandrashekhar
Related please see HBASE-13408 HBase In-Memory Memstore Compaction
FYI
On Mon, Aug 24, 2015 at 10:32 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
The split policy also uses the flush size to estimate how to split
tables...
It's sometimes fine to increase this number a bit. Like,
Have you read the following ?
http://hbase.apache.org/book.html#rowkey.design
Cheers
On Sun, Aug 23, 2015 at 8:01 AM, jackiehbaseuser jackiehbaseu...@126.com
wrote:
Hi
How many ways are there to design an HBase rowkey? Please give some examples.
Thank you very much!
Best regards!
qiguo
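The rowkey.design chapter linked above covers techniques such as salting, hashing, and reversing timestamps. A minimal sketch of two of them in plain Java (class and method names, and the 2-byte salt width, are illustrative, not an HBase API):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class RowKeyDesign {
    // Prefix the natural key with a short hash so that writes spread
    // across regions instead of hot-spotting one region (the common
    // "salting"/hash-prefix pattern).
    static byte[] saltedKey(String naturalKey) throws Exception {
        byte[] keyBytes = naturalKey.getBytes(StandardCharsets.UTF_8);
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] hash = md5.digest(keyBytes);
        byte[] rowkey = new byte[2 + keyBytes.length];
        System.arraycopy(hash, 0, rowkey, 0, 2);              // 2-byte salt
        System.arraycopy(keyBytes, 0, rowkey, 2, keyBytes.length); // natural key
        return rowkey;
    }

    // Reverse a timestamp so the newest rows sort first in a scan.
    static long reverseTimestamp(long ts) {
        return Long.MAX_VALUE - ts;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(saltedKey("user123").length);          // 2 + 7 = 9
        System.out.println(reverseTimestamp(Long.MAX_VALUE));     // 0
    }
}
```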
Congratulations Stephen.
On Aug 20, 2015, at 7:09 PM, Andrew Purtell apurt...@apache.org wrote:
On behalf of the Apache HBase PMC, I am pleased to announce that Stephen
Jiang has accepted the PMC's invitation to become a committer on the
project. We appreciate all of Stephen's hard work
You can use opentsdb / ganglia
Cheers
On Aug 20, 2015, at 5:25 AM, jackiehbaseuser jackiehbaseu...@126.com wrote:
Hi
I want to monitor HBase (based on HBase 0.96.2). Which tool can I choose?
Thank you very much!
Best regards!
qiguo
Have you run hbck on testnamespace:testtable ?
bq. major compaction was running for a table.
Which table was being compacted ?
Is it possible for you to share the region server log of the new server ?
Cheers
On Thu, Aug 20, 2015 at 4:58 AM, mukund murrali mukundmurra...@gmail.com
wrote:
Hi
Whether to have an HBase instance on each data node depends on the amount of
data you have and the access pattern you expect.
bq. Is it any useful to have more HBase nodes than HDFS nodes?
I have never seen the above setup.
Do you have an hdfs cluster already ? Can you let us know your use case ?
bq. 'spilling map output' occupied most of whole time.
Do you mind giving more detail on the above (percentage of job runtime) ?
Which release of hadoop / hbase are you using ?
Cheers
On Tue, Aug 18, 2015 at 11:11 PM, dong.yajun dongt...@gmail.com wrote:
Hello,
Which is the fastest way to
gupta anilgupt...@gmail.com wrote:
In my experience, Phoenix is far superior to Hive-HBase integration
for SQL-like querying on HBase, because Phoenix is built on top of
HBase, unlike Hive.
On Tue, Aug 18, 2015 at 9:09 AM, Ted Yu yuzhih...@gmail.com wrote:
To my knowledge
- does the regionserver localise the HFile
by downloading it locally and then uploading it again into the region
directory? Or does it just move it to the region directory and wait for the
next compaction to localise it, as in the regionserver failure case?
On Mon, Aug 17, 2015 at 11:00 PM, Ted Yu yuzhih
ClientSmallScanner is used in cases where a single RPC would fetch scan
results.
It is instantiated when Scan.isSmall() indicates a small scan.
Cheers
On Tue, Aug 18, 2015 at 8:11 AM, whodarewin2006 whodarewin2...@126.com
wrote:
hi,
I was reading the source code of hbase-client recently, and
...@gmail.com
wrote:
Thanks!
Which one is better for SQL-like queries over HBase (queries involving
filters, key range scans, aggregates by column values)?
1. Hive storage handlers
2. or Phoenix
On Tue, Aug 18, 2015 at 9:14 PM, Ted Yu yuzhih...@gmail.com wrote:
For #1, if you want
For #1, take a look at the following in hbase-default.xml :
<name>hbase.client.keyvalue.maxsize</name>
<value>10485760</value>
For #2, it would be easier to answer if you can outline access patterns in
your app.
For #3, adjustment according to current region boundaries is done client
side. Take
wrote:
~8-10 fields of size (5 of 20 bytes each )and 3 fields of size 200 bytes
each.
On Mon, Aug 17, 2015 at 9:55 PM, Ted Yu yuzhih...@gmail.com wrote:
How many fields such as F1 are you considering for embedding in row key ?
Suggested reading:
http://hbase.apache.org/book.html
instead of few if it would have been one large
table ?
On Mon, Aug 17, 2015 at 7:29 PM, Ted Yu yuzhih...@gmail.com wrote:
?
On Mon, Aug 17, 2015 at 8:27 PM, Ted Yu yuzhih...@gmail.com wrote:
For #1, it is the limit on a single keyvalue, not row, not key.
For #2, please see the following:
http://hbase.apache.org/book.html#store.memstore
http://hbase.apache.org/book.html
After a predetermined number of days, would your table(s) stop receiving
read / write requests?
Have you considered using TTL for cleaning old data ?
Cheers
On Aug 14, 2015, at 2:54 AM, ShaoFeng Shi shaofeng...@gmail.com wrote:
Hello the community,
In my case, I want to cleanup the
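The TTL suggested above is set per column family, in seconds; a minimal shell sketch with placeholder names (expired cells are physically removed at major compaction):

```
hbase> alter 't1', {NAME => 'f1', TTL => 2592000}   # 30 days
hbase> major_compact 't1'    # force removal of expired cells
```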
I think Dinesh was referring to:
http://docs.oracle.com/javase/7/docs/api/java/lang/Process.html
Cheers
On Thu, Aug 13, 2015 at 3:23 PM, Nick Dimiduk ndimi...@gmail.com wrote:
dev to bcc
Hi Dinesh,
I'm not sure what you mean by process API. Are you launching the hbase
shell as an external
and blockLocalityIndex is 1.0 (min 0.0)
Yes, after manual triggering the deletes were purged. But we don't want to
do it manually. Any other config to avoid such a scenario?
Thanks
On Mon, Aug 10, 2015 at 8:01 PM, Ted Yu yuzhih...@gmail.com wrote:
What release of hbase are you using ?
Can you pastebin region server log with DEBUG logging ?
I guess you have tried issuing manual command. Did it work ?
Thanks
On Mon, Aug 10, 2015 at 7:02 AM, mukund murrali mukundmurra...@gmail.com
wrote:
Any one help us in this :( Are we missing
I noticed the compaction was on this single file in
qproxy_request_info2,\x00\x00\x01_\x00\xC8\x01g\xA26,1415980522820.
fc5207e028a3f5e549bf79c089561793.
Can you examine this file using
http://hbase.apache.org/book.html#_hfile_tool ?
BTW 0.96 is really old. Various 1.x releases have been made.
14:51:26 UTC 2015 in 4511 milliseconds
The filesystem under path '/' is CORRUPT
-
Thank you for your time.
*From*: Ted Yu yuzhih...@gmail.com
*Sent*: Friday, August 7, 2015 16:07
*To*: user@hbase.apache.org, av...@datknosys.com
*Subject*: Re: RegionServers
Does 10.240.187.182 (http://10.240.187.182:50010/) correspond with w-0 or m?
Looks like hdfs was intermittently unstable.
Have you run fsck ?
Cheers
On Fri, Aug 7, 2015 at 12:59 AM, Adrià Vilà av...@datknosys.com wrote:
Hello,
HBase RegionServers fail once in a while:
- it can be any
.
James
On Fri, Aug 7, 2015 at 11:05 AM, Ted Yu yuzhih...@gmail.com wrote:
Some WAL related files were marked corrupt.
Can you try repairing them ?
Please check namenode log.
Search HDFS JIRA for any pending fix - I haven't tracked HDFS movement
closely recently.
Thanks
Which release of hbase are you using ?
Can you pastebin the log(s) from client / server side ?
Table regions spreading across region servers is a common scenario. You
should be able to get this working.
Thanks
On Thu, Aug 6, 2015 at 1:43 PM, Lydia Ickler ickle...@googlemail.com
wrote:
Hi,
I
│ ├── da4fe34334824f6ea9f5a12dfb93cab9
│ └── f3a15c326535420cb68d505ce9798d40
└── recovered.edits
└── 2.seqid
4 directories, 12 files
I can test the same on 1.1.0 if you want, just let me know.
2015-08-05 13:18 GMT-04:00 Ted Yu yuzhih
.
At 2015-08-03 22:59:32, Ted Yu yuzhih...@gmail.com wrote:
If you use PageFilter alone, that sounds Okay. What happens when
PageFilter
is combined with other filters ?
Cheers
On Mon, Aug 3, 2015 at 7:53 AM, whodarewin2006 whodarewin2...@126.com
wrote:
hi,Ted
I think we
Can you provide some more detail please ?
Release of hbase
region server log (snippet) showing that compaction is skipped
Thanks
On Wed, Aug 5, 2015 at 10:15 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Quick question here.
Compactions seem to be triggered when we flush
Your hbase-site.xml is effectively empty.
Have you followed this guide ?
http://hbase.apache.org/book.html#quickstart
Cheers
On Tue, Aug 4, 2015 at 7:23 AM, Daniel de Oliveira Mantovani
daniel.oliveira.mantov...@gmail.com wrote:
Good morning,
I
There have been some improvements to FuzzyRowFilter.java since 1.1.0.1
e.g. HBASE-13761
If you have a chance, please try out 1.1.1 release.
Cheers
On Tue, Aug 4, 2015 at 4:01 AM, Michal Haris michal.ha...@visualdna.com
wrote:
Hi all, I'm having the issue of OutOfOrderScannerNextException,
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/testuser/zookeeper</value>
</property>
</configuration>
I even tried to use HBase's ZooKeeper and I get the same error, and the logs
don't tell me anything.
:(
On Tue, Aug 4, 2015 at 11:55 AM, Ted Yu yuzhih...@gmail.com
)
.addFamily(FAMILY)
.setFilter(new ColumnPrefixFilter(Bytes.toBytes(sess)));
should work.
-Vlad
On Mon, Aug 3, 2015 at 2:41 PM, Ted Yu yuzhih...@gmail.com
wrote:
Is there a column with prefix 'name' whose column name is longer than 'name'
(such as 'name0') ?
If not, take a look at MultipleColumnPrefixFilter
Cheers
On Mon, Aug 3, 2015 at 1:39 PM, Dmitry Minkovsky dminkov...@gmail.com
wrote:
I'm trying to construct a `Get` that does two things:
- Gets a
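As an aside, the MultipleColumnPrefixFilter suggested above is also available from the shell's filter language; a sketch with placeholder table name and prefixes:

```
hbase> scan 't1', {FILTER => "MultipleColumnPrefixFilter('name', 'addr')"}
```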
will scan region servers one by one; if we get enough records, we
can stop the scan and return the records. Is this OK?
At 2015-07-31 21:04:43, Ted Yu yuzhih...@gmail.com wrote:
and if you are scanning for more than 7
days, then you will read the older files anyway, no?
JM
On 2015-08-02 05:57, Ted Yu yuzhih...@gmail.com wrote:
Can you take jstack of the region server next time this happens ?
Btw please consider upgrading.
0.96.x is too old.
On Aug 3, 2015, at 5:18 AM, Chang Chen baibaic...@gmail.com wrote:
Hi All
We use HBase 0.96.1.1, and find a very strange issue which looks
like CompactSplitThread is
complexity to the job and gives up
the atomicity/consistency guarantees as new writes hit both column
families.
On Sat, Aug 1, 2015 at 9:07 AM, Ted Yu yuzhih...@gmail.com wrote:
Can you achieve your goal with two scans ?
The first scan specifies TimeRange corresponding to last day. This scan
Here is refined version:
http://pastebin.com/WXjYKmBG
Cheers
On Sun, Aug 2, 2015 at 2:57 AM, Ted Yu yuzhih...@gmail.com wrote:
Dave:
I wonder if Filter response can be enhanced in the following manner:
http://pastebin.com/sb6apTPm
My approach is based on using essential column family
Have you considered using essential column family feature (through Filter) ?
In your case A would be the essential column family.
Within TimeRange for recent data, the filter would return both column
families.
Outside the TimeRange, only family A is returned.
Cheers
On Sat, Aug 1, 2015 at 7:17
Here're some other memstore related config parameters:
hbase.regionserver.global.memstore.size
hbase.regionserver.global.memstore.size.lower.limit
hbase.hregion.preclose.flush.size
hbase.hregion.memstore.block.multiplier
hbase.hregion.percolumnfamilyflush.size.lower.bound
The last one is for
Which HBase release are you using ?
Thanks
On Jul 31, 2015, at 3:51 AM, Shashi Vishwakarma shashi.vish...@gmail.com
wrote:
Hi
I am trying to create a table in an HBase namespace, but it is giving me a
permission exception, though I confirmed with the admin that he has given
permission to my user.
Coordination across different region servers is likely to reduce efficiency
of the filter.
Can you apply other attributes of Scan or combine with other filter(s) ?
Cheers
On Fri, Jul 31, 2015 at 4:42 AM, whodarewin2006 whodarewin2...@126.com
wrote:
hi,all
I am using PageFilter to limit
How many region servers do you have in the cluster ?
Would there be concurrent write load on the cluster if you choose to run major
compaction ? I ask this because the concurrent write would be slowed down
by the major compaction and compacting 10 TB of data would take some time.
Cheers
On Wed,
I think the decision would be made by 0.94 RM.
Cheers
On Tue, Jul 28, 2015 at 2:04 PM, Varun Sharma va...@pinterest.com.invalid
wrote:
Is it possible to upload one or is there a guide on how to upload ?
Thanks
Varun
On Mon, Jul 27, 2015 at 2:24 PM, Ted Yu yuzhih...@gmail.com wrote
By PSQL did you mean PostgreSQL ?
Cheers
On Mon, Jul 27, 2015 at 12:39 AM, Jeetendra Gangele gangele...@gmail.com
wrote:
I have production data in PSQL and I want to migrate the data to HBase.
Also, if there are any changes in my PSQL data, I want to update
HBase.
Since I am
Please don't send job related email to this mailing list.
Cheers
On Mon, Jul 27, 2015 at 12:03 PM, Amit Tewari amittew...@gmail.com wrote:
Hi All
We have basic experience running HBase up to about 1TB data size. However
we are expecting our data size to increase soon and are worried that we
Please take a look at the following JIRA:
HBASE-12366 Add login code to HBase Canary tool
Cheers
On Mon, Jul 27, 2015 at 1:05 PM, Rose, Joseph
joseph.r...@childrens.harvard.edu wrote:
Folks,
I feel like I’m missing something basic, here. I’m trying to do a
privileged action using the
Talat has given summary of how to view config values.
You can also see the default values by searching
http://hbase.apache.org/book.html
Description for the config is available on the refguide.
Cheers
On Wed, Jul 22, 2015 at 1:32 AM, Talat Uyarer ta...@uyarer.com wrote:
Hi Shushant,
bq. Is it the size of a particular row
That was likely the cause.
Can you post the full stack trace ?
Thanks
On Sat, Jul 25, 2015 at 6:28 PM, F. Jerrell Schivers jerr...@bordercore.com
wrote:
Hello,
I'm getting the following error when I try to bulk load some data into
an HBase table at
)
--Jerrell
You can do rolling upgrade from 0.98.6.1 release to 1.1.0 release.
Cheers
Friday, July 24, 2015, 3:50 PM +0800 from Song Geng soul.gr...@me.com:
Hi,
I am a novice at hbase. Now I am trying to figure out an issue which is about
client scan timeout after delete. Basically, the reason is
)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
On Fri, Jul 24, 2015 at 1:05 PM, Ted Yu yuzhih...@gmail.com
Can you provide us more information:
Release of HBase you use
Configuration change you made prior to restarting
By 'compaction is gone', do you mean that locality became poor again ?
Can you pastebin region server log when compaction got stuck ?
Thanks
Saturday, July 25, 2015, 2:20 AM +0800
What release of HBase do you use ?
I looked at the two log files but didn't find such information.
In the log for node 118, I saw something such as the following:
Failed to connect to /10.0.229.16:50010 for block, add to deadNodes and
continue
Was hdfs healthy around the time region server
For #1, with HDFS replication set to 3, HFile replication is handled by
hdfs. There shouldn't be HFile loss once bulk load completes.
For #3, multiple HFiles may be generated per region.
bq. If there are multiple, does loadIncrementalHFiles merge these HFiles into 1
There is no merging of HFiles in bulk
bq. the assignment is not always preserved
Can you provide more information on this scenario ?
Master should have retained region assignment across cluster restart.
If you can pastebin relevant portion of master log w.r.t. the region(s)
whose location was not retained, that would be nice.
this to be cleaned and so you need to major compact them.
HTH,
JM
2015-07-15 10:43 GMT-04:00 Ted Yu yuzhih...@gmail.com:
Can you check corresponding region server to see if the server was
operating correctly ?
I went over some previous threads where some region server was using wrong
zookeeper quorum.
Cheers
On Thu, Jul 16, 2015 at 7:35 AM, dgoldenberg123 dgoldenberg...@gmail.com
wrote:
Could someone elaborate
Can you show us how you formed Get request for row key 0?
In general you can add logging in your observer so that you can find whether it
is called by examining region server log.
On Jul 16, 2015, at 6:18 AM, James Teng tenglinx...@outlook.com wrote:
hi all, today when I tried to use hbase
How many servers are there in zookeeper quorum ?
Have you checked the log of zookeeper leader round the time master crashed ?
Cheers
On Wed, Jul 15, 2015 at 7:14 PM, Jo Young Zhang joyoungzh...@gmail.com
wrote:
I found the hbase cluster crashed on the hour.
The HBase master log is as follows:
bq. that some times merge just does not work.
Can you identify under what scenario the merge doesn't work (through closer
inspection of the region server log - assuming you have DEBUG logging
turned on) ?
bq. Are there minimum requirements for two regions to be merged?
If the two adjacent
bq. some minor additions (new API) in 0.9.2 [5]
I don't seem to find [5].
Mind sharing the link ?
Thanks
On Wed, Jul 8, 2015 at 11:42 AM, Srikanth Srungarapu srikanth...@gmail.com
wrote:
Hi Folks,
Currently, HBase is using Thrift 0.9.0 version, with the latest version
being 0.9.2.
you.
One more thing that is not clear to me is what I can do with ~4000 znodes in
/hadoop-ha/testhbase1/rmstore/ZKRMStateRoot/RMAppRoot
What will happen with them if I do nothing? Will the system try to
complete all of these applications?
Thank you.
On 07 Jul 2015, at 00:16, Ted Yu
Thank you.
On 06 Jul 2015, at 17:37, Ted Yu yuzhih...@gmail.com wrote:
bq. I had to delete and recreate it
What error(s) did you get when trying to restart the region server ? Have
you checked its log files ?
bq. start balancer manually, but it returned false
Can you check
Have you looked at the recently fixed bug HBASE-13329 ?
From stack trace, there is big similarity.
The fix would be in the upcoming 1.2.0 release.
FYI
On Fri, Jul 3, 2015 at 11:30 PM, Dinh Duong Mai duongmd.2...@gmail.com
wrote:
Dear HBase team,
I reported an issue to HBase issue tracking
are causing the problem?
Can I delete all of these applications?
On 06 Jul 2015, at 18:45, Ted Yu yuzhih...@gmail.com wrote:
Do you see in the master log something similar to the following ?
master.HMaster: Not running balancer because 1 region(s) in transition
You can search
The following is available on Master JMX:
"SnapshotNumOps" : {
  "description" : "Number of ops for snapshot stats",
  "value" : 104439
},
"SnapshotAvgTime" : {
  "description" : "Average time for snapshot stats",
  "value" : 0.375
},
FYI
On Fri, Jul 3, 2015 at 1:15 AM, Akmal
Minor correction to my previous email:
BinaryComparator instance was initialized with value.getBytes().
Comparison to x.getBytes() and null gave return value of positive number
with the aforementioned fix.
Cheers
On Fri, Jul 3, 2015 at 3:04 PM, Ted Yu yuzhih...@gmail.com wrote:
You want
You want to check whether one of the conditions is met, right ?
Looking at the second variant of checkAndPut(), HTable uses
BinaryComparator.
I wrote some simple code involving call to BinaryComparator.compareTo().
I had to fix the following:
Caused by: java.lang.NullPointerException
at
Pardon.
Comparison to a.getBytes() gave positive value.
Comparison to null gave positive value.
Comparison to x.getBytes() gave negative value.
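The comparisons above follow the unsigned lexicographic byte ordering that BinaryComparator delegates to (HBase's Bytes.compareTo). A minimal stdlib sketch of that ordering, not the HBase class itself (class/method names are illustrative; Arrays.compareUnsigned requires Java 9+):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ByteCompareDemo {
    // Unsigned lexicographic byte-array comparison: the same ordering
    // HBase uses for row keys, values, and checkAndPut comparisons.
    static int cmp(String a, String b) {
        return Arrays.compareUnsigned(
            a.getBytes(StandardCharsets.UTF_8),
            b.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(cmp("m", "a") > 0);        // true: 'm' sorts after 'a'
        System.out.println(cmp("m", "x") < 0);        // true: 'm' sorts before 'x'
        System.out.println(cmp("name", "name0") < 0); // true: a prefix sorts first
    }
}
```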
You may have read http://hbase.apache.org/book.html#version.delete
Please see 'Scan Improvements in HBase 1.1.0' under
https://blogs.apache.org/hbase/
Cheers
On Thu, Jul 2, 2015 at 6:54 PM, Song Geng soul.gr...@me.com wrote:
Hi everyone,
I am a complete novice in hbase and the community.
Looking at Hbase.thrift, there is no parameter for passing numVersions.
You can log a JIRA for this enhancement.
Cheers
On Wed, Jul 1, 2015 at 9:39 AM, Navdeep Agrawal
navdeep_agra...@symantec.com wrote:
Hi ,
I am exploring the Thrift service in HBase from Python. We have a scenario
where we
Looks like the hbase release you use doesn't have HBASE-12706 which is in
hbase 1.1.0
FYI
On Mon, Jun 29, 2015 at 2:40 AM, 俞忠静 yuzhongj...@bianfeng.com wrote:
Hi dear all,
I have an existing zookeeper ensemble, which is
kafka01:2181,kafka02:2182,kafka03:2183,data04:2184,data05:2185 (port
How do you configure BucketCache ?
Thanks
On Mon, Jun 29, 2015 at 8:35 PM, Louis Hust louis.h...@gmail.com wrote:
BTW, the hbase is hbase0.98.6 CHD5.2.0
2015-06-30 11:31 GMT+08:00 Louis Hust louis.h...@gmail.com:
Hi, all
When I scan a table using hbase shell, got the following
Please provide a bit more information:
the hbase / hadoop release you use
the type of data block encoding for the table
How often did this happen ?
thanks
On Sat, Jun 27, 2015 at 3:44 AM, غلامرضا g.r...@chmail.ir wrote:
hi
I got this exception in a reduce task when the task tried to increment
bq. non strictly consistency of Zookeeper
Can you elaborate on what the above means ?
please read this:
http://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkGuarantees
Cheers
On Sat, Jun 27, 2015 at 7:20 AM, Shushant Arora shushantaror...@gmail.com
wrote:
How Hbase uses
The CFP ends on July 10th for Big Data EU.
See:
http://events.linuxfoundation.org/events/apache-big-data-europe/program/cfp
Cheers
On Thu, Jun 25, 2015 at 11:29 AM, Nick Dimiduk ndimi...@apache.org wrote:
Hello developers, users, speakers,
As part of ApacheCON's inaugural Apache: Big Data,
Have you read this thread http://search-hadoop.com/m/YGbb1sOLh2W9Z9z ?
Cheers
On Thu, Jun 25, 2015 at 10:10 AM, Mateusz Kaczynski mate...@arachnys.com
wrote:
One of our clusters running HBase 0.98.6-cdh5.3.0 used to work (relatively)
smoothly until a couple of days ago, when all of a sudden
Please see the following section under
http://hbase.apache.org/book.html#zookeeper :
How many ZooKeepers should I run?
Cheers
On Thu, Jun 25, 2015 at 6:48 PM, Bharath Kumar bharath...@gmail.com wrote:
Hi Team,
I have a query , with having an ensemble of zookeeper instances .
Does
+1
Ran test suite against Java 1.8.0_45
Checked signature
Practiced basic shell commands
On Tue, Jun 23, 2015 at 4:25 PM, Nick Dimiduk ndimi...@apache.org wrote:
I'm happy to announce the first release candidate of HBase 1.1.1
(HBase-1.1.1RC0) is available for download at
bq. data node processes doesn’t die
Which hadoop version are you using ?
Have you read the following section in
http://hbase.apache.org/book.html#_hbase_and_hdfs ?
HDFS takes a while to mark a node as dead. You can configure HDFS to avoid
using stale DataNodes
Cheers
On Wed, Jun 24, 2015 at
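The stale-DataNode behavior mentioned above is configured in hdfs-site.xml; a sketch enabling it (property names are from hdfs-default.xml; verify values against your Hadoop release):

```xml
<property>
  <name>dfs.namenode.avoid.read.stale.datanode</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.avoid.write.stale.datanode</name>
  <value>true</value>
</property>
<property>
  <!-- mark a DataNode stale after 30s without a heartbeat -->
  <name>dfs.namenode.stale.datanode.interval</name>
  <value>30000</value>
</property>
```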
bq. my hbase client keeps stuck
Can you provide stack trace for the client ?
Were region servers operating properly ? Can you check server logs during
that time frame ?
Cheers
On Thu, Jun 18, 2015 at 1:54 AM, Neutron sharc neutronsh...@gmail.com
wrote:
Btw, hbase 0.94.26 is on top of HDFS
Can you provide a bit more detail on how time-based increment is expected
to work ?
RateLimiter and its subclasses in hbase codebase are concerned with rpc
throttling.
Cheers
On Mon, Jun 22, 2015 at 7:06 AM, mukund murrali mukundmurra...@gmail.com
wrote:
Does hbase provide time based increment
://pastebin.com/3pgVYpYW
Thanks
On Thu, Jun 11, 2015 at 10:48 PM, Ted Yu yuzhih...@gmail.com wrote:
Looking at the revision history for ClientSmallReversedScanner.java which
appeared in the stack trace, there have been several bug fixes on top of
the hbase release you're using.
Can you try
I looked at RestoreSnapshotHelper.java from tip of 0.98 branch but didn't
see where StringBuilder.append() is called.
Can you tell us which hbase release you're using so that the line numbers
can be matched against source code ?
Thanks
On Thu, Jun 18, 2015 at 11:44 AM, Tianying Chang
Possibly the error was caused by orphaned ZK node for table 'tsdb'.
See CreateTableHandler.prepare().
After cleaning the orphaned znode, you should be able to create the table.
Cheers
On Thu, Jun 18, 2015 at 12:44 PM, Tianying Chang tych...@gmail.com wrote:
actually, I found even when I try