How to debug/run HBase in Eclipse

2011-11-20 Thread
I run HRegionServer in Eclipse with the program argument 'start'. 2011-11-21 09:35:12,384 WARN [main] regionserver.HRegionServerCommandLine(56): Not starting a distinct region server because hbase.cluster.distributed is false, but $HBASE_HOME/conf/hbase-site.xml has the following contents:

java.io.IOException: Connection reset by peer

2011-11-17 Thread
2011-11-18 13:35:06,252 INFO org.apache.hadoop.hbase.regionserver.HRegion: completed compaction on region EH, http://www.110.com/panli/s?a=r&tid=0&rid=18&cid=626&q=092B993648240AAD00399A4F0B3F7191,1320518253297.5bd2198a6cfe6c6a8d74cb8ee1974367. after 0sec 2011-11-18 13:45:13,213 WARN org.apache.had

Re: thread safety of incrementColumnValue

2011-07-23 Thread
> 1) the fact that HTable isn't thread-safe, and 2) how counters work. Even if you are incrementing counters, you shou
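
A minimal sketch of the two points quoted in this reply, assuming the 0.90-era client API (the table and column names are invented for illustration): HTable instances are not safe to share across threads, while the increment itself is applied atomically on the region server.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class CounterSketch implements Runnable {
  // Hypothetical table/column names, not taken from the thread.
  private static final byte[] FAMILY = Bytes.toBytes("stats");
  private static final byte[] QUALIFIER = Bytes.toBytes("hits");
  private final byte[] row;

  CounterSketch(byte[] row) { this.row = row; }

  public void run() {
    try {
      // HTable is not thread-safe, so each worker thread builds its own
      // instance (or borrows one from an HTablePool) instead of sharing one.
      Configuration conf = HBaseConfiguration.create();
      HTable table = new HTable(conf, "counters");
      try {
        // The increment is atomic on the region server, so concurrent callers
        // do not lose updates even without client-side synchronization.
        long current = table.incrementColumnValue(row, FAMILY, QUALIFIER, 1L);
        System.out.println("counter is now " + current);
      } finally {
        table.close();
      }
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}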

How to convert an HTable from no compression to LZO compression.

2011-07-14 Thread
How to convert an HTable from no compression to LZO compression. --

Re: region max filesize

2011-07-12 Thread
Create a simple m/r job to migrate the data to new tables and drop the old ones. Could you paste your code here? On Wed, Jul 13, 2011 at 7:43 AM, Ravi Veeramachaneni < ravi.veeramachan...@gmail.com> wrote: > Albert, > > You're doing it partially right. The change will take effect only for new >

Re: Cannot Run HBase + Hadoop on a single Node Cluster - Hdfs Problem

2011-07-06 Thread
Yes, HBase 0.90 worked well with hadoop-0.20.203.0. But I want to go back to HBase 0.20.6. How do I go back to hadoop-0.20.2? On Fri, May 20, 2011 at 2:55 AM, Jean-Daniel Cryans wrote: > The master log doesn't contain anything special, if anything weird > happened it would have been right after what

Re: why does merging regions give an error

2011-04-24 Thread
catch (IOException e) { e = RemoteExceptionHandler.checkIOException(e); LOG.error("meta scanner error", e); metaScanner.close(); throw e; } } On Sun, Apr 24, 2011 at 3:32 PM, 陈加俊 wrote: > @Override &

Re: why does merging regions give an error

2011-04-24 Thread
(); if(latestRegion != null) { regions.add(latestRegion); } return regions.toArray(new HRegionInfo[regions.size()]); } The method nextRegion may return null, so there is a bug in HBase 0.20.6. On Sun, Apr 24, 2011 at 2:56 PM, 陈加俊 wrote: > I run jruby as follows : > &g

why does merging regions give an error

2011-04-23 Thread
I run JRuby as follows: # Name of this script NAME = "merge_table" # Print usage for this script def usage puts 'Usage: %s.rb TABLE_NAME' % NAME exit! end # Get configuration to use. c = HBaseConfiguration.new() # Set hadoop filesystem configuration using the hbase.rootdir. # Otherwise, we

Re: Region server OOME

2011-04-17 Thread
Yes, I have this problem too. So I want to know how memory is allocated in HBase. Why the OOME? How do I limit the heap space in HBase? Or does it not calculate the memory? On Mon, Apr 18, 2011 at 9:01 AM, Weiwei Xiong wrote: > Hi, > > > My HBase deployment has been working great but recently the regi

Re: Can't drop regionInTransition

2011-04-14 Thread
I just want to know how to delete a region that is stuck in transition. It has gone on for days. Regions In Transition: 1 1 -> name=cjjHTML, http://sports.cn.yahoo.com/ypen/20110308/246339_4.html,1302739163242, state=PENDING_OPEN On Fri, Apr 15, 2011 at 12:27 PM, Stack wrote: > For sure no old ve

heap memory allocation

2011-04-14 Thread
How is heap memory allocated in the regionserver? Metrics: request=0.0, regions=1283, stores=2304, storefiles=1968, storefileIndexSize=246, memstoreSize=791, compactionQueueSize=0, usedHeap=5669, maxHeap=11991, blockCacheSize=2095157424, blockCacheFree=419681872, blockCacheCount=24410, blockCacheH

java.io.IOException: Filesystem closed

2011-04-13 Thread
2011-04-13 20:27:08,620 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Error closing cjjHTML, http://www.csh.gov.cn/article_346937.html,1299079217805 java.io.IOException: Filesystem closed at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:234) at org.apache.had

Re: How to add family on HBase-0.90.2

2011-04-12 Thread
required in 0.20.6 > either so if it was accepting it it was a bug. > > J-D > > On Tue, Apr 12, 2011 at 1:54 AM, 陈加俊 wrote: > > I can add family by follow command In HBase-0.20.6 > > > >>alter 'cjjHTML', {NAME => 'responseHeader'

How to add family on HBase-0.90.2

2011-04-12 Thread
I can add a family with the following command in HBase-0.20.6: >alter 'cjjHTML', {NAME => 'responseHeader', METHOD => 'add'} But in HBase-0.90.2 I can't do it. How? -- Thanks & Best regards jiajun
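
In 0.90 the alter is rejected while the table is enabled; the table has to be disabled first, changed, then re-enabled. A rough equivalent through the Java admin API, assuming the 0.90 client (the table and family names are the ones from the post):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class AddFamilySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    // Schema changes in 0.90 require the table to be offline.
    admin.disableTable("cjjHTML");
    admin.addColumn("cjjHTML", new HColumnDescriptor("responseHeader"));
    admin.enableTable("cjjHTML");
  }
}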

Re: Client Error: "java.net.ConnectException: Connection refused: no further information" after updating HBase-0.20.6 to HBase-0.90.2

2011-04-11 Thread
Sorry, I had made a mistake with hbase.zookeeper.property.clientPort. On Tue, Apr 12, 2011 at 1:08 PM, 陈加俊 wrote: > WARN : 04-12 12:54:13 Session 0x0 for server null, unexpected error, > closing socket connection and attempting reconnect > java.net.ConnectException: Connection refused: n

Client Error: "java.net.ConnectException: Connection refused: no further information" after updating HBase-0.20.6 to HBase-0.90.2

2011-04-11 Thread
WARN : 04-12 12:54:13 Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused: no further information at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(Soc

Re: too many regions cause OOME?

2011-04-11 Thread
I want to know why openRegion can cause the heap OOME, and how to calculate the size of the heap space. On Tue, Apr 12, 2011 at 10:49 AM, Jean-Daniel Cryans wrote: > Were they opening the same region by any chance? > > On Mon, Apr 11, 2011 at 6:13 PM, 陈加俊 wrote: > > There is no big scan

Re: too many regions cause OOME?

2011-04-11 Thread
Yes, I always scan (or get or put) rows. On Tue, Apr 12, 2011 at 10:49 AM, Jean-Daniel Cryans wrote: > Were they opening the same region by any chance? > > On Mon, Apr 11, 2011 at 6:13 PM, 陈加俊 wrote: > > There is no big scan,and just norma load. Also strange is when one RS &g

Re: too many regions cause OOME?

2011-04-11 Thread
g scans? Or just normal load? > > J-D > > On Mon, Apr 11, 2011 at 5:50 PM, 陈加俊 wrote: > > my configuration is follows: > > > > hbase.client.write.buffer > > 2097152 > >

Re: too many regions cause OOME?

2011-04-11 Thread
I > still don't know if that's really your issue or it's a configuration > issue (which I have yet to see). > > J-D > > On Mon, Apr 11, 2011 at 5:41 PM, 陈加俊 wrote: > > Can I limit the numbers of regions on one RegionServer ? > > > > On Tue, Apr

Re: too many regions cause OOME?

2011-04-11 Thread
There is one table that has 1.4T*3 (replication) of data. On Tue, Apr 12, 2011 at 8:38 AM, Doug Meil wrote: > > Re: " maxHeap=3991" > > Seems like an awful lot of data to put in a 4gb heap. > > -Original Message- > From: 陈加俊 [mailto:cjjvict...@gmail.com] > Se

Re: too many regions cause OOME?

2011-04-11 Thread
Can I limit the number of regions on one RegionServer? On Tue, Apr 12, 2011 at 8:37 AM, Jean-Daniel Cryans wrote: > It's really a lot yes, but it could also be weird configurations or > too big values. > > J-D > > On Mon, Apr 11, 2011 at 5:35 PM, 陈加俊 wrote: > >

too many regions cause OOME?

2011-04-11 Thread
Are there too many regions? Is the memory enough? HBase-0.20.6 2011-04-12 00:16:31,844 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: OutOfMemoryError, aborting. java.lang.OutOfMemoryError: Java heap space at java.io.BufferedInputStream.<init>(BufferedInputStream.java:178) at or

Re: I want to update HBase to 0.90.x

2011-04-07 Thread
e candidate yet. > > St.Ack > > On Fri, Apr 1, 2011 at 8:38 AM, 陈加俊 wrote: > > Which is more stable ? > > > > On Fri, Apr 1, 2011 at 11:07 PM, Stack wrote: > >> > >> Our current stable release is 0.90.1 but 0.90.2 should be out start of > >> next w

Re: I want to update HBase to 0.90.x

2011-04-05 Thread
re so I'd like to know if > there > >> are any downsides to that approach. > >> > >> Thanks, > >> Hari > >> > >> On Tue, Apr 5, 2011 at 5:01 PM, Eric > >> Charleswrote: > >> > >>> On 5/04/2011 10:34, 陈加俊 wro

Where to checkout build.xml

2011-04-05 Thread
I want to build HBase 0.90.2; where do I check out build.xml? I can't find it after checking out https://svn.apache.org/repos/asf/hbase/trunkhbase. -- Thanks & Best regards jiajun

Re: I want to update HBase to 0.90.x

2011-04-05 Thread
in production -- but > wait till start of next week to hear community view; there may be > issues found with the candidate yet. > > St.Ack > > On Fri, Apr 1, 2011 at 8:38 AM, 陈加俊 wrote: > > Which is more stable ? > > > > On Fri, Apr 1, 2011 at 11:07 PM, Stack wrot

Re: Regionserver crashes frequently these days

2011-04-01 Thread
I will get the GC time from gc-hbase.log at the next chance; the GC log from around 22:00 is lost. On Sat, Apr 2, 2011 at 12:17 AM, Stack wrote: > On Fri, Apr 1, 2011 at 9:01 AM, 陈加俊 wrote: > > 2011-04-01 19:13:40,413 WARN org.apache.hadoop.hbase.regionserver.Store: > > Fail

Re: What is overload?

2011-04-01 Thread
close. The RegionServer is explaining why it closed > the region (It was closed because of a balancer directive, the > balancer determined the regionserver 'overloaded'). > > St.Ack > > On Fri, Apr 1, 2011 at 8:32 AM, 陈加俊 wrote: > > I extracted from regionserver

Regionserver crashes frequently these days

2011-04-01 Thread
The regionserver has crashed frequently these days, but it worked fine for many months before. Some lines from one RS's log are as follows: 2011-04-01 19:13:40,413 WARN org.apache.hadoop.hbase.regionserver.Store: Failed open of hdfs:// master.uc.uuwatch.com:9000/hbase/cjjHTML/1494733632/page/51734

Re: I want to update HBase to 0.90.x

2011-04-01 Thread
Apr 1, 2011 at 12:39 AM, 陈加俊 wrote: > > I want to update the hbase to 0.90.x from 0.20.6 ,which version should I > use > > now ? and anynone has steps in detail ? > > > > -- > > Thanks & Best regards > > jiajun > > > -- Thanks & Best regards jiajun

What is overload?

2011-04-01 Thread
I extracted this from the regionserver's log: 2011-04-01 19:17:22,716 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_CLOSE: cjjHTML,http://news.ifeng.com/gundong/detail_2011_03/15/515 4913_0.shtml,1300245193111: Overloaded 2011-04-01 19:17:22,716 INFO org.apache.hadoop.hbase.regionser

I want to update HBase to 0.90.x

2011-04-01 Thread
I want to update HBase from 0.20.6 to 0.90.x. Which version should I use now? And does anyone have detailed steps? -- Thanks & Best regards jiajun

Re: the files of one table are so big?

2011-03-31 Thread
n > not enough information about what you're trying to do. > > J-D > > On Thu, Mar 31, 2011 at 12:27 AM, 陈加俊 wrote: > > Can I skip the log files? > > > > On Thu, Mar 31, 2011 at 2:17 PM, 陈加俊 wrote: > >> > >> I found there is so many log files unde

What does this mean?

2011-03-31 Thread
2011-03-30 20:25:12,798 WARN org.apache.zookeeper.ClientCnxn: Exception closing session 0x932ed83611540001 to sun.nio.ch.SelectionKeyImpl@54e184e6 java.io.IOException: Read error rc = -1 java.nio.DirectByteBuffer[pos=0 lim=4 cap=4] at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCn

Re: the files of one table are so big?

2011-03-31 Thread
Can I skip the log files? On Thu, Mar 31, 2011 at 2:17 PM, 陈加俊 wrote: > I found there is so many log files under the table folder and it is very > big ! > > > On Thu, Mar 31, 2011 at 2:16 PM, 陈加俊 wrote: > >> I fond there is so many log files under the table fold

Re: the files of one table are so big?

2011-03-30 Thread
I found there are so many log files under the table folder, and it is very big! On Thu, Mar 31, 2011 at 2:16 PM, 陈加俊 wrote: > I fond there is so many log files under the table folder and it is very big > ! > > > > > On Thu, Mar 31, 2011 at 1:37 PM, 陈加俊 wrote: > >>

Re: the files of one table are so big?

2011-03-30 Thread
I found there are so many log files under the table folder, and it is very big! On Thu, Mar 31, 2011 at 1:37 PM, 陈加俊 wrote: > thank you JD > > the type of key is Long , and the family's versions is 5 . > > > > On Thu, Mar 31, 2011 at 12:42 PM, Jean-Daniel Cryans

Re: the files of one table are so big?

2011-03-30 Thread
3647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'simpleTemplet', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536&#

Re: the files of one table are so big?

2011-03-30 Thread
me, > qualifier and timestamp (plus length of each). Depending on how big > your keys are, it can grow your total dataset. So it's not just a > function of value sizes. > > J-D > > On Wed, Mar 30, 2011 at 9:34 PM, 陈加俊 wrote: > > I scan the table ,It just has 29000 rows and

the files of one table are so big?

2011-03-30 Thread
I scanned the table; it has just 29000 rows and each row is under 1 KB. I saved it to files totaling 18M. But after I used /app/cloud/hadoop/bin/hadoop fs -copyFromLocal, it is 99G. Why? -- Thanks & Best regards jiajun

Re: How to copy HTable from one cluster to another cluster ?

2011-03-30 Thread
011 at 12:44 AM, 陈加俊 wrote: > > My cluster 1 is in shanghai and cluster 2 is in beijing. So the cluster 1 > > can't see the cluster 2. > > > > any hints? > > > > 2011/3/10 shixing > > > >> Because the distcp's per map runs long time th

Re: Open Scanner Latency

2011-03-30 Thread
terrible ... org.apache.hadoop.hbase.client.ScannerTimeoutException: 338424ms passed since the last invocation, timeout is currently set to 30 On Tue, Feb 1, 2011 at 6:45 AM, Wayne wrote: > The file system buffer cache explains what is going on. The open scanner > reads the first block and

Re: I lost the table '.META.', could I restore the other tables from the files in HDFS

2011-03-30 Thread
ror.new("Not supported yet") > elsif > > > You can't provide a table name. Run the script with just the path to the > table (no table name as the second argument) > > It worked. > > Good luck, > Cosmin > > On Mar 30, 2011, at 11:34 AM, 陈加俊 wrote: > &g

Re: I lost the table '.META.', could I restore the other tables from the files in HDFS

2011-03-30 Thread
Any hints? On Wed, Mar 30, 2011 at 3:40 PM, 陈加俊 wrote: > Thank you > > bin/hbase org.jruby.Main bin/add_table.rb /hbase/cjjProgramme cjjProgramme > bin/add_table.rb:80: Not supported yet (IOError) > > My HBase's version is 0.20.6 and HDFS version is 0.20.2. > > On

Re: How to copy HTable from one cluster to another cluster ?

2011-03-30 Thread
m in > milliseconds. > > On Thu, Mar 10, 2011 at 1:14 PM, 陈加俊 wrote: > > > Thank you I used HBase-0.20.6. > > > > On Thu, Mar 10, 2011 at 1:26 AM, Suraj Varma > wrote: > > > > > What HBase version are you on? > > > If you are on 0.90+ there is

Re: I lost the table '.META.', could I restore the other tables from the files in HDFS

2011-03-30 Thread
FS doc > > Good luck, > Cosmin > > On Mar 30, 2011, at 10:06 AM, 陈加俊 wrote: > > > Some body execute the command : /app/cloud/hadoop/bin/hadoop fs -rmr > > /hbase/.META. > > > > So the regions of all tables is lost ,Can I rebuild the tables by hdfs > files > > ? > > > > -- > > Thanks & Best regards > > jiajun > > -- Thanks & Best regards jiajun

I lost the table '.META.', could I restore the other tables from the files in HDFS

2011-03-30 Thread
Somebody executed the command: /app/cloud/hadoop/bin/hadoop fs -rmr /hbase/.META. So the regions of all tables are lost. Can I rebuild the tables from the HDFS files? -- Thanks & Best regards jiajun

Re: serverAddress is wrong after copying files with distcp

2011-03-28 Thread
HBase-0.20.6 On Tue, Mar 29, 2011 at 1:27 AM, Jean-Daniel Cryans wrote: > Which HBase version? > > J-D > > On Mon, Mar 28, 2011 at 2:14 AM, 陈加俊 wrote: > > A cluster is 192.168.5.144 ... 192.168.5.157 > > b cluster is 192.168.0.181 ... 192.168.0.185 > > &g

serverAddress is wrong after copying files with distcp

2011-03-28 Thread
Cluster A is 192.168.5.144 ... 192.168.5.157 and cluster B is 192.168.0.181 ... 192.168.0.185. I copied the HDFS files (/hbase/cjjProgramm) from A to B with distcp, and copied the rows whose prefix is cjjProgramm in .META. from A to B. It is strange! It worked fine on the first day, but it is broken now: org.apa

Re: HMaster startup is very slow, and always run into out-of-memory issue

2011-03-10 Thread
copy your hbase-site.xml here On Thu, Mar 10, 2011 at 3:01 PM, 茅旭峰 wrote: > It seems like there are lots of WAL files in .logs and .oldlogs > directories. > Is there any parameter to control > the size of those WAL files? Or the frequency at which to check the WAL > files. > > Thanks a lot! > >

Re: How to copy HTable from one cluster to another cluster ?

2011-03-09 Thread
/CopyTable.html > --Suraj > > On Wed, Mar 9, 2011 at 2:27 AM, 陈加俊 wrote: > > > How to copy HTable from one cluster to another cluster ? > > > > The table is very big . > > > > -- > > Thanks & Best regards > > jiajun > > > -- Thanks & Best regards jiajun

Re: How to copy HTable from one cluster to another cluster ?

2011-03-09 Thread
11:38 AM, 陈加俊 wrote: > /app/cloud/hadoop/bin/hadoop distcp hdfs:// > master.uc.uuwatch.com:9000/hbase/cjjProgramme hdfs:// > 192.168.0.181:9000/hbase/ > 11/03/10 11:36:24 INFO tools.DistCp: srcPaths=[hdfs:// > master.uc.uuwatch.com:9000/hbase/cjjProgramme] > 11/03/10 11:36:24

Re: How to copy HTable from one cluster to another cluster ?

2011-03-09 Thread
:9000/hbase 11/03/10 11:36:27 INFO tools.DistCp: srcCount=3180 11/03/10 11:36:28 INFO mapred.JobClient: Running job: job_201012131904_0001 11/03/10 11:36:29 INFO mapred.JobClient: map 0% reduce 0% and master.uc.uuwatch.com -> 192.168.5.151 On Thu, Mar 10, 2011 at 11:23 AM, 陈加俊 wrote: > T

Re: How to copy HTable from one cluster to another cluster ?

2011-03-09 Thread
dpnn2:/hbase/TABLE_NAME > > 2. Import the meta data from hbase1 '.META.' table where the row's prefix > is > TABLE_NAME to hbase2 '.META.' > > > > On Wed, Mar 9, 2011 at 6:27 PM, 陈加俊 wrote: > > > How to copy HTable from one cluster to another

How to copy HTable from one cluster to another cluster ?

2011-03-09 Thread
How do I copy an HTable from one cluster to another cluster? The table is very big. -- Thanks & Best regards jiajun

Re: region servers dying - flush request - YCSB

2011-03-08 Thread
Had the HTable been disabled when you hit Ctrl+C? 2011/3/8, M.Deniz OKTAR : > Something new came up! > > I tried to truncate the 'usertable' which had ~12M entries. > > Shell stayed at "disabling table" for a long time. The process was there > but there were no requests. So I quit the state by ctrl-c. > > Then t

Re: I can't get many versions of the specified column,but only get the latest version of the specified column

2011-03-06 Thread
coded timestamps from your test case and your test will start to work > as you expected. > > Thanks, > > -- > Tatsuya Kawano > Tokyo, Japan > > > On Feb 24, 2011, at 5:01 PM, 陈加俊 wrote: > > It will be right if according to the following process: > > > &

Re: min, max

2011-03-01 Thread
Yes, I do it like this. But I have another problem: I can't count the rows of one table quickly. On Wed, Mar 2, 2011 at 12:58 PM, Ted Yu wrote: > Weishung: > For max, you can enumerate the regions for your table. Start the scan from > the first row in the last region. > > On Tue, Mar 1, 2011 at 8:51
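
On the side question of counting rows quickly: the common answer at the time was the RowCounter MapReduce job shipped with HBase, but a plain client-side count can also be made much cheaper by asking for only the first cell of each row. A rough sketch, assuming the 0.90-era client API (the table name is only an example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;

public class RowCountSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "cjjHTML");       // example table name
    Scan scan = new Scan();
    scan.setCaching(1000);                            // fetch many rows per RPC
    scan.setFilter(new FirstKeyOnlyFilter());         // return only the first cell of each row
    ResultScanner scanner = table.getScanner(scan);
    long rows = 0;
    try {
      for (Result r = scanner.next(); r != null; r = scanner.next()) {
        rows++;
      }
    } finally {
      scanner.close();
      table.close();
    }
    System.out.println("rows=" + rows);
  }
}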

Re: java.lang.OutOfMemoryError: Java heap space

2011-02-28 Thread
ch of the heap their indices occupy. How much is it? Is it > a significant portion of your heap (See HBASE-3551 for more on what > I'm talking about)? > > Can you give hbase more heap? > > St.Ack > > > On Mon, Feb 28, 2011 at 12:06 AM, 陈加俊 wrote: > > My c

java.lang.OutOfMemoryError: Java heap space

2011-02-28 Thread
My cluster has 12 regionservers and the HBase version is 0.20.6. AverageLoad: 856.4 Dead: 0 Live Servers: 12 1-> 192.168.5.152:60020 [requests=86, regions=856, usedHeap=2763, maxHeap=2991] 2-> 192.168.5.146:60020 [requests=48, regions=855, usedHeap=2898, maxHeap=2995] 3-> 192.168.5.147

Re: NativeException: java.io.IOException: Unable to enable table

2011-02-24 Thread
RN org.apache.hadoop.hbase.master.BaseScanner: Region is split but not offline: cjjHTML, http://www.feelcars.com/20101217/c200607508.shtml,1292862148834 On Fri, Feb 25, 2011 at 2:51 PM, 陈加俊 wrote: > hbase(main):004:0> debug > NameError: undefined local variable or method `debug'

Re: NativeException: java.io.IOException: Unable to enable table

2011-02-24 Thread
but not offline: cjjHTML, http://wfwb.wfnews.com.cn/html/2010-11/26/node_62.htm,1291148553863 My question is : Is there any problem ? On Fri, Feb 25, 2011 at 2:51 PM, 陈加俊 wrote: > hbase(main):004:0> debug > NameError: undefined local variable or method `debug' for > # > > O

Re: NativeException: java.io.IOException: Unable to enable table

2011-02-24 Thread
ode is ON > > St.Ack > > On Thu, Feb 24, 2011 at 7:20 PM, 陈加俊 wrote: > > I want to alter table 'cjjHTML' ,and do ti as follows,but throw IOE > > > > Version: 0.20.6, r965666, Mon Jul 19 16:54:48 PDT 2010 > > hbase(main):001:0> disable 'cj

NativeException: java.io.IOException: Unable to enable table

2011-02-24 Thread
I want to alter the table 'cjjHTML' and did it as follows, but it throws an IOE. Version: 0.20.6, r965666, Mon Jul 19 16:54:48 PDT 2010 hbase(main):001:0> disable 'cjjHTML' 0 row(s) in 210.5140 seconds hbase(main):002:0> alter 'cjjHTML', {NAME => 'responseHeader' , VERSIONS => 1 , METHOD => 'add'} 0 row(s) in

Re: I can't get many versions of the specified column,but only get the latest version of the specified column

2011-02-24 Thread
Thank you ! HBase version is 0.20.6 On Thu, Feb 24, 2011 at 6:33 PM, Tatsuya Kawano wrote: > > Hmmm, it's strange. Let me try your code on my cluster this weekend. What > the HBase version are you using? > > -- > Tatsuya Kawano > Tokyo, Japan > > > On F

Re: I can't get many versions of the specified column,but only get the latest version of the specified column

2011-02-24 Thread
PE? > > As Tatsuya pointed out, you are using the same time stamps: > > private final long ts2 = ts1 + 100; > > private final long ts3 = ts1 + 100; > > That cannot work, you are overriding cells. > > Lars > > On Thu, Feb 24, 2011 at 8:34 AM, 陈加俊

Re: I can't get many versions of the specified column,but only get the latest version of the specified column

2011-02-23 Thread
ble.close(); } On Thu, Feb 24, 2011 at 3:26 PM, Ryan Rawson wrote: > Does the HTable object have setAutoFlush(false) turned on by any chance? > > On Wed, Feb 23, 2011 at 11:22 PM, 陈加俊 wrote: > > line 89:final NavigableMap> > > familyMap = map.get(family); &

Re: How to limit the number of logs produced by DailyRollingFileAppender

2011-02-23 Thread
t; > > Since this is a log4j question, you could get better answer if you ask your > question at log4j-user mailing list. > > Thanks, > > -- > Tatsuya Kawano > Tokyo, Japan > > > On Feb 17, 2011, at 10:17 AM, 陈加俊 < > cjjvict...@gmail.com> wrote: > >

Re: I can't get many versions of the specified column,but only get the latest version of the specified column

2011-02-23 Thread
; > vs: > assertTrue(versionMap.size() == 3); > > since the error messages from the former are more descriptive > "expected 3 was 2". > > looking at the code it looks like it should work... > > On Wed, Feb 23, 2011 at 11:07 PM, 陈加俊 w

Re: I can't get many versions of the specified column,but only get the latest version of the specified column

2011-02-23 Thread
ull descriptions of your tables, debugging is > > harder than it needs to be. It's probably a simple typo or something, > > check your code and table descriptions again. Many people rely on the > > multi version query capabilities and it is very unlikely to be broken > > in

Re: I can't get many versions of the specified column,but only get the latest version of the specified column

2011-02-23 Thread
your tables, debugging is > harder than it needs to be. It's probably a simple typo or something, > check your code and table descriptions again. Many people rely on the > multi version query capabilities and it is very unlikely to be broken > in a released version of hbase. > > O

Re: I can't get many versions of the specified column,but only get the latest version of the specified column

2011-02-23 Thread
testcase in Hbase? On Thu, Feb 24, 2011 at 9:56 AM, 陈加俊 wrote: > /** >* Create a sorted list of the KeyValue's in this result. >* >* @return The sorted list of KeyValue's. >*/ > public List list() { > if(this.kvs == null) { > readFiel

Re: I can't get many versions of the specified column,but only get the latest version of the specified column

2011-02-23 Thread
h! On Thu, Feb 24, 2011 at 9:45 AM, Buttler, David wrote: > Result.list() ? > Putting the hbase source into your IDE of choice (yay Eclipse!) is really > helpful > > Dave > > > -----Original Message- > From: 陈加俊 [mailto:cjjvict...@gmail.com] > Sent: Wednesd

Re: I can't get many versions of the specified column,but only get the latest version of the specified column

2011-02-23 Thread
What is your table schema set to? By default it holds 3 versions. > Also, you might try iterating over KeyValues instead of using the Map since you > don't really care about the organization, just the time. > > Dave > > -Original Message- > From: 陈加俊 [mailto:cjjvic

Re: I can't get many versions of the specified column,but only get the latest version of the specified column

2011-02-23 Thread
,right? On Thu, Feb 24, 2011 at 2:06 AM, Stack wrote: > What do you get for a result? > > You are only entering a single version of each column, a single > version of FAMILY:q1, a single version FAMILY:q2, and a FAMILY:q3. > > St.Ack > > On Wed, Feb 23, 2011 at 2:54 AM, 陈加俊

I can't get many versions of the specified column,but only get the latest version of the specified column

2011-02-23 Thread
I can't get many versions of the specified column; I only get the latest version. Can anyone help me? //put data by version final Put p = new Put(key); // key final long ts = System.currentTimeMillis(); p.add(FAMILY, q1, ts,v1); p.add(FAMILY, q2, ts,v
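
As the replies above point out, the test wrote several cells under the same timestamp, so they overwrote one another. A minimal sketch of writing and reading back three versions, assuming the 0.90-era client API (table and column names are invented; the family must have been created with VERSIONS of at least 3):

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class VersionsSketch {
  public static void main(String[] args) throws Exception {
    byte[] row = Bytes.toBytes("row1");
    byte[] family = Bytes.toBytes("f");      // placeholder family name
    byte[] qualifier = Bytes.toBytes("q1");

    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "versions_test");
    long ts = System.currentTimeMillis();

    // Three puts on the same cell, each with a *different* timestamp.
    for (int i = 0; i < 3; i++) {
      Put p = new Put(row);
      p.add(family, qualifier, ts + i, Bytes.toBytes("value-" + i));
      table.put(p);
    }

    // Ask for all stored versions, not just the newest one.
    Get g = new Get(row);
    g.addColumn(family, qualifier);
    g.setMaxVersions(3);
    Result result = table.get(g);
    List<KeyValue> kvs = result.list();      // newest version first per column
    for (KeyValue kv : kvs) {
      System.out.println(kv.getTimestamp() + " -> " + Bytes.toString(kv.getValue()));
    }
    table.close();
  }
}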

Re: How to limit the number of logs produced by DailyRollingFileAppender

2011-02-16 Thread
"overrun". Delete them manually? On Wed, Feb 16, 2011 at 5:56 PM, Tatsuya Kawano wrote: > Hi, > > On 02/16/2011, at 4:51 PM, 陈加俊 wrote: > > How to limit the number of logs that producted by > DailyRollingFileAppender ? > > > > I find the logs are exceeding

How to limit the number of logs produced by DailyRollingFileAppender

2011-02-15 Thread
How do I limit the number of logs produced by DailyRollingFileAppender? I find the logs are exceeding the disk space limit.
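
Stock log4j 1.2's DailyRollingFileAppender has no option to cap the number of rolled files, so the usual workaround is either a cron job that deletes old logs or switching HBase's appender to the size-based RollingFileAppender. A sketch of the latter in conf/log4j.properties, assuming the DRFA appender name used by HBase's bundled config (the sizes and pattern are only examples):

# Swap the date-based appender for a size-capped one.
log4j.appender.DRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}
log4j.appender.DRFA.MaxFileSize=256MB
log4j.appender.DRFA.MaxBackupIndex=20
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n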

Re: Need to have hbase-site.xml in hadoop conf dir?

2011-02-13 Thread
If I have a new jar built by myself, how should I add extra Java CLASSPATH elements? On Sat, Feb 12, 2011 at 3:04 PM, Ryan Rawson wrote: > we include $HBASE_HOME/conf on the HADOOP_CLASSPATH in hadoop-env.sh. > > It goes like this: > > > export HBASE_HOME=/home/hadoop/hbase > JAR=`ls $HBASE_HOME/*.j

Re: Designing table with auto increment key

2011-02-13 Thread
That's what I did and it works fine! On Mon, Feb 14, 2011 at 2:10 AM, Something Something < mailinglist...@gmail.com> wrote: > Hello, > > Can you please tell me if this is the proper way of designing a table > that's > got an auto increment key? If there's a better way please let me know that > as

Re: How to improve the speed of HTable scan

2011-01-28 Thread
then a new > RPC that is pre-fetching a bunch of rows? > > St.Ack > > On Tue, Jan 25, 2011 at 9:36 PM, 陈加俊 wrote: > > Thank you ! > > > > But why the second (and subsequent ones) that getScanner and first next > is > > too slowly? I think the second (and subs

Re: How to improve the speed of HTable scan

2011-01-25 Thread
next 1.02ms > > next 1.28ms > > next 0.94ms > > next 1.35ms > > next 0.86ms > > next 0.86ms > > next 0.88ms > > next 0.83ms > > next 0.92ms > > next 0.92ms > > next 1.09ms > > next 0.91ms > > ... > > > &

Re: How to improve the speed of HTable scan

2011-01-25 Thread
> caching. Make sure your data can fit in the block cache and that it > stays there. > > J-D > > On Tue, Jan 25, 2011 at 2:35 AM, 陈加俊 wrote: > > final Scan scan = new Scan(); > > scan.setCaching(scannerCaching); > > scan.addColumn(family); > > > &g

How to scan different regions of one table and get different rows?

2011-01-25 Thread
One program scans from regions[0].startKey and stops at regions[0].endKey, and another program scans from regions[1].startKey and stops at regions[1].endKey. Each program gets rows with its scan and then deletes each row immediately. My question is: will the two programs get the same ro
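
A sketch of the setup being described: one scanner per region, bounded by that region's start and end keys. Because regions cover disjoint row ranges, the two workers will not see the same row. This assumes the 0.90-era client, where HTable exposes the region map (the table name is only an example):

import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.HServerAddress;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class PerRegionScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "cjjHTML");   // example table name
    // Each region covers a disjoint [startKey, endKey) range, so two workers
    // scanning different regions never see the same row.
    Map<HRegionInfo, HServerAddress> regions = table.getRegionsInfo();
    for (HRegionInfo region : regions.keySet()) {
      Scan scan = new Scan(region.getStartKey(), region.getEndKey());
      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result r = scanner.next(); r != null; r = scanner.next()) {
          // process (and, as in the post, optionally delete) the row here
        }
      } finally {
        scanner.close();
      }
    }
    table.close();
  }
}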

How to improve the speed of HTable scan

2011-01-25 Thread
final Scan scan = new Scan(); scan.setCaching(scannerCaching); scan.addColumn(family); table.getScanner(scan); To improve the speed of the scan, how should I adjust the parameters? Are there any other parameters or methods that I don't know about?
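
For reference, the knobs discussed in the replies above are scanner caching (how many rows come back per next() RPC, which defaulted to 1 in this era), restricting the scan to only the data needed, and keeping the working set inside the block cache. A short sketch, assuming the 0.90-era client API (the table, family, and caching value are just examples):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanTuningSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "cjjHTML");      // example table name
    Scan scan = new Scan();
    scan.setCaching(500);                  // rows returned per next() RPC instead of 1
    scan.addFamily(Bytes.toBytes("page")); // restrict the scan to the one family needed
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result r = scanner.next(); r != null; r = scanner.next()) {
        // process the row; keep this loop fast to avoid ScannerTimeoutException
      }
    } finally {
      scanner.close();                     // always release the server-side scanner
      table.close();
    }
  }
}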

Re: installation question

2011-01-24 Thread
t with 'ant package'. > > St.Ack > > On Sun, Jan 23, 2011 at 11:13 PM, 陈加俊 wrote: > > anyone can tell me how to checkout and merge step by step? > > http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/ > > > > > > On Sun, Ja

Re: installation question

2011-01-23 Thread
Can anyone tell me how to check out and merge, step by step? http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/ On Sun, Jan 23, 2011 at 3:14 AM, Stack wrote: > http://hbase.apache.org/notsoquick.html#hadoop > Yours, > St.Ack > > On Sat, Jan 22, 2011 at 11:07 AM, John Smith w

Re: cannot load Java class org.apache.hadoop.hbase.regionserver.HLogEdit (NameError)

2011-01-13 Thread
class org.apache.hadoop.hbase.regionserver.HLogEdit (NameError) from file:/home/uuwatch/hbase-0.20.6/lib/jruby-complete-1.2.0.jar!/META-INF/jruby.home/lib/ruby/site_ruby/1.8/builtin/javasupport/java.rb:51:in `method_missing' from bin/copy_table.rb:40 On Fri, Jan 14, 2011 at 10:30 AM, 陈加俊 wrote: &

cannot load Java class org.apache.hadoop.hbase.regionserver.HLogEdit (NameError)

2011-01-13 Thread
I want to rename the table, but it fails with the following errors: ./bin/hbase org.jruby.Main bin/rename_table.rb t1 t2 file:/app/setup/cloud/hbase-0.20.6/lib/jruby-complete-1.2.0.jar!/META-INF/jruby.home/lib/ruby/site_ruby/1.8/builtin/javasupport/core_ext/object.rb:33:in `get_proxy_or_package_under_package':

Re: How to avoid this problem: connection to /192.168.5.154:60020 from an unknown user

2011-01-12 Thread
> log4j.additivity.org.apache.hadoop.ipc=false > > > --gh > > On Wed, Jan 12, 2011 at 5:59 PM, 陈加俊 wrote: > > > How to avoid this problem: > > > > [2011-01-13 09:47:19 DEBUG]- IPC Client (47) connection to / > > 192.168.5.154:60020 from an unknown user sending
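
For completeness, the quoted additivity line usually goes together with raising the logger's level, so the client-side log4j.properties stanza might look something like this (the INFO level is an assumption; the additivity line is the one from the reply):

# Quiet the per-connection DEBUG output from the Hadoop IPC client.
log4j.logger.org.apache.hadoop.ipc=INFO
# As in the reply above; keeps these events from also flowing to the root logger's appenders.
log4j.additivity.org.apache.hadoop.ipc=false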

How to avoid this problem: connection to /192.168.5.154:60020 from an unknown user

2011-01-12 Thread
How to avoid this problem: [2011-01-13 09:47:19 DEBUG]- IPC Client (47) connection to / 192.168.5.154:60020 from an unknown user sending #6 [2011-01-13 09:47:19 DEBUG]- IPC Client (47) connection to / 192.168.5.154:60020 from an unknown user: starting, having connections 3 [2011-01-13 09:47:19 DEB

java.net.SocketException: Too many open files

2011-01-11 Thread
I set the env as follows: $ ulimit -n 65535 $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 63943 max locked memory (

Re: How to look up the IPs that are connecting to the RS?

2011-01-10 Thread
client-server's using port. > > > > > On 1/10/11, 陈加俊 wrote: > > There is many programs that connected the RS and insert or update the > data > > of some table. I stopped all the program now,but tha data of one table > is > > growing, I can't find the prog

How to look up the IPs that are connecting to the RS?

2011-01-09 Thread
There are many programs that connect to the RS and insert or update the data of some tables. I have stopped all of the programs now, but the data of one table is still growing and I can't find the program that is running. So my question is: how do I look up the IPs that are connecting to the RS?

How to rename a table's family name

2011-01-07 Thread
Hi everyone! How do I rename a table's family name? I created the table and its families and inserted a lot of data into it, but now I want to rename one family name without losing data.
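
There is no in-place rename of a column family; the usual route is to add the new family, copy the cells across (keeping their timestamps), and then drop the old family once the copy is verified. A rough sketch of that, assuming the 0.90-era client API (table and family names are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class RenameFamilySketch {
  public static void main(String[] args) throws Exception {
    String tableName = "mytable";                 // placeholder
    byte[] oldFamily = Bytes.toBytes("oldfam");   // placeholder
    byte[] newFamily = Bytes.toBytes("newfam");   // placeholder

    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    // 1. Add the new family (schema changes require the table to be offline).
    admin.disableTable(tableName);
    admin.addColumn(tableName, new HColumnDescriptor(newFamily));
    admin.enableTable(tableName);

    // 2. Copy every cell of the old family into the new one, keeping timestamps.
    HTable table = new HTable(conf, tableName);
    Scan scan = new Scan();
    scan.addFamily(oldFamily);
    scan.setMaxVersions();   // fetch all stored versions, not just the latest
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result r = scanner.next(); r != null; r = scanner.next()) {
        Put put = new Put(r.getRow());
        for (KeyValue kv : r.raw()) {
          put.add(newFamily, kv.getQualifier(), kv.getTimestamp(), kv.getValue());
        }
        table.put(put);
      }
    } finally {
      scanner.close();
      table.close();
    }

    // 3. Drop the old family once the copy has been verified.
    admin.disableTable(tableName);
    admin.deleteColumn(tableName, Bytes.toString(oldFamily));
    admin.enableTable(tableName);
  }
}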

Re: Is there any patch for this? ERROR org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction failed for region

2010-12-20 Thread
:19 PM, 陈加俊 wrote: > I unzipped from the hbase-0.20.6.tar.gz file. > > > On Tue, Dec 21, 2010 at 6:12 AM, Ryan Rawson wrote: > >> Generally speaking, method not found errors are build and deployment >> errors. >> Are you using a stock hbase tar ball? >>

Re: Strange, suddenly shell cannot use

2010-12-20 Thread
, usedHeap=2601, maxHeap=2995 slave155.uc.uuwatch.com:60020 1292238605510 requests=231, regions=299, usedHeap=, maxHeap=2991 0 dead servers On Tue, Dec 21, 2010 at 2:01 PM, 陈加俊 wrote: > >1) Switch JDK back to IcedTea and see if shell works now - switch it back > to > >Sun

Re: Is there any patch for this? ERROR org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction failed for region

2010-12-20 Thread
I unzipped from the hbase-0.20.6.tar.gz file. On Tue, Dec 21, 2010 at 6:12 AM, Ryan Rawson wrote: > Generally speaking, method not found errors are build and deployment > errors. > Are you using a stock hbase tar ball? > On Dec 19, 2010 5:54 PM, "陈加俊" wrote: > > H
