Re: HBase & MapReduce & Zookeeper

2011-07-20 Thread Andre Reiter
Hi Ted, thanks for the reply. At the moment I'm just wondering why the client creates a zookeeper connection at all. All the client has to do is schedule an MR job, which is done by connecting to the jobtracker and providing all the needed stuff: config, some extra resources in the distrib
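For context, a minimal sketch of the kind of client-side job submission being discussed; the table name and mapper are placeholders, and the API is the 0.90.x-era TableMapReduceUtil:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

    public class SubmitJob {

      // hypothetical mapper, only here to make the sketch self-contained
      static class MyMapper extends TableMapper<NullWritable, NullWritable> {
        @Override
        protected void map(ImmutableBytesWritable row, Result value, Context ctx) {
          // process the row here
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "my-mr-job");
        job.setJarByClass(SubmitJob.class);
        Scan scan = new Scan(); // full-table scan, just for the example
        TableMapReduceUtil.initTableMapperJob("my_table", scan, MyMapper.class,
            NullWritable.class, NullWritable.class, job);
        job.setOutputFormatClass(NullOutputFormat.class);
        job.setNumReduceTasks(0);
        // The ZooKeeper connection Andre is asking about gets opened on this
        // client machine while the job is set up and submitted.
        job.waitForCompletion(true);
      }
    }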

Re: HBase & MapReduce & Zookeeper

2011-07-20 Thread Ted Yu
This seems to be CDH related. Regarding ">> either the map HConnectionManager.HBASE_INSTANCES does not contain the connection for the current config": you need to pass the same conf object. In trunk, I added the following: public static void deleteStaleConnection(HConnection connection) { See http://zhihon
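A hedged sketch of what "pass the same conf object" means in practice; the table name is a placeholder, the API is 0.90.x, and the deleteStaleConnection variant only exists in builds that picked up Ted's trunk change:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HConnectionManager;
    import org.apache.hadoop.hbase.client.HTable;

    public class SameConfExample {
      public static void main(String[] args) throws Exception {
        // Create the Configuration once and hand the very same instance to
        // everything that talks to HBase; HConnectionManager caches the
        // connection under that instance.
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "my_table");
        // ... use the table, set up the MR job, etc. ...
        table.close();
        // Because the identical conf instance is passed here, the cached
        // connection (and its ZooKeeper session) can be found and released.
        HConnectionManager.deleteConnection(conf, true);
      }
    }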

Re: HBase & MapReduce & Zookeeper

2011-07-20 Thread Andre Reiter
Unfortunately there was no such LOG entry... :-( Our versions: hadoop-0.20.2-CDH3B4, hbase-0.90.1-CDH3B4, zookeeper-3.3.2-CDH3B4. Either the map HConnectionManager.HBASE_INSTANCES does not contain the connection for the current config, or HConnectionImplementation.zooKeeper is null but the zooke

Re: HBase & MapReduce & Zookeeper

2011-07-20 Thread Ted Yu
Andre: So you didn't see the following in the client log (HConnectionManager line 1067)? LOG.info("Closed zookeeper sessionid=0x" + Long.toHexString(this.zooKeeper.getZooKeeper().getSessionId())); HConnectionManager.deleteConnection(conf, true) is supposed to close the zk connection in 0

Re: HBase & MapReduce & Zookeeper

2011-07-20 Thread Andre Reiter
Hi St.Ack, actually calling HConnectionManager.deleteConnection(conf, true); does not close the connection to the zookeeper; I can still see the connection established... andre Stack wrote: Then similarly, can you do the deleteConnection above in your client or reuse the Configuration client

Re: HBase & MapReduce & Zookeeper

2011-07-20 Thread Stack
Then similarly, can you do the deleteConnection above in your client, or reuse the Configuration client-side that you use when setting up the job? St.Ack On Wed, Jul 20, 2011 at 12:13 AM, Andre Reiter wrote: > Hi Stack, > > just to make clear, actually the connections to the zookeeper being kept are >
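One way to read Stack's suggestion, sketched under the assumption that the connection got cached under the configuration instance actually handed to the job (Job keeps its own copy of the Configuration, so job.getConfiguration() may be the instance HConnectionManager knows about); deleteAllConnections is the blunter fallback:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HConnectionManager;
    import org.apache.hadoop.mapreduce.Job;

    public class ClientSideCleanup {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "my-mr-job");
        // ... TableMapReduceUtil.initTableMapperJob(...) and other job setup ...
        job.waitForCompletion(true);

        // Try dropping the connection cached under the configuration the job
        // actually used (Job copies the conf passed to its constructor):
        HConnectionManager.deleteConnection(job.getConfiguration(), true);
        // Blunter fallback if the right instance cannot be pinned down:
        // HConnectionManager.deleteAllConnections(true);
      }
    }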

Re: HBase & MapReduce & Zookeeper

2011-07-20 Thread Andre Reiter
Hi Stack, just to make clear: actually the connections to the zookeeper being kept open are not on our mappers (tasktrackers) but on the client, which schedules the MR job. I think the mappers are just fine as they are. andre Stack wrote: Can you reuse Configuration instances though the "configu

Re: HBase & MapReduce & Zookeeper

2011-07-20 Thread Stack
Can you reuse Configuration instances though the "configuration" changes? Else, in your Mapper#cleanup, call HTable.close(), then try HConnectionManager.deleteConnection(table.getConfiguration()) after close (could be an issue with executors used by multi* operations not completing before delete of con
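A sketch of the Mapper#cleanup idea; the side table, family and qualifier names are placeholders, and the two-argument deleteConnection form is used because that is the one discussed elsewhere in this thread:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.client.HConnectionManager;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.NullWritable;

    public class CleanupMapper extends TableMapper<NullWritable, NullWritable> {
      private HTable sideTable;

      @Override
      protected void setup(Context context) throws IOException {
        // a table opened directly in the mapper, outside TableOutputFormat
        sideTable = new HTable(context.getConfiguration(), "side_table");
      }

      @Override
      protected void map(ImmutableBytesWritable row, Result value, Context context)
          throws IOException {
        Put put = new Put(row.get());
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        sideTable.put(put);
      }

      @Override
      protected void cleanup(Context context) throws IOException {
        Configuration conf = sideTable.getConfiguration();
        sideTable.close();                               // flush and close first
        HConnectionManager.deleteConnection(conf, true); // then drop the cached connection
      }
    }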

Re: HBase & MapReduce & Zookeeper

2011-07-19 Thread Andre Reiter
Hi St.Ack, thanks for your reply, but finally I miss the point: what would be the options to solve our issue? andre

Re: HBase & MapReduce & Zookeeper

2011-07-19 Thread Stack
Configuration is not Comparable. It's instance identity that is used when comparing Configurations down in the guts of HConnectionManager in 0.90.x hbase, so even if you reuse a Configuration and tweak it per job, as far as HCM is concerned it's the 'same'. Are you seeing otherwise? St.Ack On Tue, Jul
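To illustrate the instance-identity point: Hadoop's Configuration does not override equals/hashCode, so a map keyed on Configuration instances (which is roughly how the 0.90.x connection cache behaves, per Stack's description) treats two instances with identical contents as different keys. A small, purely illustrative demo:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class IdentityDemo {
      public static void main(String[] args) {
        Configuration a = HBaseConfiguration.create();
        Configuration b = HBaseConfiguration.create(); // same contents, new instance

        Map<Configuration, String> cache = new HashMap<Configuration, String>();
        cache.put(a, "connection-for-a");

        System.out.println(cache.containsKey(a)); // true: same instance
        System.out.println(cache.containsKey(b)); // false: a second entry (and a
                                                  // second ZK connection) would be
                                                  // created for this instance
      }
    }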

Re: HBase & MapReduce & Zookeeper

2011-07-19 Thread Andre Reiter
Hi Doug, thanks a lot for the reply. It's clear that there is a parameter for maxClientCnxns, which is 10 by default; of course I could increase it to something big. But like I said, the old connections are still there, and I cannot imagine that it is correct behaviour to leave them open (establish
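For reference, the limit Andre mentions is ZooKeeper's per-client-IP maxClientCnxns. For an external quorum it is raised in zoo.cfg; when HBase manages ZooKeeper itself there is an equivalent hbase.zookeeper.property.* setting. A sketch only, since the thread's real point is that the leaked connections are the problem, not the limit:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RaiseZkLimit {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Only takes effect when HBase manages the ZooKeeper ensemble; for an
        // external quorum set maxClientCnxns in zoo.cfg instead.
        conf.setInt("hbase.zookeeper.property.maxClientCnxns", 30);
        System.out.println(conf.get("hbase.zookeeper.property.maxClientCnxns"));
      }
    }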

Re: HBase & MapReduce & Zookeeper

2011-07-19 Thread Doug Meil
Hi there. Re: "that we have to reuse the Configuration object": you are probably referring to this... http://hbase.apache.org/book.html#client.connections ... yes, that is general guidance on client connections. Re: "do i have to create a pool of Configuration objects, to share them synchron
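The guidance Doug links to boils down to sharing one Configuration across the application (and, if many threads need table handles, something like HTablePool) instead of creating a Configuration per request. A hedged sketch of that pattern with placeholder names, using the 0.90.x HTablePool API:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTableInterface;
    import org.apache.hadoop.hbase.client.HTablePool;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SharedHBaseClient {
      // one Configuration for the whole application, shared by all threads
      private static final Configuration CONF = HBaseConfiguration.create();
      private static final HTablePool POOL = new HTablePool(CONF, 10);

      public static Result fetch(String row) throws IOException {
        HTableInterface table = POOL.getTable("my_table"); // placeholder table name
        try {
          return table.get(new Get(Bytes.toBytes(row)));
        } finally {
          POOL.putTable(table); // return the handle to the pool instead of closing it
        }
      }
    }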

HBase & MapReduce & Zookeeper

2011-07-19 Thread Andre Reiter
Hi folks, I'm running into an interesting issue: we have a zookeeper cluster running on 3 servers, and we run mapreduce jobs using org.apache.hadoop.conf.Configuration to pass parameters to our mappers. The string-based (key/value) approach is imho not the most elegant way, I would prefer to however p
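For readers following along, the string key/value mechanism Andre refers to looks roughly like this; the parameter name is made up for the example:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.io.NullWritable;

    public class ParamMapper extends TableMapper<NullWritable, NullWritable> {
      private long reportDate;

      // Driver side (in the job-submission code):
      //   conf.setLong("report.date", someTimestamp);
      //   ...then build the Job from that conf as usual.

      @Override
      protected void setup(Context context) {
        // Mapper side: read the parameter back out of the job configuration.
        Configuration conf = context.getConfiguration();
        reportDate = conf.getLong("report.date", 0L);
      }

      @Override
      protected void map(ImmutableBytesWritable row, Result value, Context context)
          throws IOException {
        // use reportDate while processing the row
      }
    }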

Re: something wrong with hbase mapreduce

2010-12-06 Thread Lars George
>> Try this: 1. Do the MR job 2. Do the delete from the shell 3. Check that it was deleted from the shell 4. Run a major compaction of the table on the shell (e.g. "major_compact "

Re: something wrong with hbase mapreduce

2010-12-05 Thread 梁景明
>> ...it was deleted from the shell 4. Run a major compaction of the table on the shell (e.g. "major_compact ") 5. Re-run the MR job 6. Check if the value is there again. And finally let us know here :)

Re: something wrong with hbase mapreduce

2010-12-03 Thread Lars George
>> And finally let us know here :) Lars >> On Thu, Dec 2, 2010 at 2:48 AM, 梁景明 wrote: >>> 0.20.6 2010/12/2 Lars George >>>> What version of HBase are you using?

Re: something wrong with hbase mapreduce

2010-12-02 Thread 梁景明
...1, 2010, at 9:24, 梁景明 wrote: >> i found that if i didn't control the timestamp of the put, mapreduce can run, otherwise just one time mapreduce. the question is i scan by timestamp to get my data so to put timestam

Re: something wrong with hbase mapreduce

2010-12-02 Thread Lars George
>> the question is i scan by timestamp to get my data, so to put timestamp is my scan thing. any ideas? thanks. 2010/12/1 梁景明 >>> Hi, i found a problem in my hbase mapreduce case.

Re: something wrong with hbase mapreduce

2010-12-01 Thread 梁景明
>> ...timestamp to get my data, so to put timestamp is my scan thing. any ideas? thanks. 2010/12/1 梁景明 >>> Hi, i found a problem in my hbase mapreduce case. when first running mapreduce TableMapReduceUtil run

Re: something wrong with hbase mapreduce

2010-12-01 Thread Lars George
> ...any ideas? thanks. 2010/12/1 梁景明 >> Hi, i found a problem in my hbase mapreduce case. when first running mapreduce, TableMapReduceUtil runs ok. and i use hbase shell to delete some data from the table that the mapreduce job used

Re: something wrong with hbase mapreduce

2010-12-01 Thread 梁景明
I found that if I don't set the timestamp of the Put myself, the mapreduce job works every time; otherwise only the first mapreduce run works. The problem is that I scan by timestamp to get my data, so setting the timestamp on the Put is part of my scan logic. Any ideas? Thanks. 2010/12/1 梁景明 > Hi, I found a problem in my hbase mapreduce case. >
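To make the timestamp handling concrete, here is a sketch of putting with an explicit timestamp and scanning by that timestamp; the table, family and qualifier names are placeholders. Note that re-writing a cell at a timestamp at or below an existing delete marker's timestamp stays hidden until a major compaction removes the tombstone, which is consistent with the behaviour described in this thread:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TimestampExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "my_table");

        long ts = 1291161600000L;                  // explicit, caller-chosen timestamp
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), ts, Bytes.toBytes("value"));
        table.put(put);

        // Scan only the cells written within the chosen time range.
        Scan scan = new Scan();
        scan.setTimeRange(ts, ts + 1);
        ResultScanner scanner = table.getScanner(scan);
        try {
          // iterate over the results here
        } finally {
          scanner.close();
          table.close();
        }
      }
    }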

something wrong with hbase mapreduce

2010-12-01 Thread 梁景明
Hi, I found a problem in my hbase mapreduce case. When first running the mapreduce job, TableMapReduceUtil runs ok. Then I use the hbase shell to delete some data from the table that the mapreduce job used. Then I ran mapreduce again to insert some new data: nothing changed, the mapreduce insert didn't work. After that I drop
