Hi Ted,
thanks for the reply.
at the moment i'm just wondering why the client creates a zookeeper connection
at all.
all the client has to do is schedule a MR job, which is done by connecting
to the jobtracker and providing all the needed stuff: config, some extra
resources in the distributed cache.
This seems to be CDH related.
>> either the map HConnectionManager.HBASE_INSTANCES does not contain the
connection for the current config
You need to pass the same conf object.
In trunk, I added the following:
public static void deleteStaleConnection(HConnection connection) {
See
http://zhihon
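Ted's point about passing the same conf object can be sketched like this (not from the thread; a minimal sketch against the 0.90 client API, with made-up table names). In 0.90.x, HConnectionManager caches one HConnection per Configuration instance, so every HTable created from the same conf object shares one zookeeper connection:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

// Sketch: both tables are created from the *same* Configuration
// instance, so HConnectionManager hands back the same cached
// connection instead of opening a second zookeeper session.
Configuration conf = HBaseConfiguration.create();
HTable t1 = new HTable(conf, "table_one");   // hypothetical table names
HTable t2 = new HTable(conf, "table_two");
try {
    // ... use t1 and t2 ...
} finally {
    t1.close();
    t2.close();
}
```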
unfortunately there was no such LOG entry... :-(
our versions:
hadoop-0.20.2-CDH3B4
hbase-0.90.1-CDH3B4
zookeeper-3.3.2-CDH3B4
either the map HConnectionManager.HBASE_INSTANCES does not contain the
connection for the current config, or HConnectionImplementation.zooKeeper is
null
but the zooke
Andre:
So you didn't see the following in client log (HConnectionManager line 1067)?
LOG.info("Closed zookeeper sessionid=0x" +
Long.toHexString(this.zooKeeper.getZooKeeper().getSessionId()));
HConnectionManager.deleteConnection(conf, true) is supposed to close zk
connection in 0
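The teardown call being discussed looks roughly like this in client shutdown code (a sketch against the 0.90 API; the table name is a placeholder). The second argument asks HCM to also stop the connection's proxies and zookeeper resources:

```java
// Sketch: drop the cached connection once the client is done.
// deleteConnection(conf, true) removes the HConnection cached for
// this Configuration instance and should close its zk session.
Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "mytable");  // hypothetical table name
// ... schedule the MR job, read/write ...
table.close();
HConnectionManager.deleteConnection(conf, true);
```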
Hi St.Ack,
actually calling HConnectionManager.deleteConnection(conf, true); does not
close the connection to the zookeeper
i still can see the connection established...
andre
Stack wrote:
Then similarly, can you do the deleteConnection above in your client
or reuse the Configuration client-side that you use setting up the
job?
St.Ack
On Wed, Jul 20, 2011 at 12:13 AM, Andre Reiter wrote:
Hi Stack,
just to make clear, actually the connections to the zookeeper being kept are
not on our mappers (tasktrackers) but on the client, which schedules the MR job
i think, the mappers are just fine, as they are
andre
Stack wrote:
Can you reuse Configuration instances though the "configuration" changes?
Else in your Mapper#cleanup, call HTable.close() then try
HConnectionManager.deleteConnection(table.getConfiguration()) after
close (could be an issue with executors used by multi* operations not
completing before delete of connection).
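Stack's suggestion for the mapper side would look something like this (a sketch; `table` is assumed to be a field opened in `setup()`, and the two-argument `deleteConnection` form from 0.90 is used here):

```java
// Sketch: close the table in Mapper#cleanup, then drop the cached
// connection keyed by the table's Configuration instance.
@Override
protected void cleanup(Context context) throws IOException, InterruptedException {
    Configuration conf = table.getConfiguration();  // grab before close
    table.close();
    HConnectionManager.deleteConnection(conf, true);
}
```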
Hi St.Ack,
thanks for your reply
but finally i miss the point: what would be the options to solve our issue?
andre
Configuration is not Comparable. It's instance identity that is used
when comparing Configurations down in the guts of HConnectionManager in
0.90.x hbase, so even if you reuse a Configuration and tweak it per
job, as far as HCM is concerned it's the 'same'.
Are you seeing otherwise?
St.Ack
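Stack's point can be illustrated with plain Java, no HBase needed: a HashMap keyed by objects that don't override `equals()`/`hashCode()` falls back to instance identity, which is effectively how 0.90.x keys cached connections on the Configuration instance (`FakeConf` below is a stand-in, not a real HBase class):

```java
import java.util.HashMap;
import java.util.Map;

public class IdentityKeyDemo {
    // Stand-in for Configuration: no equals()/hashCode() override,
    // so HashMap compares keys by instance identity.
    static class FakeConf {
        Map<String, String> props = new HashMap<>();
    }

    public static void main(String[] args) {
        Map<FakeConf, String> cache = new HashMap<>();
        FakeConf conf = new FakeConf();
        cache.put(conf, "connection-1");

        // Tweak the same instance per job: still the 'same' key.
        conf.props.put("job", "2");
        System.out.println(cache.get(conf));   // connection-1

        // A fresh instance with identical contents is a different key.
        FakeConf other = new FakeConf();
        other.props.put("job", "2");
        System.out.println(cache.get(other));  // null
    }
}
```

So tweaking one shared conf per job reuses the cached connection, while building a fresh Configuration per job leaks a new one each time.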
Hi Doug,
thanks a lot for the reply.
it's clear that there is a parameter for maxClientCnxns, which is 10 by default.
of course i could increase it to something big, but like i said, the old
connections are still there, and i cannot imagine that it is correct
behaviour to leave them open (establish
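For reference, the limit mentioned here lives in the zookeeper server's zoo.cfg (raising it only hides the leak, as Andre says; the value below is just an example):

```
# zoo.cfg: max concurrent connections from a single client IP
# (10 by default in this zookeeper version; 0 disables the limit)
maxClientCnxns=60
```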
Hi there-
re: "that we have to reuse the Configuration object"
You are probably referring to this...
http://hbase.apache.org/book.html#client.connections
... yes, that is general guidance on client connection..
re: "do i have to create a pool of Configuration objects, to share them
synchron
Hi folks,
i'm running into an interesting issue:
we have a zookeeper cluster running on 3 servers
we run mapreduce jobs using org.apache.hadoop.conf.Configuration to pass
parameters to our mappers
the string based (key/value) approach is imho not the most elegant way, i would
prefer to however p
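The key/value mechanism described above is just `Configuration.set()` in the job driver and `get()` in the mapper, roughly like this (a sketch; the property key and job name are made up):

```java
// Driver side: stash job parameters in the Configuration.
Configuration conf = new Configuration();
conf.set("myapp.start.date", "2011-07-01");  // hypothetical key
Job job = new Job(conf, "my-mr-job");

// Mapper side: read them back in setup().
@Override
protected void setup(Context context) {
    String startDate = context.getConfiguration().get("myapp.start.date");
}
```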
>> Try this:
>>
>> 1. Do the MR job
>> 2. Do the delete from the shell
>> 3. Check that it was deleted from the shell
>> 4. Run a major compaction of the table on the shell (e.g.
>> "major_compact ")
>> 5. Re-run the MR job
>> 6. Check if the value is there again.
>>
>> And finally let us know here :)
>>
>> Lars
>> On Thu, Dec 2, 2010 at 2:48 AM, 梁景明 wrote:
>> > 0.20.6
>> >
>> > 2010/12/2 Lars George
>> >
>> >> What version of HBase are you using?
>> >>
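Lars's steps 2-4 map onto hbase shell commands roughly like this (table and row names are placeholders):

```
hbase> deleteall 'mytable', 'row1'     # step 2: delete from the shell
hbase> get 'mytable', 'row1'           # step 3: verify it is gone
hbase> major_compact 'mytable'         # step 4: major compaction
```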
On Dec 1, 2010, at 9:24, 梁景明 wrote:
i found that if i didn't control the timestamp of the put,
mapreduce can run; otherwise mapreduce works just one time.
the question is i scan by timestamp to get my data,
so putting the timestamp is part of my scan.
any ideas? thanks.
2010/12/1 梁景明
Hi, i found a problem in my hbase mapreduce case.
when first running mapreduce, TableMapReduceUtil runs ok.
and i used the hbase shell to delete some data from the table that the
mapreduce job wrote.
then i ran mapreduce to insert some new data.
nothing changed, mapreduce didn't work.
after that i drop
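For context, "controlling the timestamp of the put" means the explicit timestamp argument below (a sketch against the 0.20/0.90 client API; table, row and column names are placeholders). A delete writes a tombstone at a timestamp, and until a major compaction removes that tombstone it masks any later put made with the same or an older explicit timestamp, which would match the behaviour described in this thread and is why Lars asks for a major compaction in step 4:

```java
// Sketch: explicit-timestamp put, and a timestamp-bounded scan.
HTable table = new HTable(conf, "mytable");   // placeholder name
long ts = 1291161600000L;                      // example fixed timestamp
Put put = new Put(Bytes.toBytes("row1"));
put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), ts, Bytes.toBytes("value"));
table.put(put);

Scan scan = new Scan();
scan.setTimeRange(ts, ts + 1);                 // "i scan by timestamp"
```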