Wasn't all the effort to go to end-to-end "protobuf messaging" meant to
support rolling upgrades across major versions?
Perhaps I'm missing the point, but for us post-Singularity release,
the assumption was that all upgrades, major & minor, could be done
"rolling" as protobufs would ensure b
Might this be a good time to _not_ throw IOException and instead throw
something along the lines of retryable / non-retryable exceptions,
similar to the hierarchy in asynchbase?
Since clients have to change anyways ... perhaps it is a good time to
introduce this change?
--S
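A minimal sketch of the kind of split being proposed (all class names here are hypothetical, loosely modeled on asynchbase's recoverable/non-recoverable exception hierarchy; keeping IOException as the base is one way to preserve source compatibility for existing callers):

```java
import java.io.IOException;

// Hypothetical names - a sketch of a retryable / non-retryable split,
// loosely modeled on asynchbase's exception hierarchy.
abstract class HBaseClientException extends IOException {
    HBaseClientException(String msg) { super(msg); }
    /** True if the caller can reasonably retry the same operation. */
    abstract boolean isRetryable();
}

class RetryableException extends HBaseClientException {
    RetryableException(String msg) { super(msg); }
    @Override boolean isRetryable() { return true; }
}

class NonRetryableException extends HBaseClientException {
    NonRetryableException(String msg) { super(msg); }
    @Override boolean isRetryable() { return false; }
}

public class ExceptionHierarchyDemo {
    public static void main(String[] args) {
        // New code can switch on isRetryable(); old code still catches IOException.
        HBaseClientException e = new RetryableException("region moved");
        System.out.println(e.getMessage() + " retryable=" + e.isRetryable());
    }
}
```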
On Mon, Aug 5, 2
I would like to second Laxman's proposal. Currently, some of the
"default" hbase configuration is targeted at newbies, so as to
avoid basic questions on the mailing list ... which is ok. I
think we at least need something like an hbase-prod.xml that lists
more realistic values for a pro
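For illustration, a hypothetical hbase-prod.xml fragment (the property names below are real HBase settings; the values are purely illustrative, not tuned recommendations):

```xml
<?xml version="1.0"?>
<!-- Hypothetical hbase-prod.xml: values are illustrative only. -->
<configuration>
  <property>
    <name>hbase.regionserver.handler.count</name>
    <value>50</value>
    <description>The out-of-the-box default of 10 is low for
    production write loads.</description>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>60000</value>
    <description>Shorter than the default so dead region servers
    are detected faster.</description>
  </property>
</configuration>
```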
The numOps is not a monotonic counter ... see this thread:
http://old.nabble.com/HBase-jmx-stats-td31683839.html
--S
On Tue, May 29, 2012 at 11:52 PM, Vladimir Tretyakov
wrote:
> Hi, I am new to HBase and am trying to extract HBase metrics via JMX, and I
> noticed that the "NumOps" metrics reset periodically
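In other words, since NumOps is reset at the start of each metrics period, a collector has to treat every sample as a per-period count and accumulate, rather than diff a supposedly cumulative counter. A sketch (the sample values are made up):

```java
public class NumOpsAccumulator {
    public static void main(String[] args) {
        // Hypothetical per-period NumOps samples scraped via JMX. Because
        // the metric resets each period, diffing consecutive samples is
        // wrong - each sample IS the delta for its period, so we sum them.
        long[] samples = {120, 95, 130};
        long total = 0;
        for (long s : samples) {
            total += s;
        }
        System.out.println("total ops = " + total);  // 120 + 95 + 130
    }
}
```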
You may be hitting this: https://issues.apache.org/jira/browse/HBASE-5481
--Suraj
On Sun, Apr 8, 2012 at 11:36 AM, Mikael Sitruk wrote:
> Hi devs.
>
> I have a strange situation with my cluster when an address cannot be
> resolved.
> Few days ago I had two entries in a DNS file, so a computer cou
> read from the region when doing the prePut() and came up working fine - no
> deadlock issues. Checked for doing bothPut and checkAndPut, and didn't come
> up with anything locking.
>
> Only heavy concurrent use will be slowed down by the reads, but nothing
> should be b
e into the patches ... but I thought it
better to discuss this before rolling it out to the larger community
... :)
Thanks,
--Suraj
On Fri, Dec 9, 2011 at 1:31 PM, Suraj Varma wrote:
> Hi:
> I opened a jira ticket on this:
> https://issues.apache.org/jira/browse/HBASE-4999
>
> I h
>
> Cheers
>
> On Sun, Dec 4, 2011 at 8:39 AM, Suraj Varma wrote:
>
>> Jesse:
>> >> Quick soln - write a CP to check the single row (blocking the put).
>>
>> Yeah - given that I want this to be atomically done, I'm wondering if
>> this would even
lementation - it would
really open up CAS operations to do functional constraint checking,
rather than just value comparisons.
--Suraj
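A sketch of the generalization being suggested, in pseudocode (the predicate-taking variant is hypothetical; the client API today only offers the literal-value form):

```
// today: CAS succeeds only if the stored cell equals a literal value
checkAndPut(row, family, qualifier, expectedValue, put)

// suggested: CAS succeeds if an arbitrary predicate over the current
// row state holds, e.g. "balance - withdrawal >= 0"
checkAndPut(row, predicate(currentRowState), put)
```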
On Sun, Dec 4, 2011 at 8:32 AM, Suraj Varma wrote:
> Thanks - I see that the lock is taken internally by checkAndMutate.
>
> I'm wondering whether i
ed so that atomicity can be achieved across preCheckAndPut() and
> checkAndMutate().
>
> Cheers
>
> On Sat, Dec 3, 2011 at 4:54 PM, Suraj Varma wrote:
>
>> Just so my question is clear ... everything I'm suggesting is in the
>> context of a single row (not cross ro
e
> constraint doesn't work as expected, until a patch is made for constraints.
>
> Feel free to open up a ticket and link it to 4605 for adding the local
> table access functionality, and we can discuss the de/merits of adding the
> access.
>
> -Jesse
>
> On Sat, Dec
I'm looking at the preCheckAndPut / postCheckAndPut api with
coprocessors and I'm wondering ... are these pre/post checks done
_after_ taking the row lock or is the row lock only done within the
checkAndPut api.
I'm interested in seeing if we can implement something like:
(in pseudo sql)
update ta
I think you may be running into an error while executing saveVersion.sh
under the hbase-0.90.1/src directory - this fails if cygwin
is not available on the Windows PATH (e.g. if you do ls in a Windows
command window, does it list files correctly?)
Also - specifically, what command did you run to bui
Thanks, tsuna for the detailed response. I totally agree with your points.
> No it doesn't make sense. HBase's jar actually contains a
> copy-pasted-hacked version of the Hadoop RPC code. Most of the RPC
> stuff happens inside the HBase jar, it only uses some helper functions
> from the Hadoop j
Sorry - missed the user group in my previous mail.
--Suraj
On Sun, Mar 6, 2011 at 10:07 PM, Suraj Varma wrote:
> Very interesting.
> I was just about to send an additional mail asking why the HBase client also
> needs the hadoop jar (thereby tying the client to the hadoop version as
>
Thanks all for your insights into this.
I would agree that providing mechanisms to support no-outage upgrades going
forward would really be widely beneficial. I was looking forward to Avro for
this reason.
Some follow up questions:
1) If the asynchbase client were to do this (i.e. talk the wire protocol and a
This is interesting - so, doesn't asynchbase have any dependency on hbase
jar or hadoop jar? How does it achieve this version independence?
Also - does this mean that we could just swap the zookeeper quorum to that
of a different version cluster and achieve a no outage upgrade (from
client's persp