I can’t envision any incompatibilities, and nodes running the different JDKs
shouldn’t have any issues communicating. But depending on the stakes, you may
wish to build out either a simple lab or a full staging environment with a
snapshot of all the data, and use it to develop a playbook for the rollout.
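As a starting point for such a playbook, a dry-run script that only prints the per-node steps can be sketched like this. The host names and the JDK-switch step are placeholders; `graceful_stop.sh --restart --reload` is the stock HBase rolling-restart helper, but adapt the paths to your deployment:

```shell
#!/bin/sh
# Dry-run sketch of a per-node rolling JDK upgrade playbook.
# Nothing here touches a real cluster; it only prints the commands
# an operator would run. Hosts and the JDK-switch step are placeholders.

plan_node() {
  # Print the steps we'd run for host $1.
  echo "ssh $1 'sudo <switch-jdk-command>'"                   # placeholder: however you flip JDKs
  echo "ssh $1 'bin/graceful_stop.sh --restart --reload $1'"  # drain regions, restart, reload them
  echo "ssh $1 'jps | grep HRegionServer'"                    # confirm the regionserver came back
}

for h in rs1.example.com rs2.example.com; do
  echo "== $h =="
  plan_node "$h"
done
```

Running it per node (rather than cluster-wide) keeps the blast radius to one regionserver at a time.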
> under 100ms.
> > >
> > > On Tue, Apr 26, 2016 at 6:25 AM Saad Mufti wrote:
> > >
> > > > From what I can see in the source code, the default is actually even
> > > > lower, at 100 ms (can be overridden with hbase.reg
I see similar log spam while the system has reasonable performance. Was the
250 ms default chosen with SSDs and 10GbE in mind, or something? I guess I'm
surprised that a sync write passing through several JVMs to two remote
datanodes would be expected to consistently happen that fast.
Regards,
On Mon, Apr 25,
o
0.5
hbase.ipc.server.callqueue.handler.factor
0.5
Regards,
Kevin
On Sat, Apr 16, 2016 at 9:27 PM, Vladimir Rodionov wrote:
> There are separate RPC queues for reads and writes in 1.0+ (not sure about
> 0.98). You need to set the sizes of these queues accordingly.
>
> -Vlad
>
> On Sat, Apr 16, 2016 at
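Vlad's pointer about separate read/write queues corresponds to a few hbase-site.xml knobs in 1.0+, in the same family as the `hbase.ipc.server.callqueue.handler.factor` setting quoted earlier in the thread. A sketch with illustrative values (the 0.5s are examples, not recommendations):

```xml
<!-- hbase-site.xml sketch: split the RPC call queues (HBase 1.0+).
     Values are illustrative only; tune for your workload. -->
<property>
  <name>hbase.ipc.server.callqueue.handler.factor</name>
  <value>0.5</value> <!-- number of call queues as a fraction of handler count -->
</property>
<property>
  <name>hbase.ipc.server.callqueue.read.ratio</name>
  <value>0.5</value> <!-- fraction of the queues dedicated to reads -->
</property>
<property>
  <name>hbase.ipc.server.callqueue.scan.ratio</name>
  <value>0.5</value> <!-- of the read queues, fraction reserved for long scans -->
</property>
```

With reads and writes in separate queues, a burst of writes can no longer starve read requests of queue slots, which is the symptom described below.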
Hi,
Using OpenTSDB 2.2 with its "appends" feature, I see a significant impact on
read performance while writes are happening. If a process injects a few
hundred thousand points in batch, the call queues on the region servers
blow up, and until they drain, a new read request is basically blocked at
AM, Ted Yu wrote:
>
> > Can you look at the master log during this period to see what procedure
> > was retried?
> >
> > Turn on DEBUG logging if necessary and pastebin the relevant portion of
> > the master log.
> >
> > Thanks
> >
> > > On Apr 11, 2016,
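Ted's DEBUG suggestion is usually applied in `conf/log4j.properties` on the master; a sketch (the broad logger is noisy, so the master-only logger is often enough):

```
# conf/log4j.properties sketch: raise HBase logging to DEBUG.
# Broad (all of HBase) -- very noisy:
log4j.logger.org.apache.hadoop.hbase=DEBUG
# Narrower, master classes only -- often sufficient for procedure retries:
log4j.logger.org.apache.hadoop.hbase.master=DEBUG
```

Remember to revert to INFO afterward, since DEBUG output can grow the logs quickly on a busy cluster.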
Hi,
I'm running HBase 1.2.0 on FreeBSD via the ports system (
http://www.freshports.org/databases/hbase/), and it is generally working
well. However, in an HA setup, the HBase master spins at 200% CPU usage
while it is active; the load follows the active master and disappears on
the standby. Since t
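One way to narrow down sustained CPU spin like this is to take a few `jstack <master-pid>` dumps several seconds apart and see which threads stay RUNNABLE across all of them. A minimal triage sketch (the sample dump text below is made up for illustration):

```python
import re
from collections import Counter

def thread_states(jstack_output: str) -> Counter:
    """Count threads by state in a jstack dump.

    A spinning process typically shows the same few RUNNABLE threads
    staying hot across consecutive dumps; compare two dumps taken a
    few seconds apart to find them.
    """
    states = re.findall(r'java\.lang\.Thread\.State: (\w+)', jstack_output)
    return Counter(states)

# Made-up sample resembling jstack output, for demonstration only.
sample = """\
"main" #1 prio=5
   java.lang.Thread.State: RUNNABLE
"IPC Reader 0" #12 daemon
   java.lang.Thread.State: WAITING (parking)
"ProcedureExecutor-0" #30 daemon
   java.lang.Thread.State: RUNNABLE
"""

print(thread_states(sample))  # Counter({'RUNNABLE': 2, 'WAITING': 1})
```

On FreeBSD, `top -H` (threads mode) can then map the hot LWPs back to the thread names in the dump.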