Congratulations, Zheng
Anoop John wrote on Tue, Aug 6, 2019 at 5:31 PM:
> Congrats Zheng.
>
> Anoop
>
> On Tue, Aug 6, 2019 at 8:52 AM OpenInx wrote:
>
> > I'm so glad to join the PMC. Apache HBase is a great open source project
> > and the
> > community is also very nice and friendly. In the coming days,
Congratulations
OpenInx wrote on Thu, Aug 1, 2019 at 3:17 PM:
> Congratulations, Sakthi.
>
> On Thu, Aug 1, 2019 at 3:09 PM Jan Hentschel <
> jan.hentsc...@ultratendency.com> wrote:
>
> > Congrats Sakthi
> >
> > From: Reid Chan
> > Reply-To: "user@hbase.apache.org"
> > Date: Thursday, August 1, 2019 at
I tried to build the project (cloned from GitHub) with "mvn package -DskipTests" and
got this error:
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile
(default-compile) on project hbase-server: Compilation
failure [ERROR]
ok, I got you
Thanks Stack.
Stack wrote on Fri, Mar 29, 2019 at 5:45 AM:
> The end key of one region is the start key of the next, so checking the
> startkey is sufficient?
> Thanks Kevin,
> S
>
>
> On Tue, Mar 26, 2019 at 1:47 PM kevin su wrote:
>
> > Hi Users,
> >
>
if (splitPoint != null &&
    Bytes.compareTo(hri.getStartKey(), splitPoint) == 0) {
  throw new IOException("should not give a splitkey which equals to startkey!");
}
...
...
Thanks,
Kevin
Stack wrote on Wed, Mar 27, 2019 at 11:07 PM:
> That sounds right Kevin. Mind adding pointer to where in the code yo
Hi Users,
I found that when we start to split a region using splitRegionAsync in
HBaseAdmin,
it only checks whether the splitPoint equals the start key.
Should we also check whether the splitPoint equals the end key?
Thanks.
Kevin
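The check Kevin proposes can be sketched in plain Java without HBase on the classpath — here using java.util.Arrays in place of HBase's Bytes.compareTo, with method and exception choices that are illustrative only, not HBase's actual API:

```java
import java.util.Arrays;

// Hypothetical sketch of validating a split point against both region
// boundaries. Names are mine; HBase's real code only rejects a split
// point equal to the start key.
public class SplitKeyCheck {
    static void validateSplitPoint(byte[] startKey, byte[] endKey, byte[] splitPoint) {
        if (splitPoint != null && Arrays.equals(startKey, splitPoint)) {
            throw new IllegalArgumentException("split key equals start key");
        }
        // The additional check under discussion: an empty end key means
        // "last region", so only compare when the end key is non-empty.
        if (splitPoint != null && endKey.length > 0 && Arrays.equals(endKey, splitPoint)) {
            throw new IllegalArgumentException("split key equals end key");
        }
    }

    public static void main(String[] args) {
        byte[] start = {1}, end = {9};
        validateSplitPoint(start, end, new byte[]{5}); // mid-range split point: accepted
        boolean rejected = false;
        try {
            validateSplitPoint(start, end, new byte[]{9}); // equals end key: rejected
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println(rejected); // prints "true"
    }
}
```

Splitting at the end key would produce an empty daughter region, which is why the symmetric check seems worth asking about.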
Ok, I got you.
Thanks for your reply.
Stack wrote on Fri, Mar 15, 2019 at 1:10 PM:
> File an issue Kevin? Maybe attach a patch?
> Thank you,
> S
>
> On Wed, Mar 13, 2019 at 1:51 AM kevin su wrote:
>
> > Hi,
> >
> > I cloned the latest HBase from GitHub
with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
There are additional blank lines; I think this should be fixed.
Kevin Su,
Thanks
I can’t envision any incompatibilities, and nodes running the different JDKs
shouldn’t have any issues communicating, but depending on the stakes you may
wish to build out either a simple lab or a complex staging environment with
a snapshot of all the data to develop a playbook for doing the rollout.
Congrats Peter~~
Regards,
Kevin Su
Duo Zhang wrote on Tue, Jan 22, 2019 at 9:36 AM:
> On behalf of the Apache HBase PMC I am pleased to announce that Peter
> Somogyi
> has accepted our invitation to become a PMC member on the Apache HBase
> project.
> We appreciate Peter stepping
ommand
[ERROR] mvn -rf :hbase-shaded
OS : windows
maven version : 3.5.4
hbase branch : master
I didn't edit anything in repository.
I used mvn clean package; did I use the wrong command?
Best Regards,
Kevin
(rahul.gidw...@gmail.com) wrote:
Are you using coprocessors? Can you tell us any more about what led to
this?
Thanks
On Tue, May 15, 2018 at 4:27 AM Kevin GEORGES <ke...@d33d33.fr> wrote:
> We are running HBASE 1.4.0
>
>
> On May 15, 2018 at 1:10:15 PM, Kevin GEO
We are running HBASE 1.4.0
On May 15, 2018 at 1:10:15 PM, Kevin GEORGES (ke...@d33d33.fr) wrote:
Hello,
We find region server abort with the following exception:
2018-05-15 08:23:23,920 ERROR
[RpcServer.default.FPBQ.Fifo.handler=27,queue=7,port=16020]
regionserver.HRegion: Asked to modify
ssors are: [org.apache.hadoop.hbase.coprocessor.example.BulkDeleteEndpoint
The error about memstoreSize becoming negative appears at a steady rate
(hundreds/sec) before the abort.
Any ideas?
Thanks,
Kevin
Looks like this might have triggered
https://issues.apache.org/jira/browse/HBASE-20581
Kevin Risden
On Mon, May 14, 2018 at 8:46 AM, Kevin Risden <kris...@apache.org> wrote:
> We are using HDP 2.5 with HBase 1.2.x. We think we found that the PUT vs
> POST documentation on the H
ase.apache.org/1.2/apidocs/org/apache/hadoop/hbase/rest/package-summary.html#operation_create_schema
Kevin Risden
"hopefully this week"... famous last words.
Finally got around to creating a JIRA: HBASE-19852 Close to having the
patch to submit done as well.
Kevin Risden
On Thu, Jan 11, 2018 at 10:02 AM, Kevin Risden <kris...@apache.org> wrote:
> "HBase Thrift2 "implement
m not
really looking to rewrite the Hue HBase Thrift module) There didn't look to
be much code shared between Thrift 1 and Thrift 2 server implementations.
Thrift 1 looks very much like HiveServer2 and the 401 bail out early might
also apply there.
I'll open a JIRA and throw up a patch hopefully this
HBase
master.
Side note: I saw the notes about HBase Thrift v1 was meant to go away at
some point but looks like it is still being depended on.
Kevin Risden
er this table whether there will be any data locality. If not
> please explain
>
> Thanks
>
--
Kevin O'Dell
Field Engineer
850-496-1298 | ke...@rocana.com
@kevinrodell
<http://www.rocana.com>
gt;
> For example, I have difficulties answering the following questions:
> * can I shorten my off-peak hours range?
> * can I afford to do compactions more often? or more aggressively?
> * how much degrades my performance if region size is becoming too large?
>
> HBase version I'm
> Am I doing something terribly wrong?
>
> Thanks in advance!
> Best regards,
> Lydia
--
Kevin O'Dell
Field Engineer
850-496-1298 | ke...@rocana.com
@kevinrodell
<http://www.rocana.com>
;
>
> > In my opinion, 1M/s input data will result in only 70MByte/s write
> > throughput to the cluster, which is quite a small amount compare to the 6
> > region servers. The performance should not be bad like this.
> >
> > Does anybody have an idea why the performance stops at 600K/s?
> > Is there anything I have to tune to increase the HBase write throughput?
> >
>
>
> If you double the clients writing, do you see an uptick in the throughput?
>
> If you thread dump the servers, can you tell where they are held up? Or if
> they are doing any work at all relative?
>
> St.Ack
>
--
Kevin O'Dell
Field Engineer
850-496-1298 | ke...@rocana.com
@kevinrodell
<http://www.rocana.com>
.
On Fri, Mar 17, 2017 at 1:55 PM, Kevin O'Dell <ke...@rocana.com> wrote:
> Hi Jeff,
>
> You can definitely lower the memstore, the last time I looked there it
> had to be set to .1 at lowest it could go. I would not recommend disabling
> compactions ever, bad things will oc
How about disabling some regular operations to save CPU time. I think
> Compaction is one of those we'd like to stop.
>
> thanks
>
> Jeff
>
--
Kevin O'Dell
Field Engineer
850-496-1298 | ke...@rocana.com
@kevinrodell
<http://www.rocana.com>
Adam Davidson <
> adam.david...@bigdatapartnership.com> wrote:
>
> > Hi Kevin,
> >
> > when creating the Configuration object for the HBase connection
> > (HBaseConfiguration.create()), you often need to set a number of
> properties
> > on the resulti
Thank you Adam Davidson.
2016-08-01 18:39 GMT+08:00 Adam Davidson <
adam.david...@bigdatapartnership.com>:
> Hi Kevin,
>
> when creating the Configuration object for the HBase connection
> (HBaseConfiguration.create()), you often need to set a number of properties
> on
Hi, all:
I installed HBase via Ambari and found its ZooKeeper URL is /hbase-unsecure.
When I use the Java API to connect to HBase, the program hangs.
After killing it, I found this message:
WARN ZKUtil: hconnection-0x4d1d2788-0x25617464bd80032,
quorum=Centosle02:2181,Centosle03:2181,Centosle01:2181,
681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
2016-06-21 9:15 GMT+08:00 kevin <kiss.kevin...@gmail.com>:
> I have worked out this question :
> https://alluxio.atlassian.net/browse/ALLUXIO-2025
>
> 2016-06-20 21:02 GMT+08:00 Jean-M
and restart...
>
> 2016-06-20 3:22 GMT-04:00 kevin <kiss.kevin...@gmail.com>:
>
> > *I got some error:*
> >
> > 2016-06-20 14:50:45,453 INFO [main] zookeeper.ZooKeeper: Client
> > environment:java.library.path=/home/dcos/hadoop-2.7.1/lib/native
> > 2016-06
master:master:6] wal.FSHLog:
FileSystem's output stream doesn't support getPipeline; not available;
fsOut=alluxio.client.file.FileOutStream
Is this important?
2016-06-16 11:31 GMT+08:00 kevin <kiss.kevin...@gmail.com>:
> I want to test if run on alluxio could improve
> perform
xio 1.1.0 needed ?
>
> Can you illustrate your use case ?
>
> Thanks
>
> On Wed, Jun 15, 2016 at 7:27 PM, kevin <kiss.kevin...@gmail.com> wrote:
>
> > hi,all:
> >
> > I wonder whether running HBase on Alluxio/Tachyon is possible and a good
> > idea,
Hi, all:
I wonder whether running HBase on Alluxio/Tachyon is possible and a good
idea; can anybody share their experience? Thanks.
I will try HBase 0.98.16 with Hadoop 2.7.1 on top of Alluxio 1.1.0.
s, by keeping most
> GC's
> > > under 100ms.
> > >
> > > On Tue, Apr 26, 2016 at 6:25 AM Saad Mufti <saad.mu...@gmail.com>
> wrote:
> > >
> > > > From what I can see in the source code, the default is actually even
> > > lower
> >
I see similar log spam while system has reasonable performance. Was the
250ms default chosen with SSDs and 10ge in mind or something? I guess I'm
surprised a sync write several times through JVMs to 2 remote datanodes
would be expected to consistently happen that fast.
Regards,
On Mon, Apr 25,
hbase.ipc.server.callqueue.handler.factor
0.5
Regards,
Kevin
On Sat, Apr 16, 2016 at 9:27 PM, Vladimir Rodionov <vladrodio...@gmail.com>
wrote:
> There are separate RPC queues for read and writes in 1.0+ (not sure about
> 0.98). You need to set sizes of these queues accordingly.
>
> -Vlad
>
> On S
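For reference, the separate read/write queues Vlad describes are driven by a small set of hbase-site.xml properties. A sketch (values illustrative; double-check the read/scan ratio property names against your exact 1.0+ release):

```xml
<property>
  <name>hbase.ipc.server.callqueue.handler.factor</name>
  <value>0.5</value> <!-- number of call queues = handlers * factor -->
</property>
<property>
  <name>hbase.ipc.server.callqueue.read.ratio</name>
  <value>0.5</value> <!-- fraction of queues dedicated to reads vs writes -->
</property>
```

With only the handler factor set (as in Kevin's config above), the queues are still shared between reads and writes; the read ratio is what actually keeps one workload from starving the other.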
ly blocked at the
end of the line.
Any recommendations for keeping reads balanced vs writes?
Regards,
Kevin
I confirm the fix, submitted a ports bump as
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=208739.
Regards,
Kevin
On Mon, Apr 11, 2016 at 7:09 AM, Matteo Bertozzi <theo.berto...@gmail.com>
wrote:
> that should be fixed in 1.2.1 with HBASE-15422
>
> Matteo
>
>
> On M
Hi,
I'm running HBase 1.2.0 on FreeBSD via the ports system (
http://www.freshports.org/databases/hbase/), and it is generally working
well. However, in an HA setup, the HBase master spins at 200% CPU usage
when it is active and this follows the active master and disappears when
standby. Since
On behalf of the development community, I am pleased to announce the
release of YCSB 0.7.0.
Highlights:
* GemFire binding replaced with Apache Geode (incubating) binding
* Apache Solr binding was added
* OrientDB binding improvements
* HBase Kerberos support and use single connection
* Accumulo
I would like to get the same information about the regions of a table that
appear in the web UI (i.e. region name, region server, start/end key,
locality), but through the hbase shell.
(The UI is flaky/slow, and furthermore I want to process this information as
part of a script.)
After much
,
- Andy
Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)
--
Kevin O'Dell
Field Enablement, Cloudera
accidental.
Use at your own risk.
Michael Segel
michael_segel (AT) hotmail.com
--
Kevin O'Dell
Field Enablement, Cloudera
to make is that with respect to HBase, you still
need to think about the cluster as a whole.
On Apr 2, 2015, at 7:41 AM, Kevin O'dell kevin.od...@cloudera.com
wrote:
Hi Mike,
Sorry for the delay here.
How does the HDFS load balancer impact the load balancing of HBase? --
The
HDFS
is still buffered
somewhere when hbase put the data into the memstore?
Reading src code may cost me months, so a kindly reply will help me a
lot... ...
Thanks very much!
Best Regards,
Ming
--
Kevin O'Dell
Systems Engineer, Cloudera
until you kill all the cache right? Or was this an old JIRA I was thinking
of?
On Thu, Nov 20, 2014 at 3:37 PM, Ted Yu yuzhih...@gmail.com wrote:
The indices are always cached.
Cheers
On Nov 20, 2014, at 12:33 PM, Kevin O'dell kevin.od...@cloudera.com
wrote:
I am also under
=1000. I suspect this may be a
block cache issue. My question is if/how to disable the block cache for the
scan queries? This is taking out writes and causing instability on the
cluster.
Thanks,
Pere
--
Kevin O'Dell
Systems Engineer, Cloudera
.
Any ideas?
Thanks,
Kevin
Matt,
You should create your own proto file and compile that with the Google
Protocol Buffer compiler. Take a look at the SingleColumnValueFilter's
code:
https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java#L327
You
All machines use ipv4
On Tue, Oct 21, 2014 at 1:36 PM, Ted Yu yuzhih...@gmail.com wrote:
Do you use ipv6 ?
If so, this is related:
HBASE-12115
Cheers
On Tue, Oct 21, 2014 at 10:26 AM, Kevin kevin.macksa...@gmail.com wrote:
Hi,
I have connected a client machine with two network
BTW, the error looks like you didn't distribute your custom filter to your
region servers.
On Tue, Oct 21, 2014 at 1:34 PM, Kevin kevin.macksa...@gmail.com wrote:
Matt,
You should create your own proto file and compile that with the Google
Protocol Buffer compiler. Take a look
at 9:02 PM, Matt K matvey1...@gmail.com wrote:
Thanks Kevin!
I was under the impression, probably mistakenly, that as of 0.96 placing
the filter on HDFS under the hbase lib directory is sufficient and the RS should
load the filter dynamically from hdfs. Is that not the case?
On Tuesday, October 21, 2014
Also, if you do end up using dynamic loading, you'll need a way to version
your filters because the RS will not reload a JAR if it changes.
On Tue, Oct 21, 2014 at 9:46 PM, Kevin kevin.macksa...@gmail.com wrote:
I haven't tried dynamic loading of filters on RS, but I know it does
exist. See
Hi,
The value of my table is a Map.
I want to know how I can get only the value (with no keys sent from the region
server), or get a subset of the value (a Map) from HBase.
BR,
Kevin.
Hi, Ted
Thanks for your suggestion. But I want to know whether HBase can return the Map
to me directly instead of cells.
BR,
Kevin.
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: September 19, 2014 10:44
To: user@hbase.apache.org
Subject: Re: How to let hbase just return
,
Kevin
Hi, everyone
My application will hold tens of thousands of ResultScanners to get data. Will
this hurt performance and network resources?
If so, is there any way to solve it?
Thanks,
Kevin.
of thousands ResultScanners in the meantime.
I want to know whether this will hurt performance and network resources, and
if so, is there any way to solve it?
Best regards,
Kevin.
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: August 26, 2014 16:49
To: user@hbase.apache.org
Cc
Hi, Ted
I think you are right. But we must hold the ResultScanner for a while. So is
there any way to reduce the performance loss? Or is there any way to share the
connection?
Best regards,
Kevin.
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: August 27, 2014 11:36
a:739)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Can anyone help me solve it?
Thanks,
Kevin.
Hi, all
I am now using Spark to manipulate HBase, but I can't use HBaseTestingUtility
to do unit tests, because Spark needs Guava 15.0 and above while HBase needs
Guava 14.0.1. These two versions are incompatible. Is there any way to solve
this conflict with Maven?
Thanks,
Kevin.
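One common workaround in the Hadoop ecosystem (a sketch only — the plugin version and relocated package prefix are illustrative, and this assumes it is acceptable to bundle your own Guava copy) is to shade and relocate Guava with the maven-shade-plugin, so your code and its dependencies stop competing over a single version on the classpath:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- Rewrite Guava packages inside the shaded jar so the
               bundled 15.0 cannot clash with HBase's 14.0.1 -->
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Whether this helps for HBaseTestingUtility specifically depends on which side of the conflict runs in the test JVM, so treat it as a starting point rather than a confirmed fix.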
Hi Ozhang,
If you are only bulk loading into HBase, then memstore flush size should
not matter. You are most likely looking to lower the upper/global memstore
limits.
On Aug 3, 2014 2:23 PM, ozhang ozhangu...@gmail.com wrote:
Hello,
In our hbase cluster memstore flush size is 128 mb. And to
Upon insert, lower the global setting not the flush size :)
On Aug 3, 2014 3:01 PM, ozhang ozhangu...@gmail.com wrote:
Hi Kevin,
We guess that, on region server start up, HBase reserves some memory for each
memstore. So we want to decrease this value. You are saying that the memstore
size doesn't
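The "global setting" Kevin refers to is the region server's aggregate memstore limit, not the per-region flush size. A sketch using the 0.98-era property names (values illustrative; 1.0+ renamed these to hbase.regionserver.global.memstore.size and its lower-limit counterpart):

```xml
<property>
  <name>hbase.regionserver.global.memstore.upperLimit</name>
  <value>0.3</value> <!-- max fraction of heap for all memstores; default 0.4 -->
</property>
<property>
  <name>hbase.regionserver.global.memstore.lowerLimit</name>
  <value>0.25</value> <!-- flushing works down to this fraction before unblocking -->
</property>
```

Memstore memory is accounted per region server against these fractions; an idle memstore does not pre-reserve its full flush size at startup, which is why lowering the flush size alone does not free heap.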
I am reading data off of HDFS that doesn't all get loaded into a single
table. With the current way of bulk loading I can load to the table that
most of the data will end up in, and I can use the client API (i.e., Put)
to load the other data from the file into the other tables.
The current bulk
Hi Jeremy,
I always recommend turning on snappy compression; I have seen ~20%
performance increases.
On Jun 14, 2014 10:25 AM, Ted Yu yuzhih...@gmail.com wrote:
You may have read Doug Meil's writeup where he tried out different
ColumnFamily
compressions :
https://blogs.apache.org/hbase/
prohibited. If you have received this message in error, please
immediately notify the sender and/or notificati...@carrieriq.com and
delete or destroy any copy of this message and its attachments.
--
Kevin O'Dell
Systems Engineer, Cloudera
memstore
--
Kevin O'Dell
Systems Engineer, Cloudera
* Search Analytics
Solr Elasticsearch Support * http://sematext.com/
--
Kevin O'Dell
Systems Engineer, Cloudera
Rohit,
64GB heap is not ideal, you will run into some weird issues. How many
regions are you running per server, how many drives in each node, any other
settings you changed from default?
On Jan 24, 2014 6:22 PM, Rohit Dev rohitdeve...@gmail.com wrote:
Hi,
We are running Opentsdb on CDH 4.3
Have you tried writing out an hfile and then bulk loading the data?
On Jan 4, 2014 4:01 PM, Ted Yu yuzhih...@gmail.com wrote:
bq. Output is written to either Hbase
Looks like Akhtar wants to boost write performance to HBase.
MapReduce over snapshot files targets higher read throughput.
as possible later on)
Kevin, my current understanding of bulk load is that you generate
StoreFiles and later load them through a command line program. I don't want to
do any manual steps. Our system is getting data every 15 minutes, so the
requirement is to automate it through the client API completely.
.
Regards,
Andy.
--
Kevin O'Dell
Systems Engineer, Cloudera
this message in context:
http://apache-hbase.679495.n3.nabble.com/Get-all-columns-in-a-column-family-tp4053696.html
Sent from the HBase User mailing list archive at Nabble.com.
--
Kevin O'Dell
Systems Engineer, Cloudera
:
Encountered problems when prefetch META table: ...
Does this depend on the number of threads with which I insert the data?
--
Kevin O'Dell
Systems Engineer, Cloudera
it is
possible), and try bringing up the cluster again. hbck will not work as
none of the region servers are up. Anyone have any other ideas?
Thanks,
Raheem
--
Kevin O'Dell
Systems Engineer, Cloudera
in the RS logs to see what this region
can not come back online...
JM
2013/12/10 Kevin O'dell kevin.od...@cloudera.com
Hey Raheem,
You can sideline the table into tmp(mv /hbase/table /tmp/table, then
bring HBase back online. Once HBase is back you can use HBCK to repair
your META
a table that grows very fast so the
region keeps splitting; is it possible that the table could keep gaining
regions until all the resources run out?
Thanks.
Kim
--
Kevin O'Dell
Systems Engineer, Cloudera
Dynamics
+7 812 640 38 76
Skype: ivan.v.tretyakov
www.griddynamics.com
itretya...@griddynamics.com
--
Kevin O'Dell
Systems Engineer, Cloudera
if everything has been replicated? Do I query Zookeeper and check if
the RS queues are empty? Or is HBase replication not the right fit for my
use case?
I am using HBase 0.94.2.
Thanks in advance for any advice!
--
Kevin
on Android
--
Kevin O'Dell
Systems Engineer, Cloudera
John,
Out of Memory Error. You can add this to your code (assuming it is in
your release): scan.setBatch(batch);
On Wed, Sep 11, 2013 at 11:26 AM, John johnnyenglish...@gmail.com wrote:
@Kevin: I changed the hbase.client.keyvalue.maxsize from 10MB to 500MB,
but the regionserver still
Can you attach a screen shot of the HMaster UI? It appears ZK is connecting
fine, but can't find .META.
On Aug 25, 2013 8:57 AM, Shengjie Min shengjie@gmail.com wrote:
Hi Jean-Marc,
You meant my cloudera vm or my client? Here is my /etc/hosts
cloudera vm:
127.0.0.1
, are you able to access the VM from
outside?
Like, are you able to access the WebUI from outside of the VM with
something like http://cloudera:60010?
JM
2013/8/25 Shengjie Min shengjie@gmail.com
On 25 August 2013 21:08, Kevin O'dell kevin.od
Shengjie,
Looks like you are binding to localhost on your services. Please make
sure you correct it so you bind on the interface for zk.
On Aug 25, 2013 10:32 AM, Shengjie Min shengjie@gmail.com wrote:
Sure, Kevin,
http://imgur.com/SQ3Zao9
Shengjie
On 25 August 2013 22:22, Kevin
QQ what is your caching set to?
On Aug 22, 2013 11:25 AM, Pavan Sudheendra pavan0...@gmail.com wrote:
Hi all,
A serious question.. I know this isn't one of the best hbase practices but
I really want to know..
I am doing a join across 3 table in hbase.. One table contain 19m records,
one
?
Thanks,
Viral
--
Thanks and Regards,
Vimal Jain
--
Kevin O'Dell
Systems Engineer, Cloudera
My questions are :
1) How is this thing working? It is working because Java can over-allocate
memory. You will know you are using too much memory when the kernel starts
killing processes.
2) I just have one table whose size at present is about 10-15 GB , so what
should be ideal memory
Hi Inder,
Here is an excellent blog post which is a little dated:
http://www.larsgeorge.com/2009/11/hbase-vs-bigtable-comparison.html?m=1
On Aug 4, 2013 10:55 AM, Inder Pall inder.p...@gmail.com wrote:
Kevin
Would love to hear your thoughts around hbase not big table.
Thanks
inder
Does it exist in meta or hdfs?
On Aug 1, 2013 8:24 AM, Jean-Marc Spaggiari jean-m...@spaggiari.org
wrote:
My master keep logging that:
2013-07-31 21:52:59,201 WARN
org.apache.hadoop.hbase.master.AssignmentManager: Region
270a9c371fcbe9cd9a04986e0b77d16b not found on server
,
270a9c371fcbe9cd9a04986e0b77d16b, aff4d1d8bf470458bb19525e8aef0759]
Can I just delete those zknodes? Worst case hbck will find them back from
HDFS if required?
JM
2013/8/1 Kevin O'dell kevin.od...@cloudera.com
Does it exist in meta or hdfs?
On Aug 1, 2013 8:24 AM, Jean-Marc Spaggiari jean-m...@spaggiari.org
the znodes but got the same result. So I shut down
all
the RS and restarted HBase, and now I have 0 regions for this table.
Running HBCK. Seems that it has a lot to do...
2013/8/1 Kevin O'dell kevin.od...@cloudera.com
Yes you can if HBase is down, first I would copy .META
If that doesn't work you probably have an invalid reference file and you
will find that in RS logs for the HLog split that is never finishing.
On Aug 1, 2013 1:38 PM, Kevin O'dell kevin.od...@cloudera.com wrote:
JM,
Stop HBase
rmr /hbase from zkcli
Sideline META
Run offline meta repair
Kevin O'dell kevin.od...@cloudera.com
If that doesn't work you probably have an invalid reference file and you
will find that in RS logs for the HLog split that is never finishing.
On Aug 1, 2013 1:38 PM, Kevin O'dell kevin.od...@cloudera.com wrote:
JM,
Stop HBase
rmr /hbase from
,
- Andy
Problems worthy of attack prove their worth by hitting back. - Piet
Hein
(via Tom White)
--
Best regards,
- Andy
Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)
--
Kevin O'Dell
Systems Engineer
:
What would happen to this ?
System.out.println(c.compareTo(Bytes.toBytes(30)));
On Thu, Jul 18, 2013 at 5:55 PM, Kevin kevin.macksa...@gmail.com
wrote:
Sure, try using the BinaryComparator. For example,
BinaryComparator c = new
BinaryComparator
Sure, try using the BinaryComparator. For example,
BinaryComparator c = new BinaryComparator(Bytes.toBytes(200));
System.out.println(c.compareTo(Bytes.toBytes(201))); // returns -1
On Thu, Jul 18, 2013 at 4:28 PM, Frank Luo j...@merkleinc.com wrote:
That requires creating my
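For illustration, the semantics behind the BinaryComparator example above can be sketched without HBase on the classpath — assuming Bytes.toBytes(int) produces a big-endian 4-byte array and BinaryComparator compares byte arrays lexicographically as unsigned values (class and method names here are my own):

```java
import java.nio.ByteBuffer;

public class BinaryCompareSketch {
    // Big-endian 4-byte encoding, like HBase's Bytes.toBytes(int)
    static byte[] toBytes(int v) {
        return ByteBuffer.allocate(4).putInt(v).array();
    }

    // Unsigned lexicographic byte comparison, like BinaryComparator
    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d < 0 ? -1 : 1;
        }
        return Integer.compare(a.length, b.length);
    }

    public static void main(String[] args) {
        System.out.println(compare(toBytes(200), toBytes(201))); // -1, matching Kevin's example
        System.out.println(compare(toBytes(200), toBytes(30)));  // 1, answering Frank's question
    }
}
```

Because the encoding is fixed-width big-endian, the byte order matches the numeric order for non-negative ints, so 200 vs 30 compares as greater (returns 1) just as the integers would.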
...@carrieriq.com and
delete or destroy any copy of this message and its attachments.
--
Kevin O'Dell
Systems Engineer, Cloudera
a stable HBase cluster with 16 or 24GB RS
heaps).
Thanks in advance,
--Suraj
--
Kevin O'Dell
Systems Engineer, Cloudera
. Perhaps those
parentheses made that statement look like an optional statement. Just
to
clarify it was mandatory.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Sat, Jun 22, 2013 at 9:45 PM, Kevin O'dell
kevin.od...@cloudera.com
wrote:
If you run ZK with a DN/TT
If you run ZK with a DN/TT/RS please make sure to dedicate a hard drive and
a core to the ZK process. I have seen many strange occurrences.
On Jun 22, 2013 12:10 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org
wrote:
You HAVE TO run a ZK3, or else you don't need to have ZK2 and any ZK
failure