What load do you see on the system? I am wondering if the bottleneck is on
the client side.
On Fri, Nov 2, 2012 at 9:07 PM, yun peng wrote:
Hi, All,
In my HBase cluster, I observed that a Put() executes faster than a Get().
Since HBase is optimized towards writes, I wonder what may affect Put
performance in the distributed setting below.
HBase setup:
My HBase cluster has three nodes: one hosts ZooKeeper and the HMaster, and
the other two are slaves.
Update:
I pre-split the regions and my big data-load MR job worked! Thanks for your
help.
One note, however: it takes time to create a table with pre-split regions,
and I have not figured out why.
I continue to see regions as "not deployed" for at least 20-30 minutes
after the table is created with pre-split regions.
That was my initial plan too, but I was wondering if there was any
other best practice for the delete. So I will go that way.
Thanks,
JM
2012/11/2, Shrijeet Paliwal:
Not sure exactly what is happening in your job, but in one of the delete
jobs I wrote, I was creating an instance of HTable in the setup method of my
mapper:
delTab = new HTable(conf, conf.get(TABLE_NAME));
and performing the delete in the map() call using delTab. So no, you do not
have access to the table directly.
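The lifecycle described above (open the table handle once in setup(), reuse it in every map() call, release it afterwards) can be sketched as follows. This is a minimal illustration, not the real Hadoop/HBase API: DeleteMapper and FakeTable are hypothetical stand-ins, and a real job would extend org.apache.hadoop.mapreduce.Mapper and issue Delete objects against an actual HTable.

```java
import java.util.ArrayList;
import java.util.List;

public class DeleteMapperSketch {
    // Stand-in for the HTable handle; records row keys instead of issuing
    // delete RPCs against a cluster.
    static class FakeTable {
        final List<String> deleted = new ArrayList<>();
        void delete(String rowKey) { deleted.add(rowKey); }
        void close() { /* a real HTable would flush and release here */ }
    }

    // Stand-in for a Hadoop Mapper: the table handle is created once in
    // setup() and shared by every map() call, as in the job described above.
    static class DeleteMapper {
        private FakeTable delTab;

        void setup() { delTab = new FakeTable(); }

        // Delete the row when its value says so (here: an empty value).
        void map(String rowKey, String value) {
            if (value.isEmpty()) {
                delTab.delete(rowKey);
            }
        }

        void cleanup() { delTab.close(); }

        FakeTable table() { return delTab; }
    }

    public static void main(String[] args) {
        DeleteMapper m = new DeleteMapper();
        m.setup();
        m.map("row1", "keep");
        m.map("row2", "");  // empty value -> this row gets deleted
        m.cleanup();
        System.out.println(m.table().deleted);  // prints [row2]
    }
}
```

The point of the pattern is that the (expensive) table connection is built once per mapper, not once per record.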
Sorry, one last question.
In the map method, I have access to the row using the values
parameter. Now, based on the value content, I might want to delete it.
Do I have access to the table directly from one of the parameters? Or
should I call the delete using an HTableInterface from my pool?
Thanks,
Yep, you perfectly got my question.
I just tried and it's working perfectly!
Thanks a lot! I now have a lot to play with.
JM
2012/11/2, Shrijeet Paliwal:
JM,
I personally would choose to put it in neither the hadoop libs nor the hbase
libs. Have them go to your application's own install directory.
Then you could set the variable HADOOP_CLASSPATH to have your jar (also
include the hbase jars, hbase dependencies, and any dependencies your
program needs).
And to execute
Hi Shrijeet,
Helped a lot! Thanks!
Now, the only thing I need to know is where's the best place to put my
JAR on the server. Should I put it in the hadoop lib directory? Or
somewhere in the HBase structure?
Thanks,
JM
2012/10/29, Shrijeet Paliwal:
> In line.
>
> On Mon, Oct 29, 2012 at 8:11 A
Nope. I'm honestly not sure how the files changed, but I will keep an eye
on it.
On Fri, Nov 2, 2012 at 2:22 PM, Kevin O'dell wrote:
Do you use Puppet?
On Fri, Nov 2, 2012 at 1:13 PM, Dan Brodsky wrote:
Ram,
I wanted to follow up with you since you helped me with your comment below.
It turns out that the ZK configuration files somehow got changed (reverted
to their default values?), and I'm not sure who/when/how. The zoo.cfg files
didn't have the list of quorum peers, and the myid files that tol
On Fri, Nov 2, 2012 at 10:17 AM, Thanh Do wrote:
Hi all,
Could anybody please show me how to apply a patch to hbase-core-trunk?
I looked over the HBase book as well as the mailing list but could not find
how.
Simply running this command in the hbase-core-trunk directory does not help:
patch -p0 < patchfile
Many thanks,
Thanh
3. if the pool is out:
I get a normal HTable instance, because HTablePool.getTable() will call
findOrCreateTable(); once there is no table available, it creates an HTable
instance instead of a PooledHTable instance. In this case, once I am done
with the query, I call HTable.close(), and it will just release the instance
rather than returning it to the pool.
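The find-or-create behaviour described above can be sketched with a small self-contained pool. SimplePool and Table here are stand-ins I made up for illustration, not the real org.apache.hadoop.hbase.client.HTablePool API: getTable() hands back an idle cached instance when one exists, otherwise creates a fresh one, and close() returns an instance to the pool only while the pool has room.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PoolSketch {
    interface Table { void close(); }

    static class SimplePool {
        private final Deque<Table> idle = new ArrayDeque<>();
        private final int maxSize;

        SimplePool(int maxSize) { this.maxSize = maxSize; }

        // Mirrors the findOrCreateTable() idea: hand back an idle instance
        // if the pool has one cached, otherwise create a plain one.
        Table getTable() {
            Table t = idle.poll();
            return (t != null) ? t : newTable();
        }

        private Table newTable() {
            return new Table() {
                @Override public void close() {
                    // While the pool has room, cache this instance for
                    // reuse; once it is full, close() simply releases it.
                    if (idle.size() < maxSize) {
                        idle.push(this);
                    }
                }
            };
        }

        int idleCount() { return idle.size(); }
    }

    public static void main(String[] args) {
        SimplePool pool = new SimplePool(1);
        Table a = pool.getTable();  // pool empty -> fresh instance
        Table b = pool.getTable();  // still empty -> another fresh instance
        a.close();                  // room in the pool -> cached for reuse
        b.close();                  // pool already full -> just released
        System.out.println(pool.idleCount());  // prints 1
    }
}
```

The design point is the same as in the quoted explanation: going over the pool size does not fail, it just means the extra instances are not retained after close().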
Additionally, don't take it for granted that an RDBMS and HBase are the
same thing. Check out these sections of the RefGuide if you haven't
already:
http://hbase.apache.org/book.html#datamodel
http://hbase.apache.org/book.html#schema
On 11/1/12 11:01 PM, "Shumin Wu" wrote:
>Have you ta
On 2012-10-26, at 9:59 PM, Stack wrote:
> On Thu, Oct 25, 2012 at 1:24 AM, Oliver Meyn (GBIF) wrote:
>> Hi all,
>>
>> I'm on cdh3u3 (hbase 0.90.4) and I need to provide a bunch of row keys based
>> on a column value (e.g. give me all keys where column "dataset" = 1234).
>> That's straightforward