Thank you so much Serega.
Regards,
Krishna
On Sun, Sep 28, 2014 at 11:01 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
https://pig.apache.org/docs/r0.11.0/api/org/apache/pig/backend/hadoop/hbase/HBaseStorage.html
I'm not sure how Pig HBaseStorage works. I suppose it would read all
Hi
Even when the RS throws this Exception, the client side will start a new
Scanner and retry. Do you just see this in the log, or is the scan failing altogether?
What caching do you use on the Scan? When most of the rows are filtered
out at the server side, it takes more time to fetch and return the
Hi Anoop,
I receive this error in client side, and pretty sure the scan failed.
I'm using default caching, so it should be 100, right?
About scan time out period, I will try to set it higher, probably 1 hour.
BTW, I'm using hbase 0.96.0.
Best regards,
Henry
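Both knobs discussed above can be set from the client; a minimal sketch (the one-hour value follows Henry's plan, and the caching value is illustrative; in 0.96 the timeout property is read on both client and server, so it should also be raised in the server's hbase-site.xml):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;

public class ScanTuning {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Scanner lease timeout in milliseconds; 0.96 reads this on both the
        // client and the server, so raise it on both sides.
        conf.setInt("hbase.client.scanner.timeout.period", 3600000); // 1 hour

        Scan scan = new Scan();
        // With heavy server-side filtering, a smaller caching value makes each
        // next() call return sooner, reducing the chance of a lease timeout.
        scan.setCaching(10); // client-side default is 100
    }
}
```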
Hi
We are trying to migrate to *HBase 0.98.1 (CDH 5.1.1) from 0.94.6*,
to use *Bucket Cache + Coprocessors* and to check the performance improvement, but looking
into the API I found that a lot has changed.
I tried using the HBase-example jar for the row count coprocessor, the
coprocessor jar
How many threads in your client?
On Mon, Sep 29, 2014 at 4:05 PM, Henry Hung ythu...@winbond.com wrote:
Hi Anoop,
I receive this error in client side, and pretty sure the scan failed.
I'm using default caching, so it should be 100, right?
About scan time out period, I will try to set it
Hello guys
I am using the HBase Java API to connect to HBase remotely, but when I
executed the Java code, I got a MasterNotRunningException. When I debugged
the code, I came to know that ZooKeeper was returning the address of the
HMaster as localhost.localdomain, so the client was trying to search
Hi,
You should not do this, as localhost should resolve to the host itself. This is
probably some missing property in the client's HBase configuration (make sure
you have a proper hbase-site.xml on the client's classpath or set the configuration
programmatically). As a start, check if you had set the
Java client:
package com.example.hbaseconnect;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import
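The quoted snippet is truncated; a minimal self-contained sketch of such a client, with the ZooKeeper quorum set programmatically as suggested above, could look like the following (the hostname "quickstart.cloudera" is a placeholder for your VM's resolvable hostname):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class HBaseConnect {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Point the client at the cluster's ZooKeeper ensemble explicitly
        // instead of relying on a classpath hbase-site.xml.
        conf.set("hbase.zookeeper.quorum", "quickstart.cloudera"); // placeholder
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        // Throws MasterNotRunningException if the master cannot be reached
        // at the address ZooKeeper advertises.
        HBaseAdmin.checkHBaseAvailable(conf);
        System.out.println("HBase is available");
    }
}
```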
Does the IP you listed properly handle DNS resolution? You should make sure
forward and reverse look up work properly for hosts used.
Generally you should also configure things via host name and not IP address.
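This forward/reverse round trip can be checked from the client JVM itself; a quick sketch using only java.net.InetAddress (replace the hostname with your master's):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsCheck {
    // Forward lookup (name -> address) followed by reverse lookup
    // (address -> name); the two should round-trip for cluster hosts.
    static String roundTrip(String host) throws UnknownHostException {
        InetAddress addr = InetAddress.getByName(host);  // forward lookup
        return addr.getCanonicalHostName();              // reverse lookup
    }

    public static void main(String[] args) throws UnknownHostException {
        String host = "localhost"; // replace with your master's hostname
        System.out.println(host + " -> " + InetAddress.getByName(host).getHostAddress());
        System.out.println("reverse -> " + roundTrip(host));
    }
}
```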
On Mon, Sep 29, 2014 at 8:17 AM, SACHINGUPTA sac...@datametica.com wrote:
bq. rowcount endpoint and the Example protos
Can you describe how you deployed the rowcount endpoint on regionservers ?
bq. want to utilize Bucket Cache of HBase
You need 0.96+ in order to utilize Bucket Cache
Cheers
On Mon, Sep 29, 2014 at 1:48 AM, Vikram Singh Chandel
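For reference, enabling the off-heap BucketCache on 0.96+ is a RegionServer-side configuration; a sketch of the relevant hbase-site.xml properties (the size is illustrative, and the off-heap engine also requires direct memory via -XX:MaxDirectMemorySize in hbase-env.sh):

```xml
<!-- hbase-site.xml on each RegionServer; values are illustrative -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <!-- MB of off-heap bucket cache -->
  <value>4096</value>
</property>
```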
yes
On Monday 29 September 2014 08:18 PM, Sean Busbey wrote:
Does the IP you listed properly handle DNS resolution? You should make sure
forward and reverse look up work properly for hosts used.
Generally you should also configure things via host name and not IP address.
On Mon, Sep 29, 2014
What does the hostname command return on the master?
Do you have any backup-masters configured?
What are the contents of your regionservers file?
On Mon, Sep 29, 2014 at 9:55 AM, SACHINGUPTA sac...@datametica.com wrote:
yes
On Monday 29 September 2014 08:18 PM, Sean Busbey wrote:
Does the
The 'hostname' command returns localhost.localdomain.
I have a Cloudera QuickStart VM, so I am using that as my cluster,
and I have whatever Cloudera provides.
I am trying to access the HBase on the Cloudera VM remotely.
On Monday 29 September 2014 08:38 PM, Sean Busbey wrote:
What does the
It sounds like your VM host is not properly configured for remote access;
the canonical hostname should not be any of the localhost derivatives if
you are going to try to use the machine remotely.
If necessary, there are configuration settings you can use to change what
interface HBase looks at
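The interface-related settings mentioned above are hbase-site.xml properties on the server side; a sketch (the interface name is an assumption for your VM):

```xml
<!-- hbase-site.xml; makes the daemons advertise the address of a specific NIC -->
<property>
  <name>hbase.master.dns.interface</name>
  <value>eth0</value> <!-- illustrative interface name -->
</property>
<property>
  <name>hbase.regionserver.dns.interface</name>
  <value>eth0</value>
</property>
```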
Hey Ted,
I was in the process of comparing insert throughputs, which we
discussed, using YCSB. What I could find is that when I split the data into
multiple column families, the insert throughput is coming down to half when
compared to persisting into a single column family. Do you think this is
Can you give a bit more detail, such as:
the release of HBase you're using
number of column families where slowdown is observed
size of cluster
release of hadoop you're using
Thanks
On Mon, Sep 29, 2014 at 9:43 AM, Nishanth S nishanth.2...@gmail.com wrote:
Hey Ted,
I was in the process of
HBase release: 0.96.1
The number of column families at which the issue is observed is 2. Earlier I had one
single column family where all the data was persisted. In the new case I
was storing all metadata into column family 1 (less than 1 KB) and a blob
in a second column family (around 7 KB).
We have 9 node
bq. had to spawn multiple put requests in this case because there is no API
for sending insert requests to multiple column family.
Could this be related to the slowdown you observed ?
Are you able to use HTable API and see if the same slowdown is reproduced ?
BTW try 0.98.6.1 if you can.
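On the "no API for multiple column families" point: a single Put can carry cells for several families, so one client call covers both the metadata and the blob family. A minimal sketch against the 0.96 HTable API (the table, family, and qualifier names are hypothetical):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class MultiFamilyPut {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "usertable"); // hypothetical table name
        try {
            // One Put, one row key, cells in two column families.
            Put put = new Put(Bytes.toBytes("row1"));
            put.add(Bytes.toBytes("meta"), Bytes.toBytes("name"), Bytes.toBytes("alice"));
            put.add(Bytes.toBytes("blob"), Bytes.toBytes("data"), new byte[7 * 1024]);
            table.put(put); // single RPC batch for both families
        } finally {
            table.close();
        }
    }
}
```

Note that even with one Put, the server still writes to a separate store per family, so some overhead versus a single family is expected.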
Anyone have any advice on this? I'm going to cross post to the hadoop users
group since this seems to be a YARN related issue
Best,
Just after the 1st instance of such an exception at the client side, your scan
failed? Because on receiving this Exception, we retry with a new Scanner
automatically. I am not sure whether all of your retries also fail.
I think it will retry with a new Scanner only one more time. (remembering
some work