The region size was increased from 4G to 10G and adjacent regions were merged,
so the number of regions dropped by roughly half. The table capacity then went
from 969.4G to 945.2G. The RegionServer information is as follows:
ServerName | Num. Stores | Num. Storefiles | Storefile Size
> On Mar 18, 2015, at 1:52 AM, Gokul Balakrishnan wrote:
>
>
>
> @Sean this was exactly what I was looking for. Based on the region
> boundaries, I should be able to create virtual groups of rows which can
> then be retrieved from the table (e.g. through a scan) on demand.
>
Huh?
You don’t
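For reference, the region-boundaries approach Gokul describes can be sketched with the 1.0+ client API via RegionLocator (the table name here is hypothetical; grouping logic would be built on top of these boundaries):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public class RegionBoundaries {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("mytable"))) {
      // Each index i is one region's [startKey, endKey); empty byte[] marks table ends.
      Pair<byte[][], byte[][]> keys = locator.getStartEndKeys();
      for (int i = 0; i < keys.getFirst().length; i++) {
        System.out.println("region " + i + ": ["
            + Bytes.toStringBinary(keys.getFirst()[i]) + ", "
            + Bytes.toStringBinary(keys.getSecond()[i]) + ")");
      }
    }
  }
}
```

Each boundary pair can then seed a bounded Scan to retrieve one "virtual group" of rows on demand.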
Hi,
I am using HBase connection pooling via
HConnection hConnection = HConnectionManager.createConnection(conf);
I see that most methods of HConnectionManager are deprecated.
Please tell me the best way to create an HBase connection pool, and what the
future plans for this area are.
thanks
HConnection is also deprecated. It would be better to do:
Connection connection = ConnectionFactory.createConnection(conf);
On Wed, Mar 18, 2015 at 7:55 AM, OM PARKASH Nain wrote:
> Hi,
> I am using hbase connection pooling using
>
> HConnection hConnection=HConnectionManager.createCo
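Following that advice, a minimal sketch of the 1.0-style pattern (no explicit pool; the single Connection is the shared heavyweight object and Table instances are cheap, short-lived accessors; the table and row names are hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ConnectionExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // One Connection per application; it manages its own internal thread/socket pools.
    Connection connection = ConnectionFactory.createConnection(conf);
    try (Table table = connection.getTable(TableName.valueOf("mytable"))) {
      Get get = new Get(Bytes.toBytes("row1"));
      Result result = table.get(get);
      System.out.println(result.isEmpty() ? "no row" : "found row");
    } finally {
      connection.close(); // close once, at application shutdown
    }
  }
}
```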
Hi All,
I am new to HBase and am using Amazon EMR with hbase-0.94.18
installed. I always get "hbase(main):002:0> 15:59:13.023
[main-SendThread(ip-172-31-12-99.ec2.internal:2181)] DEBUG
org.apache.zookeeper.ClientCnxn - Got ping response for sessionid:
0x14c2d99c8f20008 after
Change the loglevel for org.apache.zookeeper to WARN in the
log4j.properties file. (Or from the HBase master web UI)
JM
2015-03-18 12:44 GMT-04:00 Garry Chen :
> Hi All,
> I am new to hbase and using Amazon EMR with hbase
> hbase-0.94.18 install. Always getting "hbase(main):002:
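Concretely, the line to add (or change) in log4j.properties would look something like this (logger name taken from the DEBUG output above):

```properties
# Silence ZooKeeper client chatter (ping responses, etc.) below WARN
log4j.logger.org.apache.zookeeper=WARN
```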
On Tue, Mar 17, 2015 at 11:42 PM, Mike Dillon
wrote:
> Thanks. I'll look into those suggestions tomorrow. I'm pretty sure that
> short-circuit reads are not turned on, but I'll double check when I follow
> up on this.
>
> The main issue that actually led to me being asked to look into this issue
Can you please share your log4j.properties file? Just cut & paste the content
here...
2015-03-18 13:21 GMT-04:00 Garry Chen :
> Hi JM,
> Thank you very much for your information. After following your
> instructions and a reboot, I am still getting that message.
>
> Garry
>
> hbase(main):004:0>
On Tue, Mar 17, 2015 at 9:47 PM, Stack wrote:
>
> > If it's possible to recover all of the file except
> > a portion of the affected block, that would be OK too.
>
> I actually do not see a 'fix' or 'recover' on the hfile tool. We need to
> add it so you can recover all but the bad block (we sho
I haven't filed one myself, but I can do so if my investigation ends up
finding something bug-worthy as opposed to just random failures due to
out-of-disk scenarios.
Unfortunately, I had to prioritize some other work this morning, so I
haven't made it back to the bad node yet.
I did attempt resta
For a 'fix' and 'recover' hfile tool at the HBase level, the relatively easy
thing we can recover is probably the data (KVs) up to the point where we hit
the first corruption-caused exception.
After that, it will not be as easy. For example, if the current key length
or value length is bad, there is n
Realistically this should be weighed by number of machines. If you run a small
5 node cluster, sure, you can upgrade easily. But your vote does not count as
much as somebody who's running 1000 machines.
-- Lars
From: Otis Gospodnetic
To: "user@hbase.apache.org"
Sent: Thursday, March 12,
Hi, I'm trying to use HBaseStorage to read data from HBase.
1. I persist something to HBase each day using the hbase-client Java API.
2. I read data using HBaseStorage via Oozie.
Now I fail to read the persisted data using a Pig script, via HUE or plain Pig.
I don't have any problem reading the data using the Java client API.
What
Hi,
I have an HBase scan with some filters. If the scan goes through 1 million
rows, the filters reduce it down to 100 rows to send back to the client.
How can I know that the scan went through 1 million rows? Is it possible to
see this from the ResultScanner?
Thanks,
In the 1.0+ API, there's no more automatic pooling. Your application uses the
ConnectionFactory (as Solomon says) to get a connection instance for the
application, and retrieves accessor objects (Table, Admin, &c) from that
singleton.
Have a look at https://github.com/ndimiduk/hbase-1.0-api-examples for
Looks like the IPv6 address is not being parsed correctly. Maybe related
to: https://bugs.openjdk.java.net/browse/JDK-6991580
Alok
On Wed, Mar 18, 2015 at 3:13 PM, Serega Sheypak
wrote:
> Hi, I'm trying to use HBaseStorage to read data from HBase
> 1. I do persist smth to hbase each day using hbase
I've had a chance to try out Stack's passed along suggestion of
HADOOP_ROOT_LOGGER="TRACE,console" hdfs dfs -cat and managed to get this:
https://gist.github.com/md5/d42e97ab7a0bd656f09a
After knowing what to look for, I was able to find the same checksum
failures in the logs during the major com
My only complaint about this poll is the labels: "0.94.x - I like stable
releases". It's not really about the stable releases for me, it's more
about the extreme difficulty of overcoming "the singularity" from 0.94 ->
0.96+ with no downtime in a reasonably complex production system.
Hortonwork's
bq. Is it possible to see it from the ResultScanner?
I don't think so.
On Wed, Mar 18, 2015 at 2:20 PM, seanhouse79 wrote:
> Hi,
>
> I have hbase scan with some filters. If the scan going through 1 million
> rows then the filters reduce it down to 100 rows to send back to the
> client.
> How ca
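For what it's worth, later client releases did add scan metrics. Below is a sketch assuming a client where Scan#setScanMetricsEnabled and ScanMetrics#countOfRowsScanned are available (roughly 1.3+; check your version before relying on these names, and note that in 2.0+ ResultScanner#getScanMetrics is preferred):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.metrics.ScanMetrics;

public class ScanMetricsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("mytable"))) {
      Scan scan = new Scan();
      scan.setScanMetricsEnabled(true); // ask the client to collect metrics
      long returned = 0;
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result r : scanner) {
          returned++; // rows that survived the filters
        }
      }
      ScanMetrics metrics = scan.getScanMetrics(); // populated once the scan completes
      System.out.println("rows returned to client: " + returned);
      System.out.println("rows scanned server-side: "
          + metrics.countOfRowsScanned.get());
    }
  }
}
```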
If you haven't already seen it - take a look at the bridge at
https://issues.apache.org/jira/browse/HBASE-12814
We're using it to go through the process now.
Dave
On Wed, Mar 18, 2015 at 5:46 PM, Bryan Beaudreault wrote:
> My only complaint about this poll is the labels: "0.94.x - I like stable
From the HBase perspective, since we don't have a ready tool, the general idea
is that you would need access to the HBase source code and write your own tool.
On the high level, the tool will read/scan the KVs from the hfile similar
to what the HFile tool does, while opening a HFileWriter to dump the good
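The idea above can be sketched roughly as follows, against the 0.98-era HFile API (class and method signatures vary across HBase versions, and the input/output paths are hypothetical; this is an illustration of the approach, not a tested salvage tool):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.HFileScanner;

public class HFileSalvage {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    Path in = new Path(args[0]);   // the corrupt hfile
    Path out = new Path(args[1]);  // where to write the salvaged copy
    CacheConfig cacheConf = new CacheConfig(conf);
    HFile.Reader reader = HFile.createReader(fs, in, cacheConf, conf);
    HFile.Writer writer = HFile.getWriterFactory(conf, cacheConf)
        .withPath(fs, out).create();
    long copied = 0;
    try {
      HFileScanner scanner = reader.getScanner(false, false);
      if (scanner.seekTo()) {
        do {
          // Copy KVs one by one; we stop at the first corruption-caused exception.
          writer.append(scanner.getKeyValue());
          copied++;
        } while (scanner.next());
      }
    } catch (Exception e) {
      System.err.println("Stopping at first bad KV: " + e);
    } finally {
      writer.close();
      reader.close();
    }
    System.out.println("Salvaged " + copied + " KVs to " + out);
  }
}
```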
Hi,
Suppose I have enabled encryption on a column family by setting "ENCRYPTION =>
'AES'".
Now I want to disable encryption for this column family. How can I do this through
the HBase Shell?
As per the alter table syntax, at the column family level we can add CFs, delete
CFs, or set/modify properties. How to r