There are separate RPC queues for reads and writes in 1.0+ (not sure about
0.98). You need to size these queues accordingly.
-Vlad
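A minimal hbase-site.xml sketch of the split call queues Vlad mentions (property names are from HBase 1.x; the ratio values here are only illustrative, not recommendations):

```xml
<!-- fraction of the RPC handler count that becomes separate call queues -->
<property>
  <name>hbase.ipc.server.callqueue.handler.factor</name>
  <value>0.5</value>
</property>
<!-- share of those queues dedicated to reads; the rest serve writes -->
<property>
  <name>hbase.ipc.server.callqueue.read.ratio</name>
  <value>0.5</value>
</property>
<!-- of the read queues, the share reserved for long scans -->
<property>
  <name>hbase.ipc.server.callqueue.scan.ratio</name>
  <value>0.5</value>
</property>
```

With read.ratio above 0, heavy writes can no longer occupy every queue, which is the failure mode described in the OpenTSDB message later in this digest.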
On Sat, Apr 16, 2016 at 4:23 PM, Kevin Bowling wrote:
> Hi,
>
> Using OpenTSDB 2.2 with its "appends" feature, I see significant
>> have a project that needs to store a large number of image and video files,
>> the file size varies from 10MB to 10GB, the initial number of files will be
>> 0.1 billion and would grow over 1 billion, what will be the practical
>> recommendations to store and view these files?
>>
Hi,
I'm using HBase 0.98.7 and I want to get the start keys and end keys of all
blocks in an HFile.
Is there any way to get them?
Thanks,
Van
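One way to inspect block boundaries without writing code is the HFile pretty-printer tool (a sketch; I believe `-b` prints the block index details on 0.98, but check `--help` on your version, and the path below is a placeholder):

```shell
# path is hypothetical: region and store file names come from your own table layout
hbase org.apache.hadoop.hbase.io.hfile.HFile -b -f \
    hdfs://namenode/hbase/data/default/mytable/<region>/<cf>/<hfile>
```

The block index printed this way contains the first key of each data block, from which block start/end ranges can be derived.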
There was HBASE-15370 for backport but it was decided not to backport the
feature.
FYI
On Sat, Apr 16, 2016 at 7:26 PM, Ascot Moss wrote:
> Hi,
>
> About HBase-11339,
> "The size of the MOB data should not be very large; it is better to keep the
> MOB size within 100KB and
Hi,
About HBase-11339,
"The size of the MOB data should not be very large; it is better to keep the
MOB size within 100KB and 10MB. Since MOB cells are written into the
memstore before flushing, large MOB cells stress the memory in region
servers."
Can this be resolved if we provide more RAM in
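For reference, MOB is enabled per column family, with a byte threshold above which cells are moved out of the normal store files. A hedged sketch of the shell syntax from HBASE-11339 (table and family names are hypothetical):

```shell
# cells in 'f' larger than 100KB (102400 bytes) are stored as MOB
create 'photos', {NAME => 'f', IS_MOB => true, MOB_THRESHOLD => 102400}
```

Note that the 100KB–10MB guidance quoted above is about memstore pressure, so more RAM raises but does not remove the ceiling.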
Thanks Ted!
Just visited HBASE-11339; its status is "Resolved", however it is for "Fix
Version: 2.0.0."
How can it be patched into the current HBase stable version (v1.1.4)?
For fault tolerance at the data-center level, I am thinking of using the
HBase replication method to replicate HBase tables to another cluster.
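A sketch of the shell commands for cross-cluster replication (HBase 1.x; the peer id, quorum string, and table/family names are placeholders):

```shell
# register the remote cluster as a peer (ZK quorum of the other cluster)
add_peer '1', "zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase"
# mark the column family for replication
alter 'mytable', {NAME => 'f', REPLICATION_SCOPE => '1'}
```

Replication is asynchronous and per column family, so only families with scope 1 ship edits to the peer.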
Have you taken a look at HBASE-11339 (HBase MOB) ?
Note: this feature does not handle 10GB objects well. Consider storing
GB-sized images on HDFS.
Cheers
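Ted's advice above amounts to routing each object by size: small cells stay in HBase, medium ones fit MOB, and large files go to HDFS with only a path pointer kept in HBase. A self-contained sketch of that decision (the thresholds mirror the 100KB–10MB guidance quoted from HBASE-11339; the class and method names are mine, not an HBase API):

```java
/** Hypothetical size-based placement, per the advice in this thread. */
public class BlobRouter {
    static final long CELL_MAX = 100L * 1024;        // <=100KB: fine as an ordinary cell
    static final long MOB_MAX  = 10L * 1024 * 1024;  // <=10MB: candidate for an HBase MOB cell

    /** Returns where the object's bytes should live; HBase keeps the metadata row either way. */
    public static String placement(long sizeBytes) {
        if (sizeBytes <= CELL_MAX) return "CELL"; // ordinary HBase cell
        if (sizeBytes <= MOB_MAX)  return "MOB";  // MOB-enabled column family
        return "HDFS";                            // write bytes to HDFS, store the path in HBase
    }
}
```

With this split, a 10GB video never touches the memstore; HBase only stores its HDFS location and metadata.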
On Sat, Apr 16, 2016 at 6:21 PM, Ascot Moss wrote:
> Hi,
>
> I have a project that needs to store a large number of image and
Hi,
Is there any document about what's new in HBase v1.x vs 0.98.x?
Regards
Hi,
I have a project that needs to store a large number of image and video files.
The file size varies from 10MB to 10GB; the initial number of files will be
0.1 billion and would grow beyond 1 billion. What would be the practical
recommendations to store and view these files?
#1 One cluster, store
Hi,
Using OpenTSDB 2.2 with its "appends" feature, I see a significant impact on
read performance when writes are happening. If a process injects a few
hundred thousand points in batch, the call queues on the region servers
blow up, and until they drain a new read request is basically blocked at
Hi there,
I fixed the error by trying different Maven dependencies, and that solved
the problem in the end.
First, you should not need, and should NOT include, the hadoop-core jar.
Here is all the code you need in the end to read an HFile from a local file
system.
And I tested this on a brand new
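For reference, a hedged sketch of the one dependency that is typically sufficient here (the version is a guess matching the 0.98.7 mentioned earlier in this digest; `hbase-server` is used because the HFile reader classes live in that module, and it pulls in `hbase-client` and the Hadoop client jars transitively):

```xml
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-server</artifactId>
  <version>0.98.7-hadoop2</version>
</dependency>
```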
Can you verify that HBase is running by logging onto the master node and
checking the Java processes?
If the master is running, can you do a listing of the ZooKeeper znodes (using
zkCli) and pastebin the result?
Thanks
On Sat, Apr 16, 2016 at 8:14 AM, Eric Gao wrote:
> Yes, I have
Yes, I have seen your reply. Thanks very much for your kindness.
This is my hbase-site.xml:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://master:9000/hbase/data</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase</value>
  <description>Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper</description>
</property>
Have you seen my reply ?
http://search-hadoop.com/m/q3RTtJHewi1jOgc21
The actual value for zookeeper.znode.parent could be /hbase-secure (just an
example).
Make sure the correct hbase-site.xml is in the classpath for the hbase shell.
On Sat, Apr 16, 2016 at 7:53 AM, Eric Gao
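The check Ted describes might look like this (the quorum host and port are assumptions; adjust to your cluster):

```shell
# on a ZooKeeper node, open an interactive client session
./zkCli.sh -server master:2181
# then, at the zkCli prompt:
ls /          # does a /hbase (or /hbase-secure, etc.) znode exist at all?
ls /hbase     # if the master wrote it, children such as master and rs appear here
```

Whatever znode actually exists in the first listing is the value zookeeper.znode.parent must point at.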
Dear expert,
I have encountered a problem. When I run the hbase command "status", it shows:
hbase(main):001:0> status
2016-04-16 13:03:02,333 ERROR [main]
client.ConnectionManager$HConnectionImplementation: The node /hbase is not in
ZooKeeper. It should have been written by the master. Check the value