Could you try with the "-XX:+PrintGCApplicationStoppedTime" VM parameter?
Hangs on the VM side are not always caused by GC.
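If it helps, the flag usually goes into the region server's JVM options in hbase-env.sh; a sketch (the log path and which OPTS variable to use depend on your setup):

```shell
# hbase-env.sh -- illustrative; adjust the variable and log path for your install.
# -XX:+PrintGCApplicationStoppedTime logs every stop-the-world pause,
# including safepoints that are NOT triggered by GC, which is what we
# want to see here.
export HBASE_OPTS="$HBASE_OPTS -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/hbase/gc.log"
```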
Thanks,
From: Rural Hunter [ruralhun...@gmail.com]
Sent: July 8, 2014 14:06
To: user@hbase.apache.org
Subject: Region server not accepting connections
Hi,
I'm using hbase-0.96.2. I see that sometimes my region servers don't accept
connections from clients; this can last from 10 minutes to half an hour.
I was not able to connect to port 60020 even with the telnet command
when it happened. After a while, the problem disappeared and the region
server
w.r.t. Apache Slider, see http://slider.incubator.apache.org/
bq. with flexibility of growing and shrinking
The 'flex' action achieves the growing and shrinking.
When the cluster is not needed (for some period of time), you can freeze
the cluster. When it is to be used again, you can thaw the
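A sketch of what the flex/freeze/thaw actions look like with the Slider CLI (cluster and component names here are hypothetical; check the Slider docs for the exact syntax of your version):

```shell
slider flex hbase1 --component HBASE_REGIONSERVER 10  # grow (or shrink) a component to 10 instances
slider freeze hbase1  # stop the running cluster, keeping its state
slider thaw hbase1    # bring the same cluster back later from the saved state
```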
Though I have not looked at it myself, you can run HBase as a long-running
process on YARN (Apache Slider). As far as I understand, you can have an
instance of any size, with the flexibility of growing and shrinking.
Artem Ervits
Data Analyst
New York Presbyterian Hospital
- Original Message
I have never tried MySQL's blob or varbinary. I guess I can look into that.
Thanks for answering my questions.
Arun
On Jul 7, 2014 6:22 PM, "Dima Spivak" wrote:
> Does MySQL's BLOB or VARBINARY satisfy your use case?
>
> As for converting a pseudo-distributed cluster to a distributed one, unless
Does MySQL's BLOB or VARBINARY satisfy your use case?
As for converting a pseudo-distributed cluster to a distributed one, unless
I'm mistaken, you should have no problem doing so. HDFS is quite good at
scaling, whether it's from 10 machines to 20 or from 1 to 10, and I don't
know of any reason that H
I understand. But consider, for example, a use case where even if I don't
have a lot of data, I would rather store serialized objects. Traditional
RDBMSs are not suitable for this. If I can forgo the fail-safe
capabilities, then what is a good choice (if not HBase)?
Also, on a different note, if
In general, production systems run in distributed mode because they
leverage HBase's scalability and reliability; HBase really only shows its
worth when it's charged with managing terabytes of data on a fault-tolerant
file system like HDFS. You lose both of these when you run in standalone
mode, so
Hi Ted,
I have. So the book says there are two types of distributed modes. One is
pseudo distributed, which is used when we want to test HBase's distributed
capabilities using a single machine. As far as I understood, this is just
to verify the use cases and the requirements. Then we have the full
Have you read http://hbase.apache.org/book.html#standalone_dist ?
Cheers
On Mon, Jul 7, 2014 at 3:55 PM, Arun Allamsetty
wrote:
> Hi all,
>
> So this question might be a stupid one, but it has been bugging me
> for a while and I cannot think of a better place to ask this. I am really
Hi all,
So this question might be a stupid one, but it has been bugging me
for a while and I cannot think of a better place to ask this. I am really
impressed with the way HBase works (as a key-value store). Since it stores
everything as a byte array, I find it really convenient to store
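The byte-array point above can be sketched with plain Java serialization; the resulting byte[] is the form an HBase value takes (class and method names here are illustrative, not HBase API):

```java
import java.io.*;
import java.util.*;

// Sketch: turning an arbitrary Serializable object into a byte[] and back.
// In HBase, the byte[] produced by toBytes() is what a Put would carry as
// its value; HBase itself never interprets the bytes.
public class SerializeDemo {
    static byte[] toBytes(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj); // standard Java serialization to a buffer
        }
        return bos.toByteArray();
    }

    @SuppressWarnings("unchecked")
    static <T> T fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (T) ois.readObject(); // rebuild the object from raw bytes
        }
    }

    public static void main(String[] args) throws Exception {
        HashMap<String, Integer> original = new HashMap<>();
        original.put("answer", 42);
        byte[] value = toBytes(original);            // what you would store in a cell
        HashMap<String, Integer> restored = fromBytes(value);
        System.out.println(restored.get("answer"));  // prints 42
    }
}
```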
A bit more context.
Initially we had Facebook go off on 0.89-FB, which had to do (as we heard from
them) with internal process considerations more than anything else. This has
evolved into HydraBase. Later, OhmData revealed another fork. Probably this was
about differentiating and providing pr
Out of curiosity Vladimir, did you feel like a fork of HBase was necessary
because of something about the Apache HBase project's process or community? Or
was it more of a licensing thing (noting you're not using ASL 2)?
On Jul 6, 2014, at 11:26 PM, Vladimir Rodionov wrote:
>>>
>>> Another is
We have a Java application (on Tomcat) that connects to HBase. We get the
errors below when we stop Tomcat. Any thoughts?
SEVERE: The web application [/testapp] appears to have started a thread
named [hbase-tablepool-168-thread-1] but has failed to stop it. This is very
likely to create a memory leak.
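That warning means a thread pool started by the webapp outlived the webapp. The usual fix is to shut the pool down before the context is destroyed (with HBase 0.9x that would mean closing the table pool/connection from a ServletContextListener's contextDestroyed()). A minimal stdlib sketch of the shutdown pattern itself, with illustrative names:

```java
import java.util.concurrent.*;

// Sketch: the general pattern Tomcat is asking for. Any ExecutorService the
// application creates must be shut down and awaited before undeploy,
// otherwise its worker threads (like hbase-tablepool-168-thread-1) leak.
public class PoolShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> { /* some work */ });
        pool.shutdown();                                        // stop accepting new tasks
        boolean done = pool.awaitTermination(5, TimeUnit.SECONDS); // wait for workers to exit
        System.out.println("pool terminated: " + done);         // prints "pool terminated: true"
    }
}
```

In a webapp this code would live in ServletContextListener.contextDestroyed(), alongside closing the HBase connection/pool itself.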
Hi Ted,
I did not at first; I don't know why I didn't realize I could do that. But
then I understood that I can. Thanks for the help though.
Cheers,
Arun
On Jul 3, 2014 10:28 AM, "Ted Yu" wrote:
> Did you read the summary object through HTable API in Job #2 ?
>
> Cheers
>
>
> On Thu, J
By the way, here is another article worth reading about block caches:
http://www.n10k.com/blog/blockcache-101/
On Mon, Jul 7, 2014 at 8:26 AM, Vladimir Rodionov
wrote:
> >>
> >>Another issue is that we cache only blocks. So for workloads with random
> reads where the working set of blocks does not fit into th