For those of you who will bump into this issue, here is what looks like
the solution:
https://issues.apache.org/jira/browse/ZOOKEEPER-1904
It is a ZooKeeper issue fixed in ZooKeeper 3.4.7 and 3.5.0.
cheers
reinis
On 17.07.2014 12:30, Reinis Vicups wrote:
Hi Mikhail,
thank you for your
Hi Wilm
that is actually an interesting option: include the entire JSON path in
the column qualifier (CQ)
2014-09-09 23:17 GMT-07:00 Wilm Schumacher wilm.schumac...@cawoom.com:
as stated above you can use JSON or something similar, which is always
possible. However, if you have to do that very often (and I
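The "entire JSON path in the CQ" idea can be sketched without any HBase dependency: flatten the document and use the joined path as the column qualifier. The class name, method name, and the '.' separator below are all illustrative choices, not from the thread:

```java
import java.util.*;

// Hypothetical sketch: flatten a nested JSON-like map into
// (columnQualifier, value) pairs, where the qualifier is the full
// JSON path joined with '.'. Each pair would become one HBase cell.
public class JsonPathQualifiers {
    @SuppressWarnings("unchecked")
    public static Map<String, String> flatten(String prefix, Map<String, Object> node) {
        Map<String, String> out = new TreeMap<>();
        for (Map.Entry<String, Object> e : node.entrySet()) {
            String path = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            Object v = e.getValue();
            if (v instanceof Map) {
                out.putAll(flatten(path, (Map<String, Object>) v)); // recurse into nested object
            } else {
                out.put(path, String.valueOf(v)); // leaf: qualifier = path, value = leaf
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> address = new LinkedHashMap<>();
        address.put("city", "Berlin");
        address.put("zip", "10115");
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("name", "alice");
        doc.put("address", address);
        System.out.println(flatten("", doc));
    }
}
```

The upside is that individual leaves become addressable cells (Get one qualifier instead of parsing a whole JSON blob); the downside, as noted above, is the per-write flattening cost if updates are frequent.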
Thanks for the update Reinis,
however, from a cursory read of the committed code in ZooKeeper, it doesn't
seem that code path is used by HBase at all. Have you tried the 3.5
release? Does that solve the issue? If upgrading ZK to another release
fixed the issue, perhaps the problem is in a different
Hi,
I developed a distributed scan that creates one thread per region. After
that, I tried to compare timings of Scan vs. DistributedScan.
I have disabled the block cache on my table. My cluster has 3 region servers
with 2 regions each; in total there are 100,000 rows, and I execute a
complete scan.
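The "one thread per region" approach described above can be modeled roughly as follows. Regions are simulated here as in-memory row lists; a real implementation would instead run one HBase Scan bounded by each region's start/end keys. All class and method names are made up for illustration:

```java
import java.util.*;
import java.util.concurrent.*;

// Toy model of a per-region distributed scan: one worker thread per
// region, each "scanning" its region and returning a row count.
public class DistributedScanSketch {
    public static long parallelCount(List<List<String>> regions) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(regions.size());
        List<Future<Long>> futures = new ArrayList<>();
        for (List<String> region : regions) {
            // In a real client this task would open a Scan limited to
            // the region's [startKey, endKey) and iterate its results.
            futures.add(pool.submit(() -> (long) region.size()));
        }
        long total = 0;
        for (Future<Long> f : futures) total += f.get();
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        // 6 regions (3 region servers x 2 regions), rows split among them.
        List<List<String>> regions = new ArrayList<>();
        for (int r = 0; r < 6; r++) {
            List<String> rows = new ArrayList<>();
            for (int i = 0; i < 100; i++) rows.add("row-" + r + "-" + i);
            regions.add(rows);
        }
        System.out.println("total rows: " + parallelCount(regions));
    }
}
```

The key design question is how each task bounds its scan; if every task scans the whole table instead of one region's key range, the parallel version does N times the work.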
out of curiosity, did you see below messages in RS log?
LOG.warn("Snapshot called again without clearing previous. " +
    "Doing nothing. Another ongoing flush or did we fail last attempt?");
Nope
thanks.
On Tue, Sep 9, 2014 at 2:15 AM, Brian Jeltema
Because you really don’t want to do that since you need to keep the number of
CFs low.
Again, you can store the data within the structure and index it.
On Sep 10, 2014, at 7:17 AM, Wilm Schumacher wilm.schumac...@cawoom.com wrote:
as stated above you can use JSON or something similar, which
In the Bay Area next week?
Join HBase committers Alex Newman and Ryan Rawson from WANdisco for “Making
HBase Invincible: Use Cases and Challenges”.
Topics discussed will include:
- HBase in production environments
- Challenges in large-scale deployments
- Upcoming features
Hello Guillermo,
Sounds like some potential contention going on; how many disks per node do
you have?
Can you explain further what you mean by "and I don't know why it's so
fast, it's really much faster than executing a count from hbase shell"?
The count command from the shell uses the
Ryan and Alex:
I got flagged that the below is spam but I let it through. Please talk to
your 'Marketing Coordinator' and explain that we generally like to keep our
community lists free of 'marketing'.
Thanks,
St.Ack
On Wed, Sep 10, 2014 at 9:43 AM, Rehana Lerandeau
Hi,
I'm trying to run an HBase cluster in a development environment with several
nodes. One thing I don't understand when checking the HBase Master is why
all online regions point to a single region server and are not balanced
among all the region servers.
I have 6 different tables, so
Do you see other region servers from your master web UI? If so, if you
run balancer/balance_switch from the shell, what happens?
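If the balancer turns out to be switched off, a minimal sequence to try in the hbase shell might look like this (the exact output depends on the cluster; this is a sketch, not a transcript):

```shell
# Inside `hbase shell`:
balance_switch true   # ensure the balancer is enabled; prints its previous state
balancer              # trigger a balancer run; returns true if it actually ran
status 'simple'       # shows per-region-server region counts to verify the spread
```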
On Wed, Sep 10, 2014 at 10:24 AM, Ivan Fernandez
ivan.fernandez.pe...@gmail.com wrote:
Hi,
I'm trying to run a HBase cluster on a development environment with
Am 10.09.2014 um 17:33 schrieb Michael Segel:
Because you really don’t want to do that since you need to keep the number of
CFs low.
in my example the number of CFs is 1. So this is not a problem.
Best wishes,
Wilm
Hi,
The short question:
Is there any way to update delegation tokens of an existing active HConnection
instance?
Long story:
This is a follow-up to http://osdir.com/ml/general/2014-08/msg27210.html. To
recap, Storm is trying to get delegation tokens from HBase on behalf of a user
who is
I posted step-by-step instructions here,
http://lessc0de.github.io/connecting_hbase_to_elasticsearch.html, on using
Apache HBase/Phoenix with the Elasticsearch JDBC River.
This might be useful to Elasticsearch users who want to use HBase as a
primary data store, and to HBase users who wish to enable
Hi,
I'm running hbase 0.98.1
When using TableMapReduceUtil to init a MR job on a snapshot, I got an
error :
org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file
under hdfs://hbase/hbase/.hbase-snapshot/xxx-snapshot
at
You are probably not using HBase 0.98.1, or you are mixing different versions.
The data.manifest got in with master/0.98.6, so the snapshot was generated
with one of those, and you may be reading it with a version that does not
support the new format.
Matteo
On Wed, Sep 10, 2014 at 11:13 AM, Guangle
Hi Stephen!
Have you taken a look at Apache Gora? It uses Avro for its data model,
which supports nested data structures, and can store in a variety of
backing stores, including HBase.
-Sean
On Tue, Sep 9, 2014 at 4:20 PM, Stephen Boesch java...@gmail.com wrote:
Thanks Michael, yes cells are
Thanks Sean. We have some internal requirements that lead us to most
likely need to stick with native HBase API's. But the suggestion is still
appreciated - I was not aware of that project.
2014-09-10 12:09 GMT-07:00 Sean Busbey bus...@cloudera.com:
Hi Stephen!
Have you taken a look at
Authentication is only performed during RPC connection setup. So
there isn't really a concept of token expiration for an existing RPC
connection. The connection will be authenticated (will not expire)
for as long as it's held open. When it's closed and re-opened, it
should pick up the latest
Correct... you can't use new stuff from old versions.
Matteo
On Wed, Sep 10, 2014 at 12:41 PM, Guangle Fan fanguan...@gmail.com wrote:
Sorry, let me correct my words.
The snapshot was created by 0.98.1 cdh5.1.0 (new format of snapshot)
And reading where the failure happened is using 0.96.1.1 cdh5.0.1
Ok, but here’s the thing… you extrapolate the design out… each column with a
subordinate record will get its own CF.
Simple examples can go very bad when you move to real life.
Again you need to look at hierarchical databases and not think in terms of
relational.
To give you a really good
What I want to say is that I don't understand why a count takes more time
than a complete scan without cache. I thought it should take more time to
scan the table than to execute a count.
Another point is why a distributed scan is slower than a sequential scan.
Tomorrow I'll check how many disks we
Hi, folks
I am trying to find out if we keep the HBase Ref Guide and other docs from
previous releases on the Apache HBase site.
We have:
http://hbase.apache.org/book/book.html
which seems to be the latest from master.
I found this one for 0.94:
http://hbase.apache.org/0.94/book/book.html
But I can
0.98 is very similar to trunk and 0.96 is retired. It hasn't made
sense to make a differentiated guide for 0.98. The trunk version
applies just about everywhere. Where there is a difference in 0.98 it
is mentioned in the relevant section of the guide.
On Wed, Sep 10, 2014 at 2:06 PM, Jerry He
Thanks for writing in with this pointer Alex!
On Wed, Sep 10, 2014 at 11:11 AM, Alex Kamil alex.ka...@gmail.com wrote:
I posted step-by-step instructions here on using Apache Hbase/Phoenix with
Elasticsearch JDBC River.
This might be useful to Elasticsearch users who want to use Hbase as a
Am 10.09.2014 um 22:25 schrieb Michael Segel:
Ok, but here’s the thing… you extrapolate the design out… each column
with a subordinate record will get its own CF.
I disagree. Not by the proposed design. You could do it with one CF.
Simple examples can go
very bad when you move to real life.
Ok. Andrew
Thanks!
On Wed, Sep 10, 2014 at 2:09 PM, Andrew Purtell apurt...@apache.org wrote:
0.98 is very similar to trunk and 0.96 is retired. It hasn't made
sense to make a differentiated guide for 0.98. The trunk version
applies just about everywhere. Where there is a difference in 0.98
Apache HBase 0.98.6 is now available for download. Get it from an
Apache mirror [1] or Maven repository.
The list of changes in this release can be found in the release notes
[2] or following this announcement.
Thanks to all who contributed to this release.
Best,
The HBase Dev Team
1.
Hi,
Just trying to get a feel for what HBase apps look like. I assume that the Java
client API dominates? What other APIs are popular?
Are the apps mostly deployed on the same cluster as HBase or external?
What other things make HBase apps special, if any?
Thanks,
Gunnar
bq. What other APIs are popular
You can also utilize REST: http://hbase.apache.org/book.html#rest
or Thrift: http://hbase.apache.org/book.html#thrift
Disclaimer: I am not an HBase app developer.
Cheers
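For a rough feel of the REST gateway mentioned above, assuming it has been started (e.g. via `hbase rest start`) on the default port 8080, and with a made-up table and row name:

```shell
# Cluster version via the REST gateway
curl http://localhost:8080/version/cluster

# Fetch one row as JSON (cell values come back base64-encoded)
curl -H "Accept: application/json" http://localhost:8080/mytable/row1
```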
On Wed, Sep 10, 2014 at 8:49 PM, Tapper, Gunnar gunnar.tap...@hp.com
wrote:
Hi,
Just trying
Hi Guillermo,
Thanks for the additional information. How large is the difference between
the shell count command and the single-threaded scan you use? E.g., on the
order of 1% or 200%? Can you tell us which filter you are using for the
scan? Have you fully verified that you are in fact not using
Hi Ted,
Yes, I know that you *can* (and Avro etc.) but I'm wondering what people *do*
use. :)
Obviously, I am not an app developer either, spending my time further down
the stack.
Thank you,
Gunnar
+1. Thanks, Alex. I added a blog pointing folks there as well:
https://blogs.apache.org/phoenix/entry/connecting_hbase_to_elasticsearch_through
On Wed, Sep 10, 2014 at 2:12 PM, Andrew Purtell apurt...@apache.org wrote:
Thanks for writing in with this pointer Alex!
On Wed, Sep 10, 2014 at 11:11
HBase apps can run on the same cluster, e.g. MapReduce-based apps, or
externally, as Ted mentioned, using REST or Thrift. Some others have
developed more user-friendly interfaces like Apache Phoenix, which allows
developers to use SQL to interact with HBase.
Also, I don't think there is something that
Which version of HBase?
Can you show us the code?
Your parallel scan with caching 100 takes about 6x as long as the single scan,
which is suspicious because you say you have 6 regions.
Are you sure you're not accidentally scanning all the data in each of your
parallel scans?
-- Lars
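One way to check Lars's suspicion mechanically: the per-thread scan ranges derived from the region split keys must partition the key space without overlap. A plain-Java sketch with no HBase dependency (in a real client the split keys could come from the table's region boundaries; names here are illustrative):

```java
import java.util.*;

// Build the [start, stop) range for each parallel scan from region
// split keys, so each thread reads exactly one region's slice.
public class ScanRanges {
    public static List<String[]> ranges(List<String> splitKeys) {
        List<String[]> out = new ArrayList<>();
        String start = ""; // empty start key = beginning of table
        for (String split : splitKeys) {
            out.add(new String[] { start, split });
            start = split;
        }
        out.add(new String[] { start, "" }); // empty stop key = end of table
        return out;
    }

    public static void main(String[] args) {
        // 6 regions => 5 split keys
        for (String[] range : ranges(Arrays.asList("b", "c", "d", "e", "f")))
            System.out.println("[" + range[0] + ", " + range[1] + ")");
        // If every thread instead scans ["", ""), each one reads the whole
        // table, which would match the ~6x slowdown observed above.
    }
}
```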