Hi,
Coprocessors, introduced in 0.92, can also be used to filter out data,
similar to filters. What are the differences between filters and
coprocessors, leaving aside the code/API? One thing I can think of is that
filters are defined at the client, while coprocessors are defined on the
server. S
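(Not an answer from the thread, just a rough sketch to make the comparison concrete; the table
name, column family, and value below are made up.) A filter is built in client code, serialized
with the Scan, and evaluated on each region server for that one request only. A coprocessor, by
contrast, is loaded on the region servers themselves (via configuration or table attributes) and
intercepts operations for every client, so its logic runs on the server regardless of what the
client sends. The client-side half looks roughly like:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterScanExample {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "mytable");
    Scan scan = new Scan();
    // The Filter object is created here in client code, shipped with the Scan,
    // and evaluated on each region server so only matching rows travel back.
    scan.setFilter(new SingleColumnValueFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("status"),
        CompareOp.EQUAL, Bytes.toBytes("active")));
    ResultScanner scanner = table.getScanner(scan);
    for (Result r : scanner) {
      // process only the rows that passed the filter
    }
    scanner.close();
    table.close();
  }
}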
What I found out is as follows:
12.1.8. RowCounter
RowCounter is a utility that will count all the rows of a table. This is a
good utility to use as a sanity check to ensure that HBase can read all the
blocks of a table if there are any concerns of metadata inconsistency.
$ bin/hbase org.apache
Does anyone have any idea about RainStor?
Open source? How to download? How to use? Performance?
Hi,
I would like to have a client connect to a remote HBase host with just one
node running on it.
hbase-site.xml on server:
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>machine_ip</value>
    <description>The directory shared by RegionServers.</description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be i
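A minimal sketch of the matching client side, assuming the quorum host above and the default
ZooKeeper client port (both are assumptions, adjust to your setup; the table name 'test1' is just
a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class RemoteClientCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Must match hbase.zookeeper.quorum on the server, and must be a hostname/IP
    // reachable from the client machine (not localhost).
    conf.set("hbase.zookeeper.quorum", "machine_ip");
    // Default ZooKeeper client port; change it if the server uses another one.
    conf.set("hbase.zookeeper.property.clientPort", "2181");
    HBaseAdmin admin = new HBaseAdmin(conf);
    // If this prints without hanging or throwing, the client can reach
    // ZooKeeper and the master.
    System.out.println("cluster reachable, table 'test1' exists: "
        + admin.tableExists("test1"));
  }
}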
Yes that's right!!
> To: dalia.mohso...@hotmail.com
> CC: user@hbase.apache.org
> Subject: RE: Important Question
> From: mspre...@us.ibm.com
> Date: Wed, 25 Jan 2012 16:04:57 -0500
>
> Just a couple more questions. Your data will all be in one place, this is
> not a federated architecture, rig
So maybe you are all right, I do find HBase really complex.
So what are the other alternatives, given that I am already using Hadoop as my
backend system?
Kindly check Apixio, which is a similar medical system that adopts Hadoop, so
please check and reply.
Because this concerns my thesis.
Thanks all for
A bit more grist for our mill: what transaction rate do you need to
support? Are you concerned with a lookup or aggregation query "correctly"
including a record that is being concurrently updated?
Thanks,
Mike
Hi there-
As someone who works with medical data I take such analysis very
seriously, but according to the World Health Organization there were 608
cases of measles reported in Egypt in 2011 (page 82). Granted, these are
probably incidence and not prevalence statistics, but the order of
magnitud
Hey everybody,
at the risk of being flamed and barbecued...
to be absolutely honest, I think the NoSQL approach, and with it HBase and
all the other alternatives, doesn't fit your use case at all. You have a complex
domain model, where it is very likely that you will want to search through
your domain spac
Just a couple more questions. Your data will all be in one place, this is
not a federated architecture, right? How much data are we talking about?
It sounds like you want to find/create/update/delete individual records
and do simple aggregations over records identified by a conjunction of
pre
I will explain more to you, Mike.
I am building a Software Oriented Architecture; I want my API to provide some
services such as add/delete patients, search for a patient by name/ID, and count
the number of people who are suffering from measles in Alexandria, Egypt.
Something like that, so I am wondering
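To make that concrete, here is a rough sketch of two of those services on top of the plain HBase
client API; the table name, column family, and row layout (row key = patient ID, for simplicity)
are illustrative assumptions, not a recommendation:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class PatientService {
  private final HTable table;

  public PatientService(Configuration conf) throws IOException {
    // Hypothetical table "patients": row key = patient ID, family "info".
    this.table = new HTable(conf, "patients");
  }

  // "Add patient" service: one Put per patient row.
  public void addPatient(String id, String name, String city, String diagnosis)
      throws IOException {
    Put put = new Put(Bytes.toBytes(id));
    put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes(name));
    put.add(Bytes.toBytes("info"), Bytes.toBytes("city"), Bytes.toBytes(city));
    put.add(Bytes.toBytes("info"), Bytes.toBytes("diagnosis"), Bytes.toBytes(diagnosis));
    table.put(put);
  }

  // "Search by ID" service: a single Get on the row key.
  public String getPatientName(String id) throws IOException {
    Result r = table.get(new Get(Bytes.toBytes(id)));
    byte[] name = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
    return name == null ? null : Bytes.toString(name);
  }
}

The "count measles cases in Alexandria" service is the harder one: it becomes a scan with filters,
a coprocessor, or a MapReduce job, which is exactly where the row-key design questions raised
elsewhere in this thread come in.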
Interesting,
I added this, and my scan did speed up somewhat:
conf.setInt("hbase.client.prefetch.limit", 100);
hTable = new HTable(conf, tableName);
What does this configuration property really control, and how should it be
set to an appropriate value? What is a region, and how d
On Wed, Jan 25, 2012 at 6:21 AM, Tim Robertson
wrote:
> Hi all,
>
Hey Tim.
> This gave me 32 regions across 2 of our 3 region servers (we have HDFS
> across 17 nodes but only 3 machines running RS).
>
The balancer ran? I'd think it'd balance the regions across the three
servers. Something stu
Thanks Geoff! No apology required, that's good stuff. I'll update the
book with that param.
On 1/25/12 2:17 PM, "Geoff Hendrey" wrote:
>Sorry for jumping in late, and perhaps out of context, but I'm pasting
>in some findings (reported to this list by us a while back) that helped
>us to ge
Hi everyone, I have a problem.
I'm trying to count, from Java, the number of rows in a table, but I can't do
it!
I read about RowCounter, but I can't use it. Can someone tell me the code I
have to write to count the rows in a table?
Thanks!
Andrea.
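(Not from the thread, but since this keeps coming up: a minimal client-side counting sketch, with
the table name passed on the command line. For large tables the MapReduce RowCounter mentioned
above is the better tool; FirstKeyOnlyFilter at least keeps this version from shipping whole rows
back to the client.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;

public class CountRows {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, args[0]); // table name from the command line
    Scan scan = new Scan();
    scan.setCaching(1000);                    // fetch rows from the server in batches
    scan.setFilter(new FirstKeyOnlyFilter()); // return only one KeyValue per row
    ResultScanner scanner = table.getScanner(scan);
    long count = 0;
    for (Result r : scanner) {
      count++;                                // one Result per row
    }
    scanner.close();
    table.close();
    System.out.println("rows: " + count);
  }
}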
Sorry for jumping in late, and perhaps out of context, but I'm pasting
in some findings (reported to this list by us a while back) that helped
us to get scans to perform very fast. Adjusting
hbase.client.prefetch.limit was critical for us:
It's even more mysterious than w
I think this is one of those "damned if you do..." situations. If you
want to do a lot of quick single-record lookups (a Get is actually a Scan
underneath the covers), then "1" is what you want. But for MapReduce
jobs, or for scanning over a wide number of records like you're doing,
then you'll
Does it make sense to have better defaults so the performance out of the box is
better?
~Jeff
On 1/25/2012 8:06 AM, Peter Wolf wrote:
Ah ha! I appear to be insane ;-)
Adding the following speeded things up quite a bit
scan.setCacheBlocks(true);
scan.setCaching(1000);
Thank
BTW, what do you mean by "realtime"? Do you mean you want to run some
non-trivial query quickly enough for some sort of interactive use? Can
you give us a feel for the sort of queries that interest you?
Thanks,
Mike
From: Dalia Sobhy
To: "user@hbase.apache.org"
Cc: "u...@hive.ap
Because you specifically cited the medical domain in your question, I
think you might want to talk to Explorys (disclaimer: I work there).
Otherwise, you probably want to look at the HBase book.
On 1/25/12 11:30 AM, "Dalia Sobhy" wrote:
>So what about HBQL??
>And if i had complex queries would
On 25.01.2012 18:30, Dalia Sobhy wrote:
So what about HBQL?
And if I had complex queries, would I get stuck with HBase?
HBQL seems to be unmaintained. The last update seems to be from January 2011,
one year ago.
Also, can anyone provide me with examples of a table in RDBMS transformed into
HBase,
So what about HBQL?
And if I had complex queries, would I get stuck with HBase?
Also, can anyone provide me with examples of a table in RDBMS transformed into
HBase, with real-time query and analytical processing?
Sent from my iPhone
On 2012-01-25, at 6:15 PM, bejoy...@yahoo.com wrote:
> Real Time..
No problem! That's one of the tips in the Performance chapter of the
book/refGuide - always a good thing to double-check because even the most
experienced folks sometimes forget the simple stuff.
On 1/25/12 10:06 AM, "Peter Wolf" wrote:
>Ah ha! I appear to be insane ;-)
>
>Adding the follow
It's back up. Never mind.
On 1/25/12 10:33 AM, "Doug Meil" wrote:
>
>Not only is the hbase website down, but apache.org appears to be down.
>
>http://www.downforeveryoneorjustme.com/hbase.apache.org
>
>http://www.downforeveryoneorjustme.com/www.apache.org
>
>
>
>Doug Meil
>Chief Software A
Real time? Definitely not Hive. Go for HBase, but don't expect HBase to be
as flexible as an RDBMS. You need to choose your row key and column families
wisely, as per your requirements.
For data mining and analytics you can mount a Hive table over the corresponding
HBase table and play on with SQL li
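To make "choose your row key and column families wisely" a bit more concrete, here is a sketch of
creating such a table from Java; the table name, families, and key layout are illustrative
assumptions only:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreatePatientsTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    // One possible layout: row key = <city>#<patientId>, so that a question like
    // "how many measles cases in Alexandria" becomes a prefix scan rather than a
    // full-table scan. "info" holds demographics, "visits" holds admission records.
    HTableDescriptor desc = new HTableDescriptor("patients");
    desc.addFamily(new HColumnDescriptor("info"));
    desc.addFamily(new HColumnDescriptor("visits"));
    if (!admin.tableExists("patients")) {
      admin.createTable(desc);
    }
  }
}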
On 25.01.2012 17:01, Dalia Sobhy wrote:
Dear all,
I am developing an API for medical use, i.e. hospital admissions and everything
about patients, so transactions, queries, and real-time data are important here.
Therefore both real-time and analytical processing are a must.
Therefore which best sui
Ah ha! I appear to be insane ;-)
Adding the following speeded things up quite a bit
scan.setCacheBlocks(true);
scan.setCaching(1000);
Thank you, it was a duh!
P
On 1/25/12 8:13 AM, Doug Meil wrote:
Hi there-
Quick sanity check: what caching level are you using? (default
Hi all,
I am trying to sanitize our setup, and using the PerformanceEvaluation
as a basis to check.
To do this, I ran the following to load it up:
$HADOOP_HOME/bin/hadoop org.apache.hadoop.hbase.PerformanceEvaluation
randomWrite 5
This gave me 32 regions across 2 of our 3 region servers (we have
Hi there-
Quick sanity check: what caching level are you using? (default is 1) I
know this is basic, but it's always good to double-check.
If "language" is already in the lead position of the rowkey, why use the
filter?
As for EC2, that's a wildcard.
On 1/25/12 7:56 AM, "Peter Wolf" wr
I'm confused...
You mention that you are hashing your key, and you want to do a scan with a start
and stop value?
Could you elaborate?
With respect to hashing, if you use a SHA-1 hash, your values will be unique.
(you talked about rehashing ...)
Sent from my iPhone
On Jan 25, 2012, at 7:56 AM, "P
Hello all,
I am looking for advice on speeding up my Scanning.
I want to iterate over all rows where a particular column (language)
equals a particular value ("JA").
I am already creating my row keys using that column in the first bytes.
And I do my scans using partial row matching, like th
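For anyone following along, the partial-row-match scan being described usually looks something
like the sketch below (made-up table name and key layout, not Peter's actual code); the stop row
is simply the next prefix after the start, so the scan covers exactly the keys beginning with the
language code:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class LanguagePrefixScan {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "documents");
    // Row keys start with the two-letter language code, so scanning the
    // half-open range ["JA", "JB") returns exactly the Japanese rows.
    Scan scan = new Scan(Bytes.toBytes("JA"), Bytes.toBytes("JB"));
    scan.setCaching(1000); // the scanner-caching fix discussed in this thread
    ResultScanner scanner = table.getScanner(scan);
    long count = 0;
    for (Result r : scanner) {
      count++; // or process the row
    }
    scanner.close();
    table.close();
    System.out.println("JA rows: " + count);
  }
}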
Sorry, I sent the last mail too soon:
you also need to change your script to:
echo "create '$1','cf1'"
Piping (i.e. running created.sh test1 | hbase shell) should also support
multiple lines echoed by created.sh.
On 25/01/12 11:01, Christian Schäfer wrote:
Tried what you did.
There is f
Hi,
What about
created.sh test1 | $HBASE_HOME/bin/hbase shell
?
On 25/01/12 11:01, Christian Schäfer wrote:
Tried what you did.
There is furthermore printed output that seems to me to indicate that the hbase shell
may not accept the additional arguments (created.sh test1):
ArgumentError: wrong number of ar
Tried what you did.
There is furthermore printed output that seems to me to indicate that the hbase shell
may not accept the additional arguments (created.sh test1):
ArgumentError: wrong number of arguments (2 for 0)
start at /usr/lib/hbase/bin/../bin/hirb.rb:169
(root) at /usr/lib/hbase/bin/../bin/hirb.rb:183
Thanks - `mvn javadoc:javadoc` worked.
Praveen
On Wed, Jan 25, 2012 at 2:43 PM, Ulrich Staudinger <
ustaudin...@activequant.com> wrote:
> Did you try what is written on
> http://maven.apache.org/plugins/maven-javadoc-plugin/usage.html under
> section "And execute any of the following commands:"
Did you try what is written on
http://maven.apache.org/plugins/maven-javadoc-plugin/usage.html under
section "And execute any of the following commands:" ?
Regards
On Wed, Jan 25, 2012 at 10:06 AM, Praveen Sripati
wrote:
> Hi,
>
> Java Doc for 0.92 is not in the tar ball. How do I generate it?
Hi,
Java Doc for 0.92 is not in the tar ball. How do I generate it? I did an
`svn co` for the 0.92 branch.
`mvn site` didn't generate the Java Doc.
Regards,
Praveen