Hey folks,
I have a question (possibly naive) about a scan's performance.
My scan is taking about 6 seconds to do the following:
470 rows extracted
total size of all rows together is about 1.4 megs
I'm using an InclusiveStopRowFilter to limit the rows being
extracted by the scanner. The
Have a look at htable.setScannerCaching, and please, please upgrade to 0.20
posthaste. The only answer to 0.19 perf problems is to upgrade to 0.20.
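The win from scanner caching is fewer RPC round trips: by default the scanner fetches one row per call to the region server, while htable.setScannerCaching(n) fetches n rows per trip. A back-of-the-envelope sketch for the 470-row scan above (the counting is exact, but any latency you attach per trip is your own assumption):

```python
def round_trips(rows, caching):
    """Number of scanner RPCs needed to fetch `rows` rows, `caching` rows per trip."""
    return -(-rows // caching)  # ceiling division

rows = 470
print(round_trips(rows, 1))    # caching disabled: one RPC per row -> 470 trips
print(round_trips(rows, 100))  # setScannerCaching(100) -> 5 trips
```

Even at a modest per-trip latency, collapsing 470 round trips into 5 is where most of the 6 seconds is likely to go away.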
On Aug 3, 2009 12:02 AM, Kyle Oba kyle...@gmail.com wrote:
> Hey folks,
> I have a question (possibly naive) about a scan's performance.
> My scan is taking
I have changed hbase-site.xml as below, and it now works (in Local mode). Is it
something about Hadoop, maybe?
<configuration>
  <property>
    <name>hbase.master</name>
    <value>localhost:6</value>
    <description>The directory shared by region servers.
    </description>
  </property>
  <property>
Hi all,
I have HBase version 0.19.3 ... On what versions of Hadoop, apart from 0.19.x,
can I run this HBase version?
Is it 0.20.x or 0.18.x?
Thanks in advance
I believe only that Hadoop version.
Cheers,
Tim
According to the docs
http://hadoop.apache.org/hbase/docs/r0.19.3/api/overview-summary.html#overview_description
Requirements
- Java 1.6.x, preferably from Sun.
- Hadoop 0.19.x. This version of HBase will only run on this version of Hadoop.
http://people.apache.org/~stack/hbase-0.20.0-candidate-1/
(release candidate 1)
2009/8/3 Onur AKTAS onur.ak...@live.com
Some people talk about HBase 0.20 (improved performance, etc.). Is it
available for download? If so, where can I download it?
Thanks.
Date: Mon, 3 Aug 2009 13:54:29
If this is all of your hbase-site.xml, you're not using Hadoop at all.
Please review the Pseudo-distributed documentation for HBase.
J-D
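For reference, a minimal pseudo-distributed hbase-site.xml would look something like the sketch below (the HDFS port and the /hbase path are the common defaults; the value must match fs.default.name in your Hadoop core-site.xml):

```xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <!-- Points HBase at HDFS; without this, HBase writes to the local filesystem -->
    <value>hdfs://localhost:9000/hbase</value>
  </property>
</configuration>
```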
2009/8/3 Onur AKTAS onur.ak...@live.com:
> I have changed hbase-site.xml as below, and it now works (in Local mode). Is it
> something about Hadoop, maybe?
No, this is what I have after the change.
I was using the configuration below, but it was not working. It was giving an exception like
INFO: Retrying connect to server: localhost/127.0.0.1:6. Already tried
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
  <description>The
Sorry, I was trying with Hadoop 0.19.2 and HBase 0.19.3 (I wrote Hadoop
0.19.3 and HBase 0.19.2 by mistake).
Anyway, now I am trying with Hadoop 0.20.0 and HBase 0.20.0.
Here are my Hadoop configuration files.
core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
6 sec isn't crazy with 0.19. If you really want to research it, have a
look at where the time is spent, creating scanner or actually doing
the scanning. I think it's the former. That being said, upgrading to
0.20 is a much quicker solution. The scanner has been optimized in the new
version.
On Mon,
I am evaluating the performance of stargate (which, btw, is a great
contrib to hbase, thanks!). The evaluation program is mostly a simple
modification of the existing PerformanceEvaluation program: just replace the
java client with the stargate client and get values as protobuf.
All of the software
Looks like crossed lines.
In Hadoop 0.20.0, there are the mapred package and the mapreduce package.
The latter has the new lump-sum Context object to which you go for all things.
HBase has something similar. The new mapreduce package in HBase 0.20.0 is the
old mapred redone to fit the new Hadoop APIs.
On Mon, Aug 3, 2009 at 1:58 PM, Xinan Wu wuxi...@gmail.com wrote:
> 6 sec isn't crazy with 0.19. If you really want to research it, have a
> look at where the time is spent, creating scanner or actually doing
> the scanning. I think it's the former.
You are probably right that it is the former. In
Hi all,
A quick reminder that Scale Unlimited will run a 2 day Hadoop BootCamp
in Berlin on August 27th and 28th.
This 2 day course is for managers and developers who want to quickly
become experienced with Hadoop and related technologies.
The BootCamp provides training in MapReduce
The implementation in the new package is different from the old one. So, if
you want to use it in the same way as you used the old one, you'll
have to stick with the mapred package until you upgrade your code
to the new implementation.
On Mon, Aug 3, 2009 at 3:45 PM, Lucas
Hi,
Thanks for the testing and performance report!
You said you used the stargate Client package? It is pretty basic, written
mainly for convenience when writing test cases in the test suite.
Regarding Stargate quality in general, this is an alpha release. It can survive
torture testing with
Andrew,
Thanks for the reply. I am considering using stargate in one of my projects;
the design/impl is quite elegant. In your opinion, is there any hard limitation
preventing stargate from achieving the same throughput as the hbase java client?
Or is it just a matter of fine-tuning? I am not
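For readers wondering what the stargate client calls actually look like on the wire: since stargate is a REST gateway, a cell read is an ordinary HTTP GET. A small sketch of assembling such a request (the /table/row/column path scheme and the protobuf Accept type follow stargate conventions as I understand them; the host, port, table, and column names are made up for illustration):

```python
def stargate_cell_request(base_url, table, row, column):
    """Build the URL and headers for fetching one cell via stargate (sketch)."""
    url = "%s/%s/%s/%s" % (base_url.rstrip("/"), table, row, column)
    # Ask stargate to serialize the response as protobuf rather than XML/JSON
    headers = {"Accept": "application/x-protobuf"}
    return url, headers

url, headers = stargate_cell_request(
    "http://localhost:8080", "mytable", "row1", "info:col1")
print(url)  # http://localhost:8080/mytable/row1/info:col1
```

Per-request HTTP overhead (connection handling, headers, serialization) is the main structural cost relative to the java client, which is why batching and connection reuse matter so much when benchmarking it.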