Hi,
I'd like to do some benchmarks of HBase but I don't know what tool I
could use. I started to write some code myself, but I guess there are
easier options.
I've taken a look at JMeter, but I'd rather drive the benchmark
directly from Java. JMeter looks great, but I don't know if it fits
well in this scenario.
You can use YCSB for this purpose. See here:
https://github.com/brianfrankcooper/YCSB/wiki/Getting-Started
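Getting started with YCSB is a two-phase cycle: load a dataset, then run a workload against it. A minimal sketch, assuming a default YCSB checkout with the HBase binding, a pre-created `usertable` table, and a `cf` column family (all three names are assumptions, adjust for your cluster). The commands are built into variables and only printed here so the sequence is clear; eval them to actually run the benchmark:

```shell
# Minimal YCSB load/run sketch against HBase. The table name 'usertable',
# column family 'cf', and the relative bin/ path are assumptions; the
# target table must already exist (hbase shell: create 'usertable', 'cf').
# Commands are built into variables and printed; eval them to run for real.

# Phase 1: load the initial dataset.
CMD_LOAD="bin/ycsb load hbase -P workloads/workloada \
 -p table=usertable -p columnfamily=cf -p recordcount=1000000"

# Phase 2: run the workload and read the throughput/latency report.
CMD_RUN="bin/ycsb run hbase -P workloads/workloada \
 -p table=usertable -p columnfamily=cf \
 -p operationcount=1000000 -threads 16"

echo "$CMD_LOAD"
echo "$CMD_RUN"
```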
-Nishanth
On Wed, Jan 28, 2015 at 1:37 PM, Guillermo Ortiz
wrote:
> Hi,
>
> I'd like to do some benchmarks fo HBase but I don't know what tool
> could use. I started to make some code but I
Guillermo:
If you use hbase 0.98.x, please consider Andrew's ycsb repo:
https://github.com/apurtell/ycsb/tree/new_hbase_client
Cheers
On Wed, Jan 28, 2015 at 12:41 PM, Nishanth S
wrote:
> You can use ycsb for this purpose.See here
>
> https://github.com/brianfrankcooper/YCSB/wiki/Getting-Start
I was checking that page. Do you know if there's another possibility?
The Cassandra binding was last updated two years ago, and I'd like to
compare both of them with roughly the same tool/code.
2015-01-28 22:10 GMT+01:00 Ted Yu :
> Guillermo:
> If you use hbase 0.98.x, please consider Andrew's ycsb repo:
>
Maybe ask on the Cassandra mailing list which benchmark tool they use?
Cheers
On Wed, Jan 28, 2015 at 1:23 PM, Guillermo Ortiz
wrote:
> I was checking that web, do you know if there's another possibility
> since last updated for Cassandra was two years ago and I'd like to
> compare bothof them w
Are there any published results from that benchmark to compare against?
I'm executing the different workloads, and for example with 100% reads
on a table with 10 million records I only get a throughput of about
2,000 operations/sec. I expected much better performance, but I could
be wrong. I'd like to know if that's a normal figure.
What's the value for hfile.block.cache.size ?
By default it is 40%. You may want to increase it if you're using the
default.
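For reference, that setting lives in hbase-site.xml on the regionservers; a sketch (the 0.5 value is only an example, not a recommendation, and the block cache plus the memstore limits must still leave room for the rest of the heap):

```xml
<!-- hbase-site.xml: fraction of regionserver heap given to the HFile
     block cache. 0.4 is the default; 0.5 here is only an example. -->
<property>
  <name>hfile.block.cache.size</name>
  <value>0.5</value>
</property>
```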
Andrew published some ycsb results:
http://people.apache.org/~apurtell/results-ycsb-0.98.8/ycsb-0.98.0-vs-0.98.8.pdf
However, I couldn't access the above just now.
Cheers
Yes, I'm using 40%. I can't access that data either.
I don't know how YCSB executes the reads, whether they are random, and
whether they could take advantage of the cache.
Do you think that's acceptable performance?
2015-01-29 16:26 GMT+01:00 Ted Yu :
> What's the value for hfile.block.cache.size ?
>
>
How many instances of YCSB do you run, and how many threads do you use
per instance? I guess these ops are per instance, and you should get
similar numbers if you run more instances. In short, try running more
workload instances...
-Nishanth
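The multiple-instance suggestion can be sketched as a shell loop. The instance count, paths, and log names below are assumptions, and the commands are only printed here; to launch them for real, eval each line (the trailing & puts each client in the background) and then wait for them all:

```shell
# Sketch of launching several YCSB clients in parallel, per the advice
# above. Instance count, paths, and file names are assumptions. The
# commands are built and printed; eval each one to run for real.
INSTANCES=10
LAUNCHED=0
i=1
while [ "$i" -le "$INSTANCES" ]; do
  # Each instance writes its own log so the per-instance
  # "[OVERALL], Throughput(ops/sec)" lines can be summed afterwards.
  CMD="bin/ycsb run hbase -P workloads/workloada \
 -p columnfamily=cf -threads 32 > ycsb-$i.log 2>&1 &"
  echo "$CMD"
  LAUNCHED=$((LAUNCHED+1))
  i=$((i+1))
done
# After launching for real: wait; grep 'Throughput' ycsb-*.log
```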
On Thu, Jan 29, 2015 at 8:49 AM, Guillermo Ortiz
wrote:
There's an option when you execute YCSB to set how many client
threads you want to use. I tried with 1/8/16/32. Those results are
with 16; the improvement from 1 to 8 is pretty big, not so much from
16 to 32. I only use one YCSB instance. Could that be so important?
-threads : the number of client threads. By default it is 1.
I have come back to the benchmark. I executed this command:
ycsb run hbase -P workloada -p columnfamily=cf -p
operationcount=10 -threads 32
And I got a throughput of 2,000 ops/sec.
What I did later was execute ten of those commands in parallel, and I
got about 18,000 ops/sec in total. I don't get why.
You are hitting HBase harder now, which is important for benchmarking.
If there is no data loss, it means your HBase cluster is good enough to
handle the load. You are simply making more use of the cores on the
machine from which you launch the YCSB processes. Write your own
workload depending on the record sizes, format
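A custom workload is just a properties file passed via -P. A hypothetical sketch: every value below (operation mix, field sizes, key distribution) is an illustrative assumption, while the keys themselves are the standard CoreWorkload knobs. The requestdistribution key also answers the earlier question about whether reads are random: zipfian skews toward hot keys (cache-friendly), uniform spreads reads evenly.

```properties
# my_workload.properties (hypothetical values): record shape and op mix.
workload=com.yahoo.ycsb.workloads.CoreWorkload
recordcount=10000000
operationcount=10000000
# 90% reads / 10% updates
readproportion=0.9
updateproportion=0.1
# 10 fields of 100 bytes per record
fieldcount=10
fieldlength=100
# key access pattern: zipfian (skewed) or uniform (fully random)
requestdistribution=zipfian
```

Run it the same way as the bundled workloads, e.g. `ycsb run hbase -P my_workload.properties -p columnfamily=cf`.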