Hmm, don't we have a performance benchmark for comparing with Bigtable?
It seems it's been a while since someone updated that...
I was just hoping someone had a rough number in mind, so that I don't
get any big surprise when I try this out on the larger row-size data.
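
For what it's worth, a back-of-envelope sketch of the worst case discussed below: if throughput were purely disk-I/O bound, ops/sec would scale inversely with value size. The 16-byte baseline rate here is a made-up placeholder for illustration, not a measured number from the slides.

```python
# Naive worst-case estimate: purely I/O-bound, so throughput scales
# inversely with value size. Baseline numbers are hypothetical.
BASELINE_VALUE_BYTES = 16
BASELINE_OPS_PER_SEC = 100_000  # placeholder, not a measured figure

def io_bound_ops_per_sec(value_bytes: int) -> float:
    """Estimated ops/sec if the workload were strictly I/O bound."""
    return BASELINE_OPS_PER_SEC * BASELINE_VALUE_BYTES / value_bytes

for size in (16, 1024, 16 * 1024):
    print(f"{size:>6} B values -> {io_bound_ops_per_sec(size):>10.2f} ops/sec")
```

Under that assumption, 1K values would cut throughput by 64x and 16K values by 1024x; as Ryan notes below, the real system is not strictly I/O bound, so actual numbers shouldn't degrade that steeply.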

Thanks!

On Wed, Jun 17, 2009 at 5:50 PM, Ryan Rawson <ryano...@gmail.com> wrote:

> And when I say 'test suite' I really mean "performance suite" -- that's
> the
> problem: the test suites we've been running test the functionality, not
> the speed, in a repeatable/scientific manner.
>
> -ryan
>
>
> On Wed, Jun 17, 2009 at 5:46 PM, Ryan Rawson <ryano...@gmail.com> wrote:
>
> > Hey,
> >
> > The interesting thing is that, due to the way things are handled
> > internally, small values are more challenging than large ones.  The
> > performance is not strictly I/O bound or limited, so you won't see
> > proportional slowdowns on larger values.
> >
> > I encourage you to download the alpha and give it a shot!  Alas, some
> > of the developers are busy developing and haven't run a test suite this
> > week.
> >
> > Thanks for your interest!
> > -ryan
> >
> >
> >
> > On Wed, Jun 17, 2009 at 5:36 PM, Ski Gh3 <ski...@gmail.com> wrote:
> >
> >> In the NOSQL meetup slides the insert and read numbers are really
> >> good, but the test is on a single column of only 16 bytes.
> >> I wonder how the numbers would be affected if the row grows to 1K
> >> bytes, or even 16K bytes?
> >>
> >> If the numbers are disk I/O bound, then do we almost have to scale the
> >> numbers by 64 or 1024?
> >>
> >> Has anyone done any other tests on this?
> >>
> >> Thanks!
> >>
> >
> >
>
