That's really cool~ Thanks for the info, Ryan!!! Cheers, Ski Gh
On Wed, Jun 17, 2009 at 5:59 PM, Ryan Rawson <ryano...@gmail.com> wrote:
> From the talk given at Hadoop Summit:
>
> Fat Table: 1000 rows with 10 columns, 1 MB values
> Sequential insert – 68 seconds (68 ms/row)
> Random reads – 56.92 ms/row (average)
> Full scan – 35 seconds (3.53 seconds/100 rows, 35 ms/row)
>
> So for 1 MB values, we are getting a value in 56 ms. Scans run at
> 35 ms/row vs 0.01 ms/row for small values.
>
> So you can extrapolate a tad; I don't think you'll be disappointed :-)
>
> -ryan
>
> On Wed, Jun 17, 2009 at 5:55 PM, Ski Gh3 <ski...@gmail.com> wrote:
>
> > Hmmm, don't we have a performance benchmark for comparing with Bigtable?
> > It seems a while since someone updated that...
> > I was just hoping that someone has a rough number in mind, so that I don't
> > get any big surprise when I try this out on the larger row-size data.
> >
> > Thanks!
> >
> > On Wed, Jun 17, 2009 at 5:50 PM, Ryan Rawson <ryano...@gmail.com> wrote:
> >
> > > And when I say "test suite" I really mean "performance suite" -- that's
> > > the problem: the test suites we've been running test the functionality,
> > > not the speed in a repeatable/scientific manner.
> > >
> > > -ryan
> > >
> > > On Wed, Jun 17, 2009 at 5:46 PM, Ryan Rawson <ryano...@gmail.com> wrote:
> > >
> > > > Hey,
> > > >
> > > > The interesting thing is that, due to the way things are handled
> > > > internally, small values are more challenging than large ones. The
> > > > performance is not strictly I/O bound or limited, and you won't see
> > > > corresponding slowdowns on larger values.
> > > >
> > > > I encourage you to download the alpha and give it a shot! Alas, some
> > > > of the developers are busy developing and haven't run a test suite
> > > > this week.
> > > >
> > > > Thanks for your interest!
> > > > -ryan
> > > >
> > > > On Wed, Jun 17, 2009 at 5:36 PM, Ski Gh3 <ski...@gmail.com> wrote:
> > > >
> > > > > In the NOSQL meetup slides the inserts and reads are really good,
> > > > > but the test is on a single column of only 16 bytes.
> > > > > I wonder how the numbers would be affected if the row grows to
> > > > > 1 KB, or even 16 KB?
> > > > >
> > > > > If the numbers are disk-I/O bound, then we almost have to multiply
> > > > > the numbers by 64 or 1024?
> > > > >
> > > > > Has anyone done any other tests on this?
> > > > >
> > > > > Thanks!
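[Editor's note: to make the thread's back-of-envelope extrapolation concrete, here is a small sketch. Only the quoted benchmark figures (68 s inserts, 35 s scans for 1000 rows, 0.01 ms/row for small values) come from the thread; the helper function and variable names are hypothetical.]

```python
def per_row_ms(total_seconds, rows):
    """Convert a total wall-clock time into a per-row latency in ms."""
    return total_seconds * 1000.0 / rows

# Sequential insert: 68 seconds for 1000 rows -> 68 ms/row.
insert_ms = per_row_ms(68, 1000)

# Full scan: 35 seconds for 1000 rows -> 35 ms/row.
scan_ms = per_row_ms(35, 1000)

# A naive "strictly IO-bound" extrapolation from 16-byte values to
# 1 MB values would multiply per-row cost by 1 MB / 16 B = 65536.
naive_factor = (1 << 20) // 16

# The measured ratio (35 ms vs 0.01 ms per scanned row) is only ~3500x,
# which is Ryan's point: the workload is not strictly IO-limited.
measured_factor = scan_ms / 0.01

print(insert_ms, scan_ms, naive_factor, round(measured_factor))
```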