Nothing's going to be "free" automatically. I was playing with SQLite over the last couple of days, and by turning off journaling and filesystem synchronization (because I was doing one-time writes to build up indexes) the run went from over 5 minutes down to 40 seconds.
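In case it helps, here's a minimal sketch of that kind of one-time bulk load. The table name and data are made up for illustration; the point is the two PRAGMAs, which are safe here only because a crash during the build just means rerunning it from scratch:

```python
import os
import sqlite3
import tempfile

# Hypothetical one-time index build. Disabling the rollback journal and
# fsync trades crash safety for speed -- fine for a rebuildable index.
db_path = os.path.join(tempfile.mkdtemp(), "index_build.db")
conn = sqlite3.connect(db_path)

conn.execute("PRAGMA journal_mode = OFF")  # no rollback journal
conn.execute("PRAGMA synchronous = OFF")   # no fsync after each write

conn.execute("CREATE TABLE word_index (word TEXT, doc_id INTEGER)")
rows = [("couch", 1), ("erlang", 1), ("mnesia", 2)]
conn.executemany("INSERT INTO word_index VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM word_index").fetchone()[0]
conn.close()
print(count)
```

Batching many INSERTs inside one transaction (executemany plus a single commit) matters just as much as the PRAGMAs, for the same reason as the per-request flushing discussed below.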
I guess what I'm saying is that performance and optimization depend on what you are trying to do. I do think that bulk requests should give you a big boost. I would also imagine the size of each document affects how many inserts/sec you get.

K.

On Wed, Feb 25, 2009 at 8:30 PM, Scott Zhang <[email protected]> wrote:
> Hi. Thanks for replying.
> But what is a database for if it is slow? Every database has clustering
> features to improve speed and capacity (don't mention "Access" here).
>
> I was expecting CouchDB to be as fast as SQL Server or MySQL. At least I
> know mnesia is much faster than SQL Server, though mnesia always throws a
> harmless "overload" message.
>
> I will try bulk insert now. But to be fair, I was also inserting into SQL
> Server one row per insert.
>
> Regards.
>
> On Thu, Feb 26, 2009 at 12:18 PM, Jens Alfke <[email protected]> wrote:
>>
>> On Feb 25, 2009, at 8:02 PM, Scott Zhang wrote:
>>
>>> But the performance is as bad as I can imagine. After running for
>>> several minutes, I had only inserted 120K records. I saw the speed was
>>> ~20 records per second.
>>
>> Use the bulk-insert API to improve speed. The way you're doing it, every
>> record being added is a separate transaction, which requires a separate
>> HTTP request and flushing the file.
>>
>> (I'm a CouchDB newbie, but I don't think the point of CouchDB is speed.
>> What's exciting about it is the flexibility and the ability to build
>> distributed systems. If you're looking for a traditional database with
>> speed, have you tried MySQL?)
>>
>> —Jens
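For concreteness, here's roughly what the bulk-insert suggestion looks like: instead of one HTTP POST per document, CouchDB's _bulk_docs endpoint takes many documents in a single JSON request. The database name, document contents, and count below are hypothetical; only the payload shape is what the API expects:

```python
import json

# Hypothetical batch of 1000 documents for a single _bulk_docs request.
docs = [{"_id": str(i), "value": "record %d" % i} for i in range(1000)]
payload = json.dumps({"docs": docs})

# The actual request (not executed here) would be something like:
#   POST http://localhost:5984/mydb/_bulk_docs
#   Content-Type: application/json
#   <payload>
print(len(docs))
```

One request carrying 1000 documents amortizes the HTTP round-trip and the file flush across the whole batch, which is exactly the per-record cost Jens describes.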
