Re: RIP Temp Views?
2009/7/17 Paul Davis :
> I volunteer to delete lots of code in the internals if someone can give
> us Futon awesomeness. Like Chris says, this won't happen till Futon
> gets upgrades and my JavaScript fu is embarrassingly weak so patches
> *please*.
>
> Paul Davis

I can do this. The only question I'm currently asking myself is how to make sure views that are "temporary but not temp_view" are deleted if the browser is closed, or something like that. The only way I see is to remove all temp views each time we go to the view interface, but maybe it could be smarter?

- benoît
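One way Futon could emulate temp views is to save the editor's scratch view under a reserved design-doc prefix and sweep that prefix whenever the view interface is opened. That answers the "deleted if the browser is closed" question above at the cost of cleanup happening on the next visit rather than on close. A minimal sketch, not real Futon code: the `futon:scratch:` prefix, the function names, and the in-memory `db` object are all hypothetical.

```javascript
// Hypothetical prefix marking design docs that exist only as editor scratch space.
const SCRATCH_PREFIX = "_design/futon:scratch:";

// db models the small slice of the CouchDB API we need: a map of _id -> doc.
function saveScratchView(db, name, mapFun) {
  const id = SCRATCH_PREFIX + name;
  db.docs[id] = { _id: id, views: { scratch: { map: mapFun } } };
  return id;
}

// Called every time the view interface is entered: delete all scratch views,
// leaving real design documents untouched.
function cleanupScratchViews(db) {
  for (const id of Object.keys(db.docs)) {
    if (id.startsWith(SCRATCH_PREFIX)) delete db.docs[id];
  }
}

// Example: a scratch view survives only until the editor is (re)opened.
const db = { docs: { "_design/app": { _id: "_design/app" } } };
saveScratchView(db, "by_type", "function(doc) { emit(doc.type, 1); }");
cleanupScratchViews(db);
```

A smarter variant could stamp scratch docs with a timestamp and only sweep ones older than, say, a day, so two open browser tabs don't delete each other's work.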
Re: RIP Temp Views?
On Fri, Jul 17, 2009 at 2:22 PM, Chris Anderson wrote:
> On Fri, Jul 17, 2009 at 11:11 AM, Paul Davis wrote:
>> On Fri, Jul 17, 2009 at 2:03 PM, Will Hartung wrote:
>>> I like the idea of getting rid of them. If they can be "emulated"
>>> within Futon, then so much the better.
>>>
>>> The only thing I could suggest is simply providing tooling to make it
>>> easy for folks to create subsets of existing DBs. That way it's
>>> straightforward to encourage development with "permanent" views, but
>>> not to do it on production DBs, or production volumes.
>>>
>>> The curse of the temp_view is that they persist and "work just like" a
>>> permanent view, so on the surface folks don't see the difference (and
>>> in development, there is, almost, no difference).
>>>
>>> As an aside, however, emulating temp views becomes problematic if/when
>>> folks wish to use a different view server than JS. Clearly, it's not
>>> practical to emulate an Erlang view server within a JS couch
>>> implementation for development.
>>>
>>> So, that seems to me to point even more to focusing on tooling to make
>>> it easy and routine to create "development" databases that people can
>>> mold and form. Or to make, say, implicit view versioning. (Create a
>>> view, create it again and the old one is versioned away as view_1 or
>>> whatever.)
>>
>> This already happens behind the scenes now. Chris Anderson just landed
>> a patch to name index files after views which allows for zero downtime
>> upgrades among other things.
>>
>>> This ends up acting "just like" temp views, but now there's a minor
>>> Garbage Collection phase to go through and remove the old views.
>>>
>>> And, again, when this is done on a "throw away" instance of the DB,
>>> it's even less of an issue.
>>>
>>> Regards,
>>>
>>> Will Hartung
>>
>> In general, this is how I'd shoot for doing it. Give tools to make
>> slicing a DB easy as well as give Futon a nice interface to
>> creating/editing/managing design documents.
>
> I don't hear anyone protesting. I think as long as we keep the ability
> to play with views easily in Futon (and add some replication helpers
> for "random" subsets) no one will miss temp views.
>
> It'd be nice to have them stripped out by the end of the month, but
> that's gonna be a fair amount of Futon work. Anyone feel up to it?
>
> Chris
>
> --
> Chris Anderson
> http://jchrisa.net
> http://couch.io

I volunteer to delete lots of code in the internals if someone can give
us Futon awesomeness. Like Chris says, this won't happen till Futon
gets upgrades and my JavaScript fu is embarrassingly weak so patches
*please*.

Paul Davis
Re: RIP Temp Views?
On Fri, Jul 17, 2009 at 11:11 AM, Paul Davis wrote:
> On Fri, Jul 17, 2009 at 2:03 PM, Will Hartung wrote:
>> I like the idea of getting rid of them. If they can be "emulated"
>> within Futon, then so much the better.
>>
>> The only thing I could suggest is simply providing tooling to make it
>> easy for folks to create subsets of existing DBs. That way it's
>> straightforward to encourage development with "permanent" views, but
>> not to do it on production DBs, or production volumes.
>>
>> The curse of the temp_view is that they persist and "work just like" a
>> permanent view, so on the surface folks don't see the difference (and
>> in development, there is, almost, no difference).
>>
>> As an aside, however, emulating temp views becomes problematic if/when
>> folks wish to use a different view server than JS. Clearly, it's not
>> practical to emulate an Erlang view server within a JS couch
>> implementation for development.
>>
>> So, that seems to me to point even more to focusing on tooling to make
>> it easy and routine to create "development" databases that people can
>> mold and form. Or to make, say, implicit view versioning. (Create a
>> view, create it again and the old one is versioned away as view_1 or
>> whatever.)
>
> This already happens behind the scenes now. Chris Anderson just landed
> a patch to name index files after views which allows for zero downtime
> upgrades among other things.
>
>> This ends up acting "just like" temp views, but now there's a minor
>> Garbage Collection phase to go through and remove the old views.
>>
>> And, again, when this is done on a "throw away" instance of the DB,
>> it's even less of an issue.
>>
>> Regards,
>>
>> Will Hartung
>
> In general, this is how I'd shoot for doing it. Give tools to make
> slicing a DB easy as well as give Futon a nice interface to
> creating/editing/managing design documents.

I don't hear anyone protesting. I think as long as we keep the ability
to play with views easily in Futon (and add some replication helpers
for "random" subsets) no one will miss temp views.

It'd be nice to have them stripped out by the end of the month, but
that's gonna be a fair amount of Futon work. Anyone feel up to it?

Chris

--
Chris Anderson
http://jchrisa.net
http://couch.io
Re: RIP Temp Views?
On Fri, Jul 17, 2009 at 2:03 PM, Will Hartung wrote:
> I like the idea of getting rid of them. If they can be "emulated"
> within Futon, then so much the better.
>
> The only thing I could suggest is simply providing tooling to make it
> easy for folks to create subsets of existing DBs. That way it's
> straightforward to encourage development with "permanent" views, but
> not to do it on production DBs, or production volumes.
>
> The curse of the temp_view is that they persist and "work just like" a
> permanent view, so on the surface folks don't see the difference (and
> in development, there is, almost, no difference).
>
> As an aside, however, emulating temp views becomes problematic if/when
> folks wish to use a different view server than JS. Clearly, it's not
> practical to emulate an Erlang view server within a JS couch
> implementation for development.
>
> So, that seems to me to point even more to focusing on tooling to make
> it easy and routine to create "development" databases that people can
> mold and form. Or to make, say, implicit view versioning. (Create a
> view, create it again and the old one is versioned away as view_1 or
> whatever.)

This already happens behind the scenes now. Chris Anderson just landed
a patch to name index files after views which allows for zero downtime
upgrades among other things.

> This ends up acting "just like" temp views, but now there's a minor
> Garbage Collection phase to go through and remove the old views.
>
> And, again, when this is done on a "throw away" instance of the DB,
> it's even less of an issue.
>
> Regards,
>
> Will Hartung

In general, this is how I'd shoot for doing it. Give tools to make
slicing a DB easy as well as give Futon a nice interface to
creating/editing/managing design documents.

Paul Davis
Re: RIP Temp Views?
I like the idea of getting rid of them. If they can be "emulated" within Futon, then so much the better.

The only thing I could suggest is simply providing tooling to make it easy for folks to create subsets of existing DBs. That way it's straightforward to encourage development with "permanent" views, but not to do it on production DBs, or production volumes.

The curse of the temp_view is that they persist and "work just like" a permanent view, so on the surface folks don't see the difference (and in development, there is, almost, no difference).

As an aside, however, emulating temp views becomes problematic if/when folks wish to use a different view server than JS. Clearly, it's not practical to emulate an Erlang view server within a JS couch implementation for development.

So, that seems to me to point even more to focusing on tooling to make it easy and routine to create "development" databases that people can mold and form. Or to make, say, implicit view versioning. (Create a view, create it again and the old one is versioned away as view_1 or whatever.)

This ends up acting "just like" temp views, but now there's a minor Garbage Collection phase to go through and remove the old views.

And, again, when this is done on a "throw away" instance of the DB, it's even less of an issue.

Regards,

Will Hartung
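The "tooling to create subsets of existing DBs" idea above could be as small as reading a sample of rows from `GET /source/_all_docs?include_docs=true` and bulk-loading them into a scratch development database. A sketch under that assumption: only the `_all_docs` and `_bulk_docs` endpoints are real API, while the function name and row fixtures are illustrative.

```javascript
// Build a _bulk_docs payload from the first `limit` rows of an
// _all_docs?include_docs=true response, stripping _rev so the docs
// insert cleanly into a fresh development database.
function buildDevSlice(allDocsRows, limit) {
  return {
    docs: allDocsRows.slice(0, limit).map(({ doc }) => {
      const { _rev, ...rest } = doc;
      return rest;
    }),
  };
}

// Example rows shaped like GET /sourcedb/_all_docs?include_docs=true output.
const rows = [
  { id: "a", doc: { _id: "a", _rev: "1-x", type: "post" } },
  { id: "b", doc: { _id: "b", _rev: "1-y", type: "comment" } },
  { id: "c", doc: { _id: "c", _rev: "1-z", type: "post" } },
];

// Body ready for POST /devdb/_bulk_docs.
const payload = buildDevSlice(rows, 2);
```

A fancier version could sample random rows or filter by document type, but the shape of the tooling stays the same: read a slice, strip revisions, bulk-save.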
Re: [VOTE] Apache CouchDB 0.9.1 release, third round
On Fri, Jul 17, 2009 at 8:47 AM, Randall Leeds wrote:
> On Fri, Jul 17, 2009 at 11:45, Randall Leeds wrote:
>> On Thu, Jul 16, 2009 at 17:13, Noah Slater wrote:
>>> Hello,
>>>
>>> I would like to call a vote for the Apache CouchDB 0.9.1 release,
>>> third round.

+1, all tests passing here.

Mac OS X 10.5.6
Erlang 5.6.5 [source] [smp:2] [async-threads:0] [kernel-poll:false]

--
Chris Anderson
http://jchrisa.net
http://couch.io
0.9.1 Vote
(just subscribed, so couldn't "reply" to original message...)

+1 Works for me.

FF 3.0.11
Erlang (BEAM) emulator version 5.6.3 [source] [smp:4] [async-threads:0] [kernel-poll:false]
Linux willh-linux 2.6.27-14-generic #1 SMP Tue Jun 30 19:57:39 UTC 2009 i686 GNU/Linux
Ubuntu 8.10

Regards,

Will Hartung
Re: There is no spoon.
On Fri, Jul 17, 2009 at 11:58 AM, Chris Anderson wrote:
> On Fri, Jul 17, 2009 at 12:35 AM, Paul Davis wrote:
>> Hiya,
>>
>> I had me an idea the other day I got around to trying. We've been
>> going over how to make JSON parsing über fast between Erlang and the
>> View servers. Instead of making JSON parsing faster I decided to just
>> drop it completely. I wrote enough code in couch_js.c tonight to get
>> the basics of converting the ErlJSON -> Spidermonkey objects and back.
>> Quite a few of the pertinent tests are passing. There was an issue
>> with object iteration that prevented view collation from working
>> correctly. And the show/list tests are broken because I didn't add XML
>> serialization. Either way it was enough for me to collect some numbers
>> with the same script I used on my blog a couple weeks ago.
>>
>> The huge ass caveat on the tail end for Patched at 10K 8KiB docs is
>> that this is best case scenario. I was just beefing up the document
>> size by adding a large string on them. In the conversion process this
>> ends up being a fairly quick pass using EncodeString and DecodeString.
>>
>> Another thing to notice is that once compiled with +native the numbers
>> for small docs don't change too drastically.
>> >> And here are numbers: >> >> Straight up trunk: 10K tiny docs >> >> >> Inserting: 0.955830 >> Map only: 4.296859 >> With reduce: 4.014233 >> With erlang reduce: 3.199325 >> >> Inserting: 0.970745 >> Map only: 3.961110 >> With reduce: 4.550082 >> With erlang reduce: 3.493316 >> >> Inserting: 0.992892 >> Map only: 4.747793 >> With reduce: 4.552446 >> With erlang reduce: 3.681820 >> >> >> Straight up trunk: 1OK 4KiB Documents >> - >> >> Inserting: 5.895689 >> Map only: 11.716073 >> With reduce: 12.127348 >> With erlang reduce: 11.069352 >> >> Inserting: 6.221656 >> Map only: 12.074525 >> With reduce: 11.500115 >> With erlang reduce: 10.680610 >> >> Inserting: 5.974915 >> Map only: 11.240969 >> With reduce: 11.620035 >> With erlang reduce: 10.458795 >> >> >> Straight up trunk: 10K 8KiB Documents >> - >> >> Inserting: 9.533340 >> Map only: 16.273873 >> With reduce: 16.647050 >> With erlang reduce: 14.529038 >> >> Inserting: 9.828476 >> Map only: 15.772620 >> With reduce: 15.707862 >> With erlang reduce: 14.577865 >> >> Inserting: 9.598872 >> Map only: 15.251671 >> With reduce: 15.930784 >> With erlang reduce: 14.445052 >> >> Trunk +native 10K Tiny docs >> --- >> >> Inserting: 0.953937 >> Map only: 2.524961 >> With reduce: 2.411511 >> With erlang reduce: 1.541173 >> >> Inserting: 0.963175 >> Map only: 2.486752 >> With reduce: 2.354808 >> With erlang reduce: 1.534005 >> >> Inserting: 0.949138 >> Map only: 2.429267 >> With reduce: 2.385016 >> With erlang reduce: 1.525428 >> >> >> Trunk +native 10K 4KiB docs >> --- >> >> Inserting: 3.952355 >> Map only: 10.106112 >> With reduce: 9.687787 >> With erlang reduce: 8.781025 >> >> Inserting: 3.968877 >> Map only: 9.552732 >> With reduce: 9.626942 >> With erlang reduce: 8.537417 >> >> Inserting: 4.359648 >> Map only: 9.472417 >> With reduce: 9.719609 >> With erlang reduce: 8.771725 >> >> >> Trunk +native 10K 8KiB docs >> --- >> >> Inserting: 7.046171 >> Map only: 12.111946 >> With reduce: 11.566371 >> With erlang reduce: 
10.571792 >> >> Inserting: 7.183114 >> Map only: 12.177807 >> With reduce: 11.619149 >> With erlang reduce: 10.461091 >> >> Inserting: 6.867450 >> Map only: 11.358312 >> With reduce: 11.420452 >> With erlang reduce: 10.452706 >> >> >> Patched 10K Tiny docs >> - >> >> Inserting: 0.954482 >> Map only: 2.339038 >> With reduce: 2.311544 >> With erlang reduce: 1.513258 >> >> Inserting: 0.942735 >> Map only: 2.543295 >> With reduce: 2.522470 >> With erlang reduce: 1.514119 >> >> Inserting: 0.961381 >> Map only: 2.372250 >> With reduce: 2.336503 >> With erlang reduce: 1.558217 >> >> >> Patched 10K 4KiB docs >> - >> >> Inserting: 5.933259 >> Map only: 5.484083 >> With reduce: 5.693180 >> With erlang reduce: 4.502828 >> >> Inserting: 5.980323 >> Map only: 5.251158 >> With reduce: 5.290837 >> With erlang reduce: 4.530348 >> >> Inserting: 6.067070 >> Map only: 5.501945 >> With reduce: 5.314363 >> With erlang reduce: 4.409588 >> >> Patched 10K 8KiB docs >> - >> >> Inserting: 7.096909 >> Map only: 5.293864 >> With reduce: 5.254415 >> With erlang reduce: 4.437001 >> >> Inserting: 6.847729 >> Map only: 5.191201 >> With reduce: 5.161696 >> With erlang reduce: 4.256955 >> >> Inserting: 7.168672 >> Map only: 5.294789 >> With reduce: 5.195616 >> With erlang reduce: 4.323075 >> >> Patched +native 10K Tiny docs >> - >> >> Inserting: 0.945826 >> Map only: 2.451693 >> With reduce: 2.383522 >> With erlang reduce: 1.578508 >> >> Inserting: 0.972655 >> Map only: 2.504448 >> With reduce: 2.34359
[jira] Closed: (COUCHDB-285) tail append headers
[ https://issues.apache.org/jira/browse/COUCHDB-285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Damien Katz closed COUCHDB-285.
-------------------------------

    Resolution: Fixed
 Fix Version/s:     (was: 1.0)
                0.10
      Assignee: Damien Katz

> tail append headers
> -------------------
>
>                 Key: COUCHDB-285
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-285
>             Project: CouchDB
>          Issue Type: Improvement
>          Components: Database Core
>            Reporter: Chris Anderson
>            Assignee: Damien Katz
>             Fix For: 0.10
>
> this will make .couch files resilient even when truncated (data-loss but
> still usable). also cuts down on the # of disk seeks.
>
> [3:02pm] damienkatz: the offset in a header would correspond to the number of bytes from the front of the file?
> [3:03pm] andysky joined the chat room.
> [3:03pm] jchris: yes
> [3:03pm] because my offset seems to suggest that just over a MB of the file is missing
> [3:03pm] » jchris blames not couchdb
> [3:03pm] but the streaming writes you've talked about would make this more resilient, eh?
> [3:03pm] where the header is also appended each time
> [3:04pm] there could be data lost but the db would still be usable
> [3:04pm] yes, a file truncation just gives you an earlier version of the file
> [3:05pm] now's not a good time for me to work on that, but after Amsterdam I may want to pick it up
> [3:05pm] the hardest part is finding the header again
> [3:06pm] hu? isn't the header the firs 4k?
> [3:06pm] t
> [3:06pm] it would only really change couch_file:read_header and write_header I think
> [3:06pm] jan: we're talking about moving it to the end
> [3:06pm] so it never gets overwritten
> [3:06pm] jan: this is for tail append headers
> [3:06pm] duh
> [3:06pm] futuretalk
> [3:06pm] n/m me
> [3:07pm] jchris: so one way is to sign the header regions, but you need to make it unforgeable.
> [3:08pm] basically a boundary problem...
> [3:08pm] because if a client wrote a binary that looked like it had a header, they could do bad things.
> [3:08pm] like for instance an attachment that's a .couch file :)
> [3:08pm] right
> [3:09pm] so you can salt the db file on creation with a key in the header. And use that key to sign and verify headers.
> [3:09pm] tlrobinson joined the chat room.
> [3:09pm] doesn't sound too tough
> [3:10pm] damienkatz: I looked into adding conflict inducing bulk docs in rep_security. would this work: POST /db/_bulk_docs?allow_conflicts=true could do a regular bulk save but grab the "error" responses and do a update_docs() call with the replicated_changes option for all "errors" from the first bulk save while assigning new _revs for "new" docs?
> [3:10pm] the key is crypto-random, and must stay hidden from clients.
> [3:10pm] if you have the file, you could forge headers...
> [3:10pm] but under normal operation, it sounds like not a big deal
> [3:10pm] Qaexl joined the chat room.
> [3:11pm] so we just give the db an internal secret uuid
> [3:11pm] mmalone left the chat room. (Connection reset by peer)
> [3:11pm] peritus_ joined the chat room.
> [3:11pm] I'm not sure I like this approach.
> [3:11pm] damienkatz: drawbacks?
> [3:11pm] if a client can see a file share with the db, they can attack it.
> [3:12pm] mmalone joined the chat room.
> [3:12pm] mmalone left the chat room. (Read error: 104 (Connection reset by peer))
> [3:12pm] how about this approach. every 4k, we write a NULL byte.
> [3:13pm] we always write headers at the 4k boundary
> [3:13pm] mmalone joined the chat room.
> [3:13pm] and make that byte 1
> [3:13pm] grr
> [3:13pm] did my bulk-docs proposal get through?
> [3:13pm] the attacker could still get lucky
> [3:13pm] (or got it shot down? :)
> [3:13pm] jan: sorry.
> [3:13pm] damienkatz: I couldn't read the backlog
> [3:13pm] Let me think about the conflict stuff a little bit.
> [3:13pm] sure
> [3:13pm] no backlog then
> [3:14pm] +k
> [3:14pm] jan: your paragraph is dense there -
> [3:14pm] jchris: no, this is immune from attack
> [3:14pm] because you'd write an attachment marker after the null byte for attachments?
> [3:14pm] every 4k, we just write a 0 byte, we skip that byte.
> [3:15pm] jchris: yeah, sorry, will let you finish the file stuff
> [3:15pm] no matter what, we never write anything into that byte.
> [3:15pm] wasting all these 0 bytes
> [3:15pm] a big file write will write all the surrounding bytes, but not that byte.
> [3:16pm] only 1 every 4k jan. I think we'll manage ;)
> [3:16pm] :)
> [3:16pm] when that byte is a 1 though, that means it's the start of a header.
> [3:16pm] oh, gotcha
> [3:17pm] so headers always get written on a 4k boundary.
> [3:17
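The marker-byte scheme discussed in the chat above (reserve one byte at every 4k boundary, 0 under data, 1 where a header begins) makes crash recovery a backwards scan over boundaries. A toy sketch of that recovery scan; the real work would live in couch_file's Erlang code, so the block size, marker values, and buffer-as-file model here are purely illustrative.

```javascript
// Each BLOCK-sized boundary owns one reserved byte: 0 means data continues,
// 1 means a header starts at this boundary. To recover after truncation,
// scan backwards from the end of the file over boundaries until a marker
// byte of 1 is found.
const BLOCK = 4096;

function findLatestHeader(file) {
  if (file.length === 0) return -1;
  // Highest boundary at or below the last byte of the (possibly truncated) file.
  let pos = Math.floor((file.length - 1) / BLOCK) * BLOCK;
  for (; pos >= 0; pos -= BLOCK) {
    if (file[pos] === 1) return pos; // marker byte: a header begins here
  }
  return -1; // no header found: empty or hopelessly damaged file
}

// Example "file": headers were appended at offsets 0 and 2*BLOCK.
const file = new Uint8Array(3 * BLOCK + 100); // reserved bytes default to 0
file[0] = 1;          // header written at db creation
file[2 * BLOCK] = 1;  // most recent appended header
```

Truncating the file only loses the newest headers: the scan simply lands on an earlier one, which matches the "a file truncation just gives you an earlier version of the file" observation in the chat.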
Re: There is no spoon.
On Fri, Jul 17, 2009 at 12:35 AM, Paul Davis wrote:
> Hiya,
>
> I had me an idea the other day I got around to trying. We've been
> going over how to make JSON parsing über fast between Erlang and the
> View servers. Instead of making JSON parsing faster I decided to just
> drop it completely. I wrote enough code in couch_js.c tonight to get
> the basics of converting the ErlJSON -> Spidermonkey objects and back.
> Quite a few of the pertinent tests are passing. There was an issue
> with object iteration that prevented view collation from working
> correctly. And the show/list tests are broken because I didn't add XML
> serialization. Either way it was enough for me to collect some numbers
> with the same script I used on my blog a couple weeks ago.
>
> The huge ass caveat on the tail end for Patched at 10K 8KiB docs is
> that this is best case scenario. I was just beefing up the document
> size by adding a large string on them. In the conversion process this
> ends up being a fairly quick pass using EncodeString and DecodeString.
>
> Another thing to notice is that once compiled with +native the numbers
> for small docs don't change too drastically.
>
> And here are numbers:
>
> Straight up trunk: 10K tiny docs
>
> Inserting: 0.955830
> Map only: 4.296859
> With reduce: 4.014233
> With erlang reduce: 3.199325
>
> Inserting: 0.970745
> Map only: 3.961110
> With reduce: 4.550082
> With erlang reduce: 3.493316
>
> Inserting: 0.992892
> Map only: 4.747793
> With reduce: 4.552446
> With erlang reduce: 3.681820
>
> Straight up trunk: 10K 4KiB Documents
>
> Inserting: 5.895689
> Map only: 11.716073
> With reduce: 12.127348
> With erlang reduce: 11.069352
>
> Inserting: 6.221656
> Map only: 12.074525
> With reduce: 11.500115
> With erlang reduce: 10.680610
>
> Inserting: 5.974915
> Map only: 11.240969
> With reduce: 11.620035
> With erlang reduce: 10.458795
>
> Straight up trunk: 10K 8KiB Documents
>
> Inserting: 9.533340
> Map only: 16.273873
> With reduce: 16.647050
> With erlang reduce: 14.529038
>
> Inserting: 9.828476
> Map only: 15.772620
> With reduce: 15.707862
> With erlang reduce: 14.577865
>
> Inserting: 9.598872
> Map only: 15.251671
> With reduce: 15.930784
> With erlang reduce: 14.445052
>
> Trunk +native 10K Tiny docs
>
> Inserting: 0.953937
> Map only: 2.524961
> With reduce: 2.411511
> With erlang reduce: 1.541173
>
> Inserting: 0.963175
> Map only: 2.486752
> With reduce: 2.354808
> With erlang reduce: 1.534005
>
> Inserting: 0.949138
> Map only: 2.429267
> With reduce: 2.385016
> With erlang reduce: 1.525428
>
> Trunk +native 10K 4KiB docs
>
> Inserting: 3.952355
> Map only: 10.106112
> With reduce: 9.687787
> With erlang reduce: 8.781025
>
> Inserting: 3.968877
> Map only: 9.552732
> With reduce: 9.626942
> With erlang reduce: 8.537417
>
> Inserting: 4.359648
> Map only: 9.472417
> With reduce: 9.719609
> With erlang reduce: 8.771725
>
> Trunk +native 10K 8KiB docs
>
> Inserting: 7.046171
> Map only: 12.111946
> With reduce: 11.566371
> With erlang reduce: 10.571792
>
> Inserting: 7.183114
> Map only: 12.177807
> With reduce: 11.619149
> With erlang reduce: 10.461091
>
> Inserting: 6.867450
> Map only: 11.358312
> With reduce: 11.420452
> With erlang reduce: 10.452706
>
> Patched 10K Tiny docs
>
> Inserting: 0.954482
> Map only: 2.339038
> With reduce: 2.311544
> With erlang reduce: 1.513258
>
> Inserting: 0.942735
> Map only: 2.543295
> With reduce: 2.522470
> With erlang reduce: 1.514119
>
> Inserting: 0.961381
> Map only: 2.372250
> With reduce: 2.336503
> With erlang reduce: 1.558217
>
> Patched 10K 4KiB docs
>
> Inserting: 5.933259
> Map only: 5.484083
> With reduce: 5.693180
> With erlang reduce: 4.502828
>
> Inserting: 5.980323
> Map only: 5.251158
> With reduce: 5.290837
> With erlang reduce: 4.530348
>
> Inserting: 6.067070
> Map only: 5.501945
> With reduce: 5.314363
> With erlang reduce: 4.409588
>
> Patched 10K 8KiB docs
>
> Inserting: 7.096909
> Map only: 5.293864
> With reduce: 5.254415
> With erlang reduce: 4.437001
>
> Inserting: 6.847729
> Map only: 5.191201
> With reduce: 5.161696
> With erlang reduce: 4.256955
>
> Inserting: 7.168672
> Map only: 5.294789
> With reduce: 5.195616
> With erlang reduce: 4.323075
>
> Patched +native 10K Tiny docs
>
> Inserting: 0.945826
> Map only: 2.451693
> With reduce: 2.383522
> With erlang reduce: 1.578508
>
> Inserting: 0.972655
> Map only: 2.504448
> With reduce: 2.343594
> With erlang reduce: 1.512512
>
> Inserting: 0.952105
> Map only: 2.391866
> With reduce: 2.329651
> With erlang reduce: 1.505249
>
> Patched +native 10K 4KiB docs
>
> Inserting: 4.060498
> Map only: 5.937243
> With reduce: 5.
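The core idea of the patch, converting the Erlang-side EJSON term tree straight into native values instead of round-tripping through JSON text, can be shown with a toy model. Here Erlang terms are encoded as tagged JS objects (`{obj: [[key, value], ...]}` standing in for CouchDB's `{[{K, V}, ...]}` object tuples); the actual patch does this in C against the SpiderMonkey API, so everything below is illustrative only.

```javascript
// Walk an EJSON-style term tree and build native JS values directly,
// with no intermediate JSON string to generate or parse.
function ejsonToJs(term) {
  // Numbers, strings, booleans, and null map straight across.
  if (term === null || typeof term !== "object") return term;
  // An Erlang list becomes a JS array.
  if (Array.isArray(term)) return term.map(ejsonToJs);
  // {obj: [[Key, Value], ...]} models the {proplist} object encoding:
  // convert each pair into a property, preserving insertion order.
  const out = {};
  for (const [key, value] of term.obj) out[key] = ejsonToJs(value);
  return out;
}

// A view result modeled as an EJSON term, converted to a native object.
const js = ejsonToJs({
  obj: [["total", 2], ["rows", [1, { obj: [["ok", true]] }]]],
});
```

The speedup in the numbers above comes from exactly this: the string scan and reallocation work of a JSON encoder/decoder is replaced by one recursive walk per document.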
Re: [VOTE] Apache CouchDB 0.9.1 release, third round
On Fri, Jul 17, 2009 at 11:45, Randall Leeds wrote:
> On Thu, Jul 16, 2009 at 17:13, Noah Slater wrote:
>> Hello,
>>
>> I would like to call a vote for the Apache CouchDB 0.9.1 release, third
>> round.
>
> +1
>
> Tested on Ubuntu Karmic in Firefox 3.5.

Erlang R13B01 (erts-5.7.2) [source] [64-bit] [smp:2:2] [rq:2] [async-threads:0] [kernel-poll:false]
Re: [VOTE] Apache CouchDB 0.9.1 release, third round
On Thu, Jul 16, 2009 at 17:13, Noah Slater wrote:
> Hello,
>
> I would like to call a vote for the Apache CouchDB 0.9.1 release, third round.

+1

Tested on Ubuntu Karmic in Firefox 3.5.

(Interesting aside: the test suite hangs on view_collation in a recent Chromium nightly build. As this is an alpha browser my vote is still +1.)
[jira] Commented: (COUCHDB-416) Replicating shards into a single aggregation node may cause endless respawning
[ https://issues.apache.org/jira/browse/COUCHDB-416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12732487#action_12732487 ]

Enda Farrell commented on COUCHDB-416:
--------------------------------------

Hi Adam. Due to the way I have been playing with the environments I am afraid I don't have debug logs for this particular test. However ...

As I was attempting to recreate this, I did find a possible bug which has the same symptoms. Essentially, as I think you're hinting at above, if the source database doesn't exist, the replication module isn't handling the 404s and endlessly keeps trying. I have an example here of trying to pull replicate from a source that's known to NOT exist:

[Thu, 16 Jul 2009 16:23:05 GMT] [debug] [<0.141.0>] 'GET' /social/ {1,1}
Headers: [{'Host',"10.10.10.16:5984"}]

[Thu, 16 Jul 2009 16:23:05 GMT] [debug] [<0.141.0>] httpd 404 error response:
 {"error":"not_found","reason":"Missing"}

[Thu, 16 Jul 2009 16:23:05 GMT] [info] [<0.141.0>] 10.10.10.15 - - 'GET' /social/ 404

[Thu, 16 Jul 2009 16:23:05 GMT] [debug] [<0.2451.0>] 'GET' /social/_local%2F5ba3add9594c40bb0b0480ff454d89a2 {1,1}
Headers: [{'Host',"10.10.10.16:5984"}]

[Thu, 16 Jul 2009 16:23:05 GMT] [debug] [<0.2451.0>] httpd 404 error response:
 {"error":"not_found","reason":"Missing"}

[Thu, 16 Jul 2009 16:23:05 GMT] [info] [<0.2451.0>] 10.10.10.15 - - 'GET' /social/_local%2F5ba3add9594c40bb0b0480ff454d89a2 404

[Thu, 16 Jul 2009 16:23:05 GMT] [debug] [<0.2454.0>] 'GET' /social/_all_docs_by_seq?limit=100&startkey=0 {1,1}
Headers: [{'Host',"10.10.10.16:5984"}]

[Thu, 16 Jul 2009 16:23:05 GMT] [debug] [<0.2454.0>] httpd 404 error response:
 {"error":"not_found","reason":"Missing"}

[Thu, 16 Jul 2009 16:23:05 GMT] [info] [<0.2454.0>] 10.10.10.15 - - 'GET' /social/_all_docs_by_seq?limit=100&startkey=0 404

[Thu, 16 Jul 2009 16:23:05 GMT] [debug] [<0.2457.0>] 'GET' /social/_all_docs_by_seq?limit=100&startkey=0 {1,1}
Headers: [{'Host',"10.10.10.16:5984"}]

[Thu, 16 Jul 2009 16:23:05 GMT] [debug] [<0.2457.0>] httpd 404 error response:
 {"error":"not_found","reason":"Missing"}

[Thu, 16 Jul 2009 16:23:05 GMT] [info] [<0.2457.0>] 10.10.10.15 - - 'GET' /social/_all_docs_by_seq?limit=100&startkey=0 404

[Thu, 16 Jul 2009 16:23:05 GMT] [debug] [<0.2460.0>] 'GET' /social/_all_docs_by_seq?limit=100&startkey=0 {1,1}
Headers: [{'Host',"10.10.10.16:5984"}]

[Thu, 16 Jul 2009 16:23:05 GMT] [debug] [<0.2460.0>] httpd 404 error response:
 {"error":"not_found","reason":"Missing"}

[Thu, 16 Jul 2009 16:23:05 GMT] [info] [<0.2460.0>] 10.10.10.15 - - 'GET' /social/_all_docs_by_seq?limit=100&startkey=0 404

[Thu, 16 Jul 2009 16:23:05 GMT] [debug] [<0.2463.0>] 'GET' /social/_all_docs_by_seq?limit=100&startkey=0 {1,1}
Headers: [{'Host',"10.10.10.16:5984"}]

[Thu, 16 Jul 2009 16:23:05 GMT] [debug] [<0.2463.0>] httpd 404 error response:
 {"error":"not_found","reason":"Missing"}

Is this a known bug? If not I can spin one up. It *is* possible that the above is the bug, and that it has nothing to do with replicating the same database-name from many sources into a single (aggregator) database. I have not yet been able to rule it in or out. Nevertheless, I'll be changing our "replication controller" code to be mindful of this issue.

> Replicating shards into a single aggregation node may cause endless respawning
> ------------------------------------------------------------------------------
>
>                 Key: COUCHDB-416
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-416
>             Project: CouchDB
>          Issue Type: Bug
>          Components: Database Core
>    Affects Versions: 0.9
>         Environment: couchdb 0.9.0.r766883 CentOS x86_64
>            Reporter: Enda Farrell
>            Assignee: Adam Kocoloski
>            Priority: Critical
>         Attachments: Picture 2.png
>
> I have a set of CouchDB instances, each one acting as a shard for a large set
> of data. Occasionally, we replicate each instance's database into a different
> CouchDB instance. We always "pull" replicate (see image attached).
> When we do this, we often see errors like this on the target instance:
>
> [Thu, 16 Jul 2009 13:52:32 GMT] [error] [emulator] Error in process
> <0.29787.102> with exit value:
> {function_clause,[{lists,map,[#Fun,undefined]},{couch_rep,enum_docs_since,4}]}
>
> [Thu, 16 Jul 2009 13:
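A guard for the 404 loop described above could probe the source database before the replicator starts, and fail fast when it does not exist, since retrying a missing source can never succeed. A sketch only: the function name, error shape, and status callback are hypothetical, not couch_rep's actual code.

```javascript
// Probe the source before starting a pull replication. getStatus stands in
// for an HTTP GET against the source URL (e.g. GET /social/) and returns
// the response status code.
function checkReplicationSource(sourcePath, getStatus) {
  const status = getStatus(sourcePath);
  if (status === 404) {
    // A missing database is a permanent condition: report it to the caller
    // instead of letting the replicator respawn forever.
    return { ok: false, error: "source_not_found", source: sourcePath };
  }
  return { ok: true };
}

// Example against a mocked server matching the log excerpt above.
const statuses = { "/social/": 404, "/users/": 200 };
const probe = (path) => statuses[path];
const result = checkReplicationSource("/social/", probe);
```

The same distinction (permanent 404 vs. transient errors like timeouts, which are worth retrying) is what a "replication controller" outside CouchDB would also need to make.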
Re: [VOTE] Apache CouchDB 0.9.1 release, third round
On 16 Jul 2009, at 22:13, Noah Slater wrote:
> I would like to call a vote for the Apache CouchDB 0.9.1 release, third
> round.

+1. All tests pass on FF 3.5, Safari 4.0.2 on OS X 10.5.7, Erlang R13B (erts-5.7.1) [source] [smp:2:2] [rq:2] [async-threads:0] [hipe] [kernel-poll:false]

Cheers,
--
Jason Davies
www.jasondavies.com
Re: [VOTE] Apache CouchDB 0.9.1 release, third round
On 16 Jul 2009, at 23:13, Noah Slater wrote:
> Hello,
>
> I would like to call a vote for the Apache CouchDB 0.9.1 release, third
> round. Changes since the last round:
>
> * share/www/script/test/attachment_names.js is included
> * test/couch_config_test.erl is licenced
> * test/couch_config_writer_test.erl is licenced
> * test/runner.erl is licenced
> * test/runner.sh is licenced
>
> In future, all builds will automatically test for these types of error.
>
> We encourage the whole community to download and test these release
> artifacts so that any critical issues can be resolved before the release
> is made. Everyone is free to vote on this release, so get stuck in!
>
> We are voting on the following release artifacts:
>
> http://people.apache.org/~nslater/dist/0.9.1/
>
> These artifacts have been built from the 0.9.1 tag in Subversion:
>
> http://svn.apache.org/repos/asf/couchdb/tags/0.9.1/

+1

Tested on Mac OS X (10.5.7) and Ubuntu (Dunnothecodename, based on Debian 5.0.2) in Safari 4 & Firefox 3.1.

Cheers
Jan
--
Re: There is no spoon.
Crap. Also should mention this is a Mac Mini, Intel Core 2 Duo 2.26 GHz and 4GiB of RAM. I don't remember what speed this hard drive is off the top of my head.

Paul

And now time for bed.

On Fri, Jul 17, 2009 at 3:38 AM, Paul Davis wrote:
> I should've mentioned that the code is online at [1].
>
> Don't look too hard at the code. It's pretty shitty as I was just
> trying to tear through to be able to measure it.
>
> Paul
>
> [1] http://github.com/davisp/couchdb/commit/f6019cceb068dc4f3df3a48c253ad460f2619332
>
> On Fri, Jul 17, 2009 at 3:35 AM, Paul Davis wrote:
>> Hiya,
>>
>> I had me an idea the other day I got around to trying. We've been
>> going over how to make JSON parsing über fast between Erlang and the
>> View servers. Instead of making JSON parsing faster I decided to just
>> drop it completely. I wrote enough code in couch_js.c tonight to get
>> the basics of converting the ErlJSON -> Spidermonkey objects and back.
>> Quite a few of the pertinent tests are passing. There was an issue
>> with object iteration that prevented view collation from working
>> correctly. And the show/list tests are broken because I didn't add XML
>> serialization. Either way it was enough for me to collect some numbers
>> with the same script I used on my blog a couple weeks ago.
>>
>> The huge ass caveat on the tail end for Patched at 10K 8KiB docs is
>> that this is best case scenario. I was just beefing up the document
>> size by adding a large string on them. In the conversion process this
>> ends up being a fairly quick pass using EncodeString and DecodeString.
>>
>> Another thing to notice is that once compiled with +native the numbers
>> for small docs don't change too drastically.
>> >> And here are numbers: >> >> Straight up trunk: 10K tiny docs >> >> >> Inserting: 0.955830 >> Map only: 4.296859 >> With reduce: 4.014233 >> With erlang reduce: 3.199325 >> >> Inserting: 0.970745 >> Map only: 3.961110 >> With reduce: 4.550082 >> With erlang reduce: 3.493316 >> >> Inserting: 0.992892 >> Map only: 4.747793 >> With reduce: 4.552446 >> With erlang reduce: 3.681820 >> >> >> Straight up trunk: 1OK 4KiB Documents >> - >> >> Inserting: 5.895689 >> Map only: 11.716073 >> With reduce: 12.127348 >> With erlang reduce: 11.069352 >> >> Inserting: 6.221656 >> Map only: 12.074525 >> With reduce: 11.500115 >> With erlang reduce: 10.680610 >> >> Inserting: 5.974915 >> Map only: 11.240969 >> With reduce: 11.620035 >> With erlang reduce: 10.458795 >> >> >> Straight up trunk: 10K 8KiB Documents >> - >> >> Inserting: 9.533340 >> Map only: 16.273873 >> With reduce: 16.647050 >> With erlang reduce: 14.529038 >> >> Inserting: 9.828476 >> Map only: 15.772620 >> With reduce: 15.707862 >> With erlang reduce: 14.577865 >> >> Inserting: 9.598872 >> Map only: 15.251671 >> With reduce: 15.930784 >> With erlang reduce: 14.445052 >> >> Trunk +native 10K Tiny docs >> --- >> >> Inserting: 0.953937 >> Map only: 2.524961 >> With reduce: 2.411511 >> With erlang reduce: 1.541173 >> >> Inserting: 0.963175 >> Map only: 2.486752 >> With reduce: 2.354808 >> With erlang reduce: 1.534005 >> >> Inserting: 0.949138 >> Map only: 2.429267 >> With reduce: 2.385016 >> With erlang reduce: 1.525428 >> >> >> Trunk +native 10K 4KiB docs >> --- >> >> Inserting: 3.952355 >> Map only: 10.106112 >> With reduce: 9.687787 >> With erlang reduce: 8.781025 >> >> Inserting: 3.968877 >> Map only: 9.552732 >> With reduce: 9.626942 >> With erlang reduce: 8.537417 >> >> Inserting: 4.359648 >> Map only: 9.472417 >> With reduce: 9.719609 >> With erlang reduce: 8.771725 >> >> >> Trunk +native 10K 8KiB docs >> --- >> >> Inserting: 7.046171 >> Map only: 12.111946 >> With reduce: 11.566371 >> With erlang reduce: 
10.571792 >> >> Inserting: 7.183114 >> Map only: 12.177807 >> With reduce: 11.619149 >> With erlang reduce: 10.461091 >> >> Inserting: 6.867450 >> Map only: 11.358312 >> With reduce: 11.420452 >> With erlang reduce: 10.452706 >> >> >> Patched 10K Tiny docs >> - >> >> Inserting: 0.954482 >> Map only: 2.339038 >> With reduce: 2.311544 >> With erlang reduce: 1.513258 >> >> Inserting: 0.942735 >> Map only: 2.543295 >> With reduce: 2.522470 >> With erlang reduce: 1.514119 >> >> Inserting: 0.961381 >> Map only: 2.372250 >> With reduce: 2.336503 >> With erlang reduce: 1.558217 >> >> >> Patched 10K 4KiB docs >> - >> >> Inserting: 5.933259 >> Map only: 5.484083 >> With reduce: 5.693180 >> With erlang reduce: 4.502828 >> >> Inserting: 5.980323 >> Map only: 5.251158 >> With reduce: 5.290837 >> With erlang reduce: 4.530348 >> >> Inserting: 6.067070 >> Map only: 5.501945 >> With reduce: 5.314363 >> With erlang reduce: 4.409588 >> >> Patched 10K 8KiB docs >> - >> >> Inserting: 7.096909 >> Map only: 5.293864 >> With reduce: 5.254415 >> With erlang reduce:
Re: There is no spoon.
I should've mentioned that the code is online at [1].

Don't look too hard at the code. It's pretty shitty as I was just trying to tear through to be able to measure it.

Paul

[1] http://github.com/davisp/couchdb/commit/f6019cceb068dc4f3df3a48c253ad460f2619332

On Fri, Jul 17, 2009 at 3:35 AM, Paul Davis wrote:
> [original message and benchmark numbers snipped]
There is no spoon.
Hiya,

I had an idea the other day that I got around to trying. We've been going over how to make JSON parsing über fast between Erlang and the view servers. Instead of making JSON parsing faster, I decided to just drop it completely. I wrote enough code in couch_js.c tonight to get the basics of converting the ErlJSON -> Spidermonkey objects and back. Quite a few of the pertinent tests are passing. There was an issue with object iteration that prevents view collation from working correctly, and the show/list tests are broken because I didn't add XML serialization. Either way, it was enough for me to collect some numbers with the same script I used on my blog a couple of weeks ago.

The huge caveat on the tail end for the patch at 10K 8KiB docs is that this is a best-case scenario: I was just beefing up the document size by adding a large string to each doc, and in the conversion process that ends up being a fairly quick pass using EncodeString and DecodeString.

Another thing to notice is that once compiled with +native, the numbers for small docs don't change too drastically.
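The win Paul describes comes from replacing a serialize-then-parse round trip with a direct structural walk over the terms. A minimal sketch of that difference (in Python for illustration only — the actual patch is C code in couch_js.c using the SpiderMonkey API; function names here are invented):

```python
import json

def via_json(doc):
    # Baseline path: serialize on the Erlang side, parse in the
    # view server. Two full passes over the document text.
    return json.loads(json.dumps(doc))

def direct_convert(term):
    # Patched path: recursively convert the host representation
    # straight into view-server objects, skipping the text round trip.
    if isinstance(term, dict):
        return {k: direct_convert(v) for k, v in term.items()}
    if isinstance(term, list):
        return [direct_convert(v) for v in term]
    # Strings and numbers pass through in a single copy (cf. the
    # EncodeString/DecodeString pass Paul mentions, which is why
    # large-string documents are the best case for the patch).
    return term

doc = {"_id": "x", "body": "A" * 8192, "tags": ["a", "b"]}
assert via_json(doc) == direct_convert(doc)
```

Both paths produce the same structure; the direct walk just avoids materializing and re-scanning the intermediate JSON text, which dominates for large documents.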
And here are the numbers:

Straight up trunk: 10K tiny docs
--------------------------------

Inserting: 0.955830
Map only: 4.296859
With reduce: 4.014233
With erlang reduce: 3.199325

Inserting: 0.970745
Map only: 3.961110
With reduce: 4.550082
With erlang reduce: 3.493316

Inserting: 0.992892
Map only: 4.747793
With reduce: 4.552446
With erlang reduce: 3.681820

Straight up trunk: 10K 4KiB docs
--------------------------------

Inserting: 5.895689
Map only: 11.716073
With reduce: 12.127348
With erlang reduce: 11.069352

Inserting: 6.221656
Map only: 12.074525
With reduce: 11.500115
With erlang reduce: 10.680610

Inserting: 5.974915
Map only: 11.240969
With reduce: 11.620035
With erlang reduce: 10.458795

Straight up trunk: 10K 8KiB docs
--------------------------------

Inserting: 9.533340
Map only: 16.273873
With reduce: 16.647050
With erlang reduce: 14.529038

Inserting: 9.828476
Map only: 15.772620
With reduce: 15.707862
With erlang reduce: 14.577865

Inserting: 9.598872
Map only: 15.251671
With reduce: 15.930784
With erlang reduce: 14.445052

Trunk +native 10K tiny docs
---------------------------

Inserting: 0.953937
Map only: 2.524961
With reduce: 2.411511
With erlang reduce: 1.541173

Inserting: 0.963175
Map only: 2.486752
With reduce: 2.354808
With erlang reduce: 1.534005

Inserting: 0.949138
Map only: 2.429267
With reduce: 2.385016
With erlang reduce: 1.525428

Trunk +native 10K 4KiB docs
---------------------------

Inserting: 3.952355
Map only: 10.106112
With reduce: 9.687787
With erlang reduce: 8.781025

Inserting: 3.968877
Map only: 9.552732
With reduce: 9.626942
With erlang reduce: 8.537417

Inserting: 4.359648
Map only: 9.472417
With reduce: 9.719609
With erlang reduce: 8.771725

Trunk +native 10K 8KiB docs
---------------------------

Inserting: 7.046171
Map only: 12.111946
With reduce: 11.566371
With erlang reduce: 10.571792

Inserting: 7.183114
Map only: 12.177807
With reduce: 11.619149
With erlang reduce: 10.461091

Inserting: 6.867450
Map only: 11.358312
With reduce: 11.420452
With erlang reduce: 10.452706

Patched 10K tiny docs
---------------------

Inserting: 0.954482
Map only: 2.339038
With reduce: 2.311544
With erlang reduce: 1.513258
Inserting: 0.942735
Map only: 2.543295
With reduce: 2.522470
With erlang reduce: 1.514119

Inserting: 0.961381
Map only: 2.372250
With reduce: 2.336503
With erlang reduce: 1.558217

Patched 10K 4KiB docs
---------------------

Inserting: 5.933259
Map only: 5.484083
With reduce: 5.693180
With erlang reduce: 4.502828

Inserting: 5.980323
Map only: 5.251158
With reduce: 5.290837
With erlang reduce: 4.530348

Inserting: 6.067070
Map only: 5.501945
With reduce: 5.314363
With erlang reduce: 4.409588

Patched 10K 8KiB docs
---------------------

Inserting: 7.096909
Map only: 5.293864
With reduce: 5.254415
With erlang reduce: 4.437001

Inserting: 6.847729
Map only: 5.191201
With reduce: 5.161696
With erlang reduce: 4.256955

Inserting: 7.168672
Map only: 5.294789
With reduce: 5.195616
With erlang reduce: 4.323075

Patched +native 10K tiny docs
-----------------------------

Inserting: 0.945826
Map only: 2.451693
With reduce: 2.383522
With erlang reduce: 1.578508

Inserting: 0.972655
Map only: 2.504448
With reduce: 2.343594
With erlang reduce: 1.512512

Inserting: 0.952105
Map only: 2.391866
With reduce: 2.329651
With erlang reduce: 1.505249

Patched +native 10K 4KiB docs
-----------------------------

Inserting: 4.060498
Map only: 5.937243
With reduce: 5.60
With erlang reduce: 4.813867

Inserting: 3.979680
Map only: 5.720481
With reduce: 5.602648
With erlang reduce: 4.734298

Inserting: 3.892947
Map only: 6.140995
With reduce: 5.565891
With erlang reduce: 4.736162

Patched +native 10K 8KiB docs
-----------------------------

Inserting: 7.040456
Map only: 5.302867
With reduce: 5.269647
With erlang reduce: 4.433941

Inserting: 6.808467
Map only: 5.371900
With reduce: 5.20304
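To put the 10K 8KiB figures in perspective, the map-only times from the tables above (trunk vs. patched, three runs each) work out to roughly a threefold improvement in this best-case scenario:

```python
# Map-only times for 10K 8KiB docs, taken from the runs posted above.
trunk   = [16.273873, 15.772620, 15.251671]  # straight-up trunk
patched = [5.293864, 5.191201, 5.294789]     # with the JSON-skipping patch

speedup = (sum(trunk) / len(trunk)) / (sum(patched) / len(patched))
print(round(speedup, 1))  # prints 3.0
```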