[jira] [Commented] (COUCHDB-1464) Unable to login to CouchDB

2012-04-13 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253291#comment-13253291
 ] 

Filipe Manana commented on COUCHDB-1464:


This might be the same as COUCHDB-1357, fixed for 1.2.0.

Petter, any chance you can reproduce this on 1.2.0, or check the logs to see whether 
there was a users database crash before the issue appeared?

 Unable to login to CouchDB
 --

 Key: COUCHDB-1464
 URL: https://issues.apache.org/jira/browse/COUCHDB-1464
 Project: CouchDB
  Issue Type: Bug
Affects Versions: 1.0.3
 Environment: Apache CouchDB 1.0.1
 Ubuntu 10.04.2 LTS.
Reporter: Petter Olofsson
Assignee: Paul Joseph Davis

 Hi,
 Logging in to the server with ordinary user accounts stopped working. Logging in with 
 admins defined in local.ini still worked, but user accounts created as 
 documents always responded with "username/password incorrect".
 When the old users did not work, we started to create new users in Futon. 
 Here are the requests from the logs when a user is created, and the failed 
 login afterwards.
 [Thu, 12 Apr 2012 14:22:57 GMT] [info] [0.1392.0] ip - - 'PUT' 
 /_users/org.couchdb.user%3Apetter 201
 [Thu, 12 Apr 2012 14:22:57 GMT] [info] [0.1393.0] ip - - 'POST' /_session 
 401
 We restarted couch several times using
 $ service couchdb restart
 but it was still impossible to login with an ordinary user.
 The problem was resolved by changing the log level to debug in local.ini 
 and a restart.
 After this change the login and sign-up process in Futon worked fine, and the 
 previously created user accounts worked as well. 
 After changing the log level back to info the server continued to work fine.
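
 For anyone trying to reproduce this, a minimal curl sketch of the signup/login flow 
 described above (hostname, user name and password are placeholders; pre-1.2 servers 
 expect a pre-hashed password, password_sha = SHA1(password + salt), while newer 
 releases can hash a plain "password" field themselves):

 $ SALT=$(openssl rand -hex 16)
 $ SHA=$(echo -n "secret${SALT}" | openssl sha1 | awk '{print $NF}')
 # Create the user document in the authentication database.
 $ curl -X PUT http://localhost:5984/_users/org.couchdb.user:petter \
        -H 'Content-Type: application/json' \
        -d "{\"_id\": \"org.couchdb.user:petter\", \"name\": \"petter\", \"type\": \"user\", \"roles\": [], \"salt\": \"$SALT\", \"password_sha\": \"$SHA\"}"
 # Try to log in; a healthy server answers 200 with a session cookie,
 # while the behaviour reported here was a 401.
 $ curl -i -X POST http://localhost:5984/_session \
        -H 'Content-Type: application/x-www-form-urlencoded' \
        -d 'name=petter&password=secret'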

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1454) server admin error on latest HEAD after PBKDF2 introduction

2012-04-04 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13246606#comment-13246606
 ] 

Filipe Manana commented on COUCHDB-1454:


I get exactly the same errors Benoît.

 server admin error on latest HEAD after PBKDF2 introduction
 ---

 Key: COUCHDB-1454
 URL: https://issues.apache.org/jira/browse/COUCHDB-1454
 Project: CouchDB
  Issue Type: Bug
  Components: HTTP Interface, Test Suite
Affects Versions: 1.3
 Environment: erlang r15b, spidermonkey1.8.5
Reporter: Benoit Chesneau
 Attachments: test.log


 JS tests have been failing since the introduction of PBKDF2:
 http://friendpaste.com/6ZjpPkJW3p6t26gARmbq1E

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1453) Replicator fails with use_users_db = false

2012-04-02 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13244860#comment-13244860
 ] 

Filipe Manana commented on COUCHDB-1453:


Try adding an explicit user_ctx property to the replication document:

{
    "source": ...,
    ...,
    "user_ctx": { "roles": ["_admin"] }
}

Database creation and design document creation/update/deletion require admin 
privileges. This is documented somewhere in the wiki.
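
For illustration, a complete replication document along those lines might look like 
this (host, database names and admin credentials are hypothetical; the document is 
written to the _replicator database):

$ curl -X PUT http://admin:secret@localhost:5984/_replicator/users_backup_rep \
       -H 'Content-Type: application/json' \
       -d '{
             "source": "http://localhost:5984/users",
             "target": "users_backup",
             "create_target": true,
             "continuous": true,
             "user_ctx": { "roles": ["_admin"] }
           }'

The user_ctx roles give the replication the admin rights needed to create the target 
database and to write design documents.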

 Replicator fails with use_users_db = false
 --

 Key: COUCHDB-1453
 URL: https://issues.apache.org/jira/browse/COUCHDB-1453
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.2
 Environment: Centos 6 32bit, Erlang R14B, Spidermonkey 1.8.5
Reporter: Wendall Cada

 If I create a new replication document in _replicate like this:
 {
     "source": "http://localhost:5990/users",
     "target": "users_backup",
     "create_target": true,
     "continuous": true
 }
 Creation of DB fails with:
 unauthorized to access or create database users_backup
 If I manually create this database, and set create_target to false, 
 replication completes, but generates errors while processing the 
 update_sequence like this:
 Replicator: couldn't write document `_design/lck`, revision 
 `2-8edc91dec975f893efdc6f440286c79e`, to target database `users_backup`. 
 Error: `unauthorized`, reason: `You are not a db or server admin.`.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1453) Replicator fails with use_users_db = false

2012-04-02 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13244879#comment-13244879
 ] 

Filipe Manana commented on COUCHDB-1453:


Wendall, it's documented  in a gist mentioned at:

http://wiki.apache.org/couchdb/Replication#Replicator_database

Gist: https://gist.github.com/832610 (section 8)

 Replicator fails with use_users_db = false
 --

 Key: COUCHDB-1453
 URL: https://issues.apache.org/jira/browse/COUCHDB-1453
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.2
 Environment: Centos 6 32bit, Erlang R14B, Spidermonkey 1.8.5
Reporter: Wendall Cada

 If I create a new replication document in _replicate like this:
 {
     "source": "http://localhost:5990/users",
     "target": "users_backup",
     "create_target": true,
     "continuous": true
 }
 Creation of DB fails with:
 unauthorized to access or create database users_backup
 If I manually create this database, and set create_target to false, 
 replication completes, but generates errors while processing the 
 update_sequence like this:
 Replicator: couldn't write document `_design/lck`, revision 
 `2-8edc91dec975f893efdc6f440286c79e`, to target database `users_backup`. 
 Error: `unauthorized`, reason: `You are not a db or server admin.`.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1424) make check hangs when compiling with R15B

2012-03-07 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13224548#comment-13224548
 ] 

Filipe Manana commented on COUCHDB-1424:


As for the R15B related issues, I agree the sleeps are a poor choice.

Why not use gdb to see where the Erlang VM is hanging? Just build your own 
Erlang with the -g flag to add debug info and then attach with gdb -p <beam.smp pid>. 
That seems more reliable than adding io:format calls everywhere.
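
A rough sketch of that workflow (the configure flags and paths are illustrative; 
beam.smp is the usual name of the SMP-enabled Erlang VM process):

# Build Erlang/OTP with debugging symbols.
$ CFLAGS="-g -O2" ./configure && make && sudo make install

# Find the beam.smp process running CouchDB and attach gdb to it.
$ gdb -p "$(pgrep -f beam.smp | head -n1)"

# Inside gdb, "thread apply all bt" dumps a backtrace of every scheduler
# thread, which usually shows where the VM is stuck.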

 make check hangs when compiling with R15B
 -

 Key: COUCHDB-1424
 URL: https://issues.apache.org/jira/browse/COUCHDB-1424
 Project: CouchDB
  Issue Type: Bug
  Components: Test Suite
Affects Versions: 1.2, 1.3
Reporter: Jan Lehnardt
 Attachments: 
 0001-Fix-very-slow-test-test-etap-220-compaction-daemon.t.patch


 make check hangs when running under R15B. For me it is 160-vhosts.t where 
 execution stops, but if I recall correctly others have reported other tests. 
 The crux here is that running the tests individually succeeds.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1424) make check hangs when compiling with R15B

2012-03-07 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13224837#comment-13224837
 ] 

Filipe Manana commented on COUCHDB-1424:


Wendall, as on Debian/Ubuntu, you probably need to install a missing Erlang 
package. On Debian it's named 'erlang-os-mon'.
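
On a Debian or Ubuntu system that would be something like:

$ sudo apt-get install erlang-os-mon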

 make check hangs when compiling with R15B
 -

 Key: COUCHDB-1424
 URL: https://issues.apache.org/jira/browse/COUCHDB-1424
 Project: CouchDB
  Issue Type: Bug
  Components: Test Suite
Affects Versions: 1.2, 1.3
Reporter: Jan Lehnardt
 Attachments: 
 0001-Fix-very-slow-test-test-etap-220-compaction-daemon.t.patch, 
 220-compaction-daemon.t.out


 make check hangs when running under R15B. For me it is 160-vhosts.t where 
 execution stops, but if I recall correctly others have reported other tests. 
 The crux here is that running the tests individually succeeds.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1426) error while building with 2 spidermonkey installed

2012-03-01 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13220225#comment-13220225
 ] 

Filipe Manana commented on COUCHDB-1426:


Thanks Benoît.

I tried the latest patch; there seems to be a syntax error:

./configure: line 16477: syntax error near unexpected token `('
./configure: line 16477: `  echo $ECHO_N (cached) $ECHO_C 6'



 error while building with 2 spidermonkey installed
 --

 Key: COUCHDB-1426
 URL: https://issues.apache.org/jira/browse/COUCHDB-1426
 Project: CouchDB
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.1.1, 1.2
Reporter: Benoit Chesneau
Priority: Blocker
 Attachments: 
 0001-fix-build-with-custom-path-close-COUCHDB-1426.patch, 
 0001-fix-build-with-custom-path-close-COUCHDB-1426.patch, 
 0001-fix-build-with-custom-path-close-COUCHDB-1426.patch


 Context:
 To benchmark the differences between different versions of CouchDB I had to test 
 against SpiderMonkey 1.7 and 1.8.5. 1.8.5 is installed globally in 
 /usr/local, while the 1.7 version is installed on a temporary path. 
 Problem:
 Using the --with-js-include and --with-js-lib configure options isn't enough to 
 use the 1.7 version; it still wants to use SpiderMonkey 1.8.5. Removing 
 js-config from the path doesn't change anything. I had to uninstall 
 SpiderMonkey 1.8.5 to get these settings working.
 Error result:
 $ ./configure 
 --with-erlang=/Users/benoitc/local/otp-r14b04/lib/erlang/usr/include 
 --with-js-include=/Users/benoitc/local/js-1.7.0/include 
 --with-js-lib=/Users/benoitc/local/js-1.7.0/lib64
 checking for a BSD-compatible install... /usr/bin/install -c
 checking whether build environment is sane... yes
 checking for a thread-safe mkdir -p... build-aux/install-sh -c -d
 checking for gawk... no
 checking for mawk... no
 checking for nawk... no
 checking for awk... awk
 checking whether make sets $(MAKE)... yes
 checking for gcc... gcc
 checking for C compiler default output file name... a.out
 checking whether the C compiler works... yes
 checking whether we are cross compiling... no
 checking for suffix of executables... 
 checking for suffix of object files... o
 checking whether we are using the GNU C compiler... yes
 checking whether gcc accepts -g... yes
 checking for gcc option to accept ISO C89... none needed
 checking for style of include used by make... GNU
 checking dependency style of gcc... gcc3
 checking build system type... i386-apple-darwin11.3.0
 checking host system type... i386-apple-darwin11.3.0
 checking for a sed that does not truncate output... /usr/bin/sed
 checking for grep that handles long lines and -e... /usr/bin/grep
 checking for egrep... /usr/bin/grep -E
 checking for fgrep... /usr/bin/grep -F
 checking for ld used by gcc... 
 /usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld
 checking if the linker 
 (/usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld) is GNU ld... no
 checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm
 checking the name lister (/usr/bin/nm) interface... BSD nm
 checking whether ln -s works... yes
 checking the maximum length of command line arguments... 196608
 checking whether the shell understands some XSI constructs... yes
 checking whether the shell understands +=... yes
 checking for /usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld 
 option to reload object files... -r
 checking how to recognize dependent libraries... pass_all
 checking for ar... ar
 checking for strip... strip
 checking for ranlib... ranlib
 checking command to parse /usr/bin/nm output from gcc object... ok
 checking for dsymutil... dsymutil
 checking for nmedit... nmedit
 checking for lipo... lipo
 checking for otool... otool
 checking for otool64... no
 checking for -single_module linker flag... yes
 checking for -exported_symbols_list linker flag... yes
 checking how to run the C preprocessor... gcc -E
 checking for ANSI C header files... yes
 checking for sys/types.h... yes
 checking for sys/stat.h... yes
 checking for stdlib.h... yes
 checking for string.h... yes
 checking for memory.h... yes
 checking for strings.h... yes
 checking for inttypes.h... yes
 checking for stdint.h... yes
 checking for unistd.h... yes
 checking for dlfcn.h... yes
 checking for objdir... .libs
 checking if gcc supports -fno-rtti -fno-exceptions... no
 checking for gcc option to produce PIC... -fno-common -DPIC
 checking if gcc PIC flag -fno-common -DPIC works... yes
 checking if gcc static flag -static works... no
 checking if gcc supports -c -o file.o... yes
 checking if gcc supports -c -o file.o... (cached) yes
 checking whether the gcc linker 
 (/usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld) supports shared 
 libraries... yes
 checking dynamic linker 

[jira] [Commented] (COUCHDB-1426) error while building with 2 spidermonkey installed

2012-03-01 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13220456#comment-13220456
 ] 

Filipe Manana commented on COUCHDB-1426:


Thanks Benoît.

However even with the latest patch I get the same issue as with the first patch:

gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../../src/snappy/google-snappy   
-I/opt/local/include -I/usr/local/include -I/usr/include  -g -Wall -Werror 
-D_BSD_SOURCE  -DXP_UNIX -I/Users/fdmanana/sm185/install/include -O2 -g -O2 -MT 
couchjs-http.o -MD -MP -MF .deps/couchjs-http.Tpo -c -o couchjs-http.o `test -f 
'couch_js/http.c' || echo './'`couch_js/http.c
couch_js/http.c:19:19: error: jsapi.h: No such file or directory
In file included from couch_js/http.c:21:


However, Randall's initial spidermonkey 1.8.5 commit works for me on Mac OS X:

http://git-wip-us.apache.org/repos/asf?p=couchdb.git;a=commit;h=7b0f330627c9f3ef1ccb9e3ffe1e909e3a27f1bf


A few weeks ago I recall master worked for me on Linux with custom 
--with-js-include and --with-js-lib configure flags.

 error while building with 2 spidermonkey installed
 --

 Key: COUCHDB-1426
 URL: https://issues.apache.org/jira/browse/COUCHDB-1426
 Project: CouchDB
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.1.1, 1.2
Reporter: Benoit Chesneau
Priority: Blocker
 Attachments: 
 0001-fix-build-with-custom-path-close-COUCHDB-1426.patch, 
 0001-fix-build-with-custom-path-close-COUCHDB-1426.patch, 
 0001-fix-build-with-custom-path-close-COUCHDB-1426.patch, 
 0001-fix-build-with-custom-path-close-COUCHDB-1426.patch


 Context:
 To benchmark the differences between different versions of CouchDB I had to test 
 against SpiderMonkey 1.7 and 1.8.5. 1.8.5 is installed globally in 
 /usr/local, while the 1.7 version is installed on a temporary path. 
 Problem:
 Using the --with-js-include and --with-js-lib configure options isn't enough to 
 use the 1.7 version; it still wants to use SpiderMonkey 1.8.5. Removing 
 js-config from the path doesn't change anything. I had to uninstall 
 SpiderMonkey 1.8.5 to get these settings working.
 Error result:
 $ ./configure 
 --with-erlang=/Users/benoitc/local/otp-r14b04/lib/erlang/usr/include 
 --with-js-include=/Users/benoitc/local/js-1.7.0/include 
 --with-js-lib=/Users/benoitc/local/js-1.7.0/lib64
 checking for a BSD-compatible install... /usr/bin/install -c
 checking whether build environment is sane... yes
 checking for a thread-safe mkdir -p... build-aux/install-sh -c -d
 checking for gawk... no
 checking for mawk... no
 checking for nawk... no
 checking for awk... awk
 checking whether make sets $(MAKE)... yes
 checking for gcc... gcc
 checking for C compiler default output file name... a.out
 checking whether the C compiler works... yes
 checking whether we are cross compiling... no
 checking for suffix of executables... 
 checking for suffix of object files... o
 checking whether we are using the GNU C compiler... yes
 checking whether gcc accepts -g... yes
 checking for gcc option to accept ISO C89... none needed
 checking for style of include used by make... GNU
 checking dependency style of gcc... gcc3
 checking build system type... i386-apple-darwin11.3.0
 checking host system type... i386-apple-darwin11.3.0
 checking for a sed that does not truncate output... /usr/bin/sed
 checking for grep that handles long lines and -e... /usr/bin/grep
 checking for egrep... /usr/bin/grep -E
 checking for fgrep... /usr/bin/grep -F
 checking for ld used by gcc... 
 /usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld
 checking if the linker 
 (/usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld) is GNU ld... no
 checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm
 checking the name lister (/usr/bin/nm) interface... BSD nm
 checking whether ln -s works... yes
 checking the maximum length of command line arguments... 196608
 checking whether the shell understands some XSI constructs... yes
 checking whether the shell understands +=... yes
 checking for /usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld 
 option to reload object files... -r
 checking how to recognize dependent libraries... pass_all
 checking for ar... ar
 checking for strip... strip
 checking for ranlib... ranlib
 checking command to parse /usr/bin/nm output from gcc object... ok
 checking for dsymutil... dsymutil
 checking for nmedit... nmedit
 checking for lipo... lipo
 checking for otool... otool
 checking for otool64... no
 checking for -single_module linker flag... yes
 checking for -exported_symbols_list linker flag... yes
 checking how to run the C preprocessor... gcc -E
 checking for ANSI C header files... yes
 checking for sys/types.h... yes
 checking for sys/stat.h... yes
 checking for stdlib.h... yes
 

[jira] [Commented] (COUCHDB-1426) error while building with 2 spidermonkey installed

2012-02-29 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13219793#comment-13219793
 ] 

Filipe Manana commented on COUCHDB-1426:


Current master doesn't work for me, with or without this patch.
I have SpiderMonkey 1.8.0 installed via Homebrew and built my own SpiderMonkey 
1.8.5 from source. If I try to use the latter, compilation fails:

Running configure like:

$ ./configure  --with-js-include=/Users/fdmanana/sm185/install/include  
--with-js-lib=/Users/fdmanana/sm185/install/lib

My spidermonkey 1.8.5's tree:

http://friendpaste.com/2UVvchXGJKGDv2tIzKKgMR

fdmanana 20:35:42 ~/sm185  pwd
/Users/fdmanana/sm185
fdmanana 20:35:43 ~/sm185  ./js-1.8.5/js/src/js --version
JavaScript-C 1.8.5 2011-03-31

And 'make dev' errors with the patch:

http://friendpaste.com/6pd556mxhZPRgrVegIV4pH

Without the patch:

http://friendpaste.com/5MwatQTQLgiN4Jiz0nWLMd


My config.log:

http://friendpaste.com/2M1ijtykz1eDwyZLXNAgsI



 error while building with 2 spidermonkey installed
 --

 Key: COUCHDB-1426
 URL: https://issues.apache.org/jira/browse/COUCHDB-1426
 Project: CouchDB
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.1.1, 1.2
Reporter: Benoit Chesneau
Priority: Blocker
 Attachments: 0001-fix-build-with-custom-path-close-COUCHDB-1426.patch


 Context:
 To benchmark the differences between different versions of CouchDB I had to test 
 against SpiderMonkey 1.7 and 1.8.5. 1.8.5 is installed globally in 
 /usr/local, while the 1.7 version is installed on a temporary path. 
 Problem:
 Using the --with-js-include and --with-js-lib configure options isn't enough to 
 use the 1.7 version; it still wants to use SpiderMonkey 1.8.5. Removing 
 js-config from the path doesn't change anything. I had to uninstall 
 SpiderMonkey 1.8.5 to get these settings working.
 Error result:
 $ ./configure 
 --with-erlang=/Users/benoitc/local/otp-r14b04/lib/erlang/usr/include 
 --with-js-include=/Users/benoitc/local/js-1.7.0/include 
 --with-js-lib=/Users/benoitc/local/js-1.7.0/lib64
 checking for a BSD-compatible install... /usr/bin/install -c
 checking whether build environment is sane... yes
 checking for a thread-safe mkdir -p... build-aux/install-sh -c -d
 checking for gawk... no
 checking for mawk... no
 checking for nawk... no
 checking for awk... awk
 checking whether make sets $(MAKE)... yes
 checking for gcc... gcc
 checking for C compiler default output file name... a.out
 checking whether the C compiler works... yes
 checking whether we are cross compiling... no
 checking for suffix of executables... 
 checking for suffix of object files... o
 checking whether we are using the GNU C compiler... yes
 checking whether gcc accepts -g... yes
 checking for gcc option to accept ISO C89... none needed
 checking for style of include used by make... GNU
 checking dependency style of gcc... gcc3
 checking build system type... i386-apple-darwin11.3.0
 checking host system type... i386-apple-darwin11.3.0
 checking for a sed that does not truncate output... /usr/bin/sed
 checking for grep that handles long lines and -e... /usr/bin/grep
 checking for egrep... /usr/bin/grep -E
 checking for fgrep... /usr/bin/grep -F
 checking for ld used by gcc... 
 /usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld
 checking if the linker 
 (/usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld) is GNU ld... no
 checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm
 checking the name lister (/usr/bin/nm) interface... BSD nm
 checking whether ln -s works... yes
 checking the maximum length of command line arguments... 196608
 checking whether the shell understands some XSI constructs... yes
 checking whether the shell understands +=... yes
 checking for /usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld 
 option to reload object files... -r
 checking how to recognize dependent libraries... pass_all
 checking for ar... ar
 checking for strip... strip
 checking for ranlib... ranlib
 checking command to parse /usr/bin/nm output from gcc object... ok
 checking for dsymutil... dsymutil
 checking for nmedit... nmedit
 checking for lipo... lipo
 checking for otool... otool
 checking for otool64... no
 checking for -single_module linker flag... yes
 checking for -exported_symbols_list linker flag... yes
 checking how to run the C preprocessor... gcc -E
 checking for ANSI C header files... yes
 checking for sys/types.h... yes
 checking for sys/stat.h... yes
 checking for stdlib.h... yes
 checking for string.h... yes
 checking for memory.h... yes
 checking for strings.h... yes
 checking for inttypes.h... yes
 checking for stdint.h... yes
 checking for unistd.h... yes
 checking for dlfcn.h... yes
 checking for objdir... .libs
 checking if gcc 

[jira] [Commented] (COUCHDB-1186) Speedups in the view indexer

2012-02-28 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13218002#comment-13218002
 ] 

Filipe Manana commented on COUCHDB-1186:


My replies in the following development mailing list thread:

http://mail-archives.apache.org/mod_mbox/couchdb-dev/201202.mbox/%3CCA%2BY%2B4475J_wPbiC%3Dg2R6CcqUfQ-_V6TTTxV2iS4xTbz9a10%2BXw%40mail.gmail.com%3E



 Speedups in the view indexer
 

 Key: COUCHDB-1186
 URL: https://issues.apache.org/jira/browse/COUCHDB-1186
 Project: CouchDB
  Issue Type: Improvement
Reporter: Filipe Manana
Assignee: Filipe Manana
 Fix For: 1.2


 The patches at [1] and [2] do 2 distinct optimizations to the view indexer
 1) Use a NIF to implement couch_view:less_json/2;
 2) Multiple small optimizations to couch_view_updater - the main one is to 
 decode the view server's JSON only in the updater's write process, avoiding 2 
 EJSON term copying phases (couch_os_process - updater processes and writes 
 work queue)
 [1] - 
 https://github.com/fdmanana/couchdb/commit/3935a4a991abc32132c078e908dbc11925605602
 [2] - 
 https://github.com/fdmanana/couchdb/commit/cce325378723c863f05cca2192ac7bd58eedde1c
 Using these 2 patches, I've seen significant improvements to view generation 
 time. Here I present as example the databases at:
 A) http://fdmanana.couchone.com/indexer_test_2
 B) http://fdmanana.couchone.com/indexer_test_3
 ## Trunk
 ### database A
 $ time curl "http://localhost:5985/indexer_test_2/_design/test/_view/view1?limit=1"
 {"total_rows":1102400,"offset":0,"rows":[
 {"id":"00d49881-7bcf-4c3d-a65d-e44435eeb513","key":["dwarf","assassin",2,1.1],"value":[{"x":174347.18,"y":127272.8},{"x":35179.93,"y":41550.55},{"x":157014.38,"y":172052.63},{"x":116185.83,"y":69871.73},{"x":153746.28,"y":190006.59}]}
 ]}
 real  19m46.007s
 user  0m0.024s
 sys   0m0.020s
 ### Database B
 $ time curl "http://localhost:5985/indexer_test_3/_design/test/_view/view1?limit=1"
 {"total_rows":1102400,"offset":0,"rows":[
 {"id":"00d49881-7bcf-4c3d-a65d-e44435eeb513","key":["dwarf","assassin",2,1.1],"value":[{"x":174347.18,"y":127272.8},{"x":35179.93,"y":41550.55},{"x":157014.38,"y":172052.63},{"x":116185.83,"y":69871.73},{"x":153746.28,"y":190006.59}]}
 ]}
 real  21m41.958s
 user  0m0.004s
 sys   0m0.028s
 ## Trunk + the 2 patches
 ### Database A
   $ time curl "http://localhost:5984/indexer_test_2/_design/test/_view/view1?limit=1"
   {"total_rows":1102400,"offset":0,"rows":[
   {"id":"00d49881-7bcf-4c3d-a65d-e44435eeb513","key":["dwarf","assassin",2,1.1],"value":[{"x":174347.18,"y":127272.8},{"x":35179.93,"y":41550.55},{"x":157014.38,"y":172052.63},{"x":116185.83,"y":69871.73},{"x":153746.28,"y":190006.59}]}
   ]}
   real  16m1.820s
   user  0m0.000s
   sys   0m0.028s
   (versus 19m46 with trunk)
 ### Database B
   $ time curl "http://localhost:5984/indexer_test_3/_design/test/_view/view1?limit=1"
   {"total_rows":1102400,"offset":0,"rows":[
   {"id":"00d49881-7bcf-4c3d-a65d-e44435eeb513","key":["dwarf","assassin",2,1.1],"value":[{"x":174347.18,"y":127272.8},{"x":35179.93,"y":41550.55},{"x":157014.38,"y":172052.63},{"x":116185.83,"y":69871.73},{"x":153746.28,"y":190006.59}]}
   ]}
   real  17m22.778s
   user  0m0.020s
   sys   0m0.016s
   (versus 21m41s with trunk)
 Repeating these tests, always clearing my OS/fs cache before running them 
 (via `echo 3 > /proc/sys/vm/drop_caches`), I always get about the same 
 relative differences.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-665) Replication not possible via IPv6

2012-02-18 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13210963#comment-13210963
 ] 

Filipe Manana commented on COUCHDB-665:
---

Bastiaan, can you paste how you're triggering the replication (request and 
replication object/document) and the errors/stack traces you get (if any)?

 Replication not possible via IPv6
 --

 Key: COUCHDB-665
 URL: https://issues.apache.org/jira/browse/COUCHDB-665
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.10.1
 Environment: Linux x200 2.6.32-2 #2 SMP Wed Feb 17 01:00:03 CET 2010 
 x86_64 GNU/Linux
Reporter: Michael Stapelberg
Assignee: Filipe Manana
Priority: Blocker
  Labels: ipv6
 Fix For: 1.1, 1.0.3, 1.2

 Attachments: COUCHDB-665-replication-ipv6.patch, couchdb-ipv6.patch, 
 patch

   Original Estimate: 0.25h
  Remaining Estimate: 0.25h

 I have a host which is only reachable via IPv6. While I can connect to a 
 CouchDB running on this host just fine, I cannot replicate my database to it.
 This is due to the inet6 option missing from the gen_tcp:connect() call. I 
 will attach a patch which fixes the issue.
 To test it, you can use a host which only has an AAAA record in the DNS. 
 CouchDB will immediately return 404 if you try to replicate to it unless you 
 add the inet6 option.
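
 For illustration, a replication request against an IPv6-only host might look like 
 this (the IPv6 literal and database names are hypothetical):

 $ curl -X POST http://localhost:5984/_replicate \
        -H 'Content-Type: application/json' \
        -d '{"source": "mydb", "target": "http://[2001:db8::10]:5984/mydb"}'

 Without inet6 support in the HTTP client the remote endpoint cannot be reached 
 even though the URL is well formed.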

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1398) improve view filtering in changes

2012-02-15 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13208465#comment-13208465
 ] 

Filipe Manana commented on COUCHDB-1398:


Benoît,

for 1) - Sure, no problem.

for 2) - That's right. It will work when the ddoc signature changes - my 
comment there was about whether continuing to send changes after that happens is 
ok for applications (since the view definition might have changed radically) - 
it's probably ok for many applications. I don't have a strong opinion here. If no 
one objects it can remain like that imho.

Now for the case where the ddoc is deleted (and never recreated), the 
client will hang forever. The process spawned with spawn/1 will terminate with 
a reason different from 'normal' (with a throw of {not_found, deleted}), but the 
parent will never know that, because it's not linked to it or monitoring it, so 
it will hang forever waiting for db_updated or view_updated events (the latter 
won't come anymore). Just as normal continuous changes feeds stop when the 
database is deleted, _view changes should stop when the ddoc is deleted 
(or its views attribute is removed in an update) - handling this case seems to 
make sense.

for 3) - I'm ok with that.

for 4) - Non manual tests are good to have :)

I've also noticed since my last comment that send_view_changes/2 in 
couch_changes.erl 
(https://github.com/benoitc/couchdb/compare/master...couch_view_changes#L4R372) 
is buffering the full_doc_infos of every document id found in the view when 
folding it. It seems this can buffer millions (or more) of full_doc_info 
records, no? If it's really unbounded (as it seems just by looking at the 
diff), then it's dangerous (even a few thousand full_doc_infos can be too much 
if the revision trees have a big depth and/or many branches).

 improve view filtering in changes
 -

 Key: COUCHDB-1398
 URL: https://issues.apache.org/jira/browse/COUCHDB-1398
 Project: CouchDB
  Issue Type: Improvement
  Components: View Server Support
Affects Versions: 2.0, 1.3
Reporter: Benoit Chesneau
  Labels: changes, view
 Attachments: 0001-white-spaces.patch, 
 0002-initial-step-move-the-code-from-couch_httpd_db-to-co.patch, 
 0003-fix-indent.patch, 
 0004-This-wrapper-is-useless-somehow-split-the-code-in-a-.patch, 
 0005-add-view_updated-event.patch, 0006-immprove-view-filter.patch, 
 0007-useless-info.patch, 0008-whitespaces.patch, 
 0009-handle-native-filters-in-views.patch


 Improve the native view filter `_view` support by really using the view index. 
 This patch adds the following features:
 - small refactoring: create the couch_httpd_changes module, to put all the 
 changes HTTP support in its own module instead of having it in couch_httpd_db. 
 - add the `view_updated` event when a view index is updated: {view_updated, 
 {DbName, IndexName}}
 - start the feed using results in the view index instead of the whole db index
 - only react on view index changes.
 For now the next changes are still fetched using the couch_db:changes_since 
 function and passing the map function over the results. It could be improved if 
 we had a by_seq btree in the view index too. Another way may be to skip a number 
 of the documents already processed. Not sure it would be faster. Thoughts?
 The branch couch_view_changes in my repo contains preliminary support:
 https://github.com/benoitc/couchdb/tree/couch_view_changes
 Diff:
 https://github.com/benoitc/couchdb/compare/master...couch_view_changes
 To use it, use the native filter named _view, which takes the parameter 
 view=DesignName/Viewname
 eg:
 http://server/db/_changes?feed=continuous&heartbeat=true&filter=_view&view=DesignName/SomeView
 It also has an interesting side effect: on each db update the view index 
 refresh is triggered, so view updates are triggered. Maybe we could introduce 
 an optional parameter to not trigger them though?
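
 A concrete invocation of the proposed filter might look like this (host, database, 
 design document and view names are placeholders):

 $ curl "http://localhost:5984/mydb/_changes?feed=continuous&heartbeat=true&filter=_view&view=myddoc/myview"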

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1398) improve view filtering in changes

2012-02-14 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13207768#comment-13207768
 ] 

Filipe Manana commented on COUCHDB-1398:


Hi Benoît,

A few comments after quickly looking at the full diff (sorry had no time to 
test).

1) There seem to be a few unintentional whitespace-only changes that don't 
eliminate trailing whitespace or fix the indentation to comply with the overall 
indentation style. E.g.:
https://github.com/benoitc/couchdb/compare/master...couch_view_changes#L4L357

2) In the make_update_fun:
https://github.com/benoitc/couchdb/compare/master...couch_view_changes#L4R114

I see a few issues there.

First, if the design document is deleted (the view group is shut down) and the
_changes feed request is of continuous type, the client will hang forever, as 
the code doesn't seem to handle this
and will be forever waiting for more 'view_updated' events which will never 
come (unless a new ddoc with the same _id
is created afterwards).

Second, when it receives a db_updated event, it will spawn a process to trigger 
the view update. If this process dies,
the parent doesn't know it and thinks all is fine. It can die here because, for 
example,
couch_httpd_db:couch_doc_open threw {not_found, deleted} (ddoc deleted, related 
to case 1 mentioned above).
It should stop when the ddoc is deleted or a ddoc update removes its views 
field (which also shuts down the view group
as soon as no more clients are using it). Possibly when a view group is 
shut down, you'll want to emit an event so that
you can react to it. Inside that process you're also calling couch_db:reopen - 
you should use couch_db:open[_int] here
because that process never opened the database before.

Third, and this might not be an issue (hard to tell): when, for example, the 
ddoc signature changes (like a map function was
updated), you keep going as if nothing happened. Clients might want to stop 
receiving changes if the view definitions of
a ddoc change - this might be very subjective or application dependent. I don't 
have a strong opinion here.

As of COUCHDB-1309, when a ddoc is updated such that the view group signature 
changes, its old view group will
shut down when no more clients are using it. Perhaps when it's shut down, 
emitting an event with the DDocId and Signature
will help deal with the cases listed above.

The first case, where a ddoc is deleted, should probably have a test to verify 
the client isn't kept blocked forever
after the ddoc is deleted (for continuous feed types).

3) You're reusing couch_db_update_notifier for emitting the view_updated 
events. Probably the couch_mrview application
should have its own separate event manager. I don't see this as a big issue or 
a blocker; it can be addressed later.

4) For the replication, some tests would be very welcome and good to have.

Good work and thanks.


 improve view filtering in changes
 -

 Key: COUCHDB-1398
 URL: https://issues.apache.org/jira/browse/COUCHDB-1398
 Project: CouchDB
  Issue Type: Improvement
  Components: View Server Support
Affects Versions: 2.0, 1.3
Reporter: Benoit Chesneau
  Labels: changes, view
 Attachments: 0001-white-spaces.patch, 
 0002-initial-step-move-the-code-from-couch_httpd_db-to-co.patch, 
 0003-fix-indent.patch, 
 0004-This-wrapper-is-useless-somehow-split-the-code-in-a-.patch, 
 0005-add-view_updated-event.patch, 0006-immprove-view-filter.patch, 
 0007-useless-info.patch, 0008-whitespaces.patch, 
 0009-handle-native-filters-in-views.patch


 Improve the native view filter `_view` support by really using the view index. 
 This patch adds the following features:
 - small refactoring: create the couch_httpd_changes module, to put all the 
 changes HTTP support in its own module instead of having it in couch_httpd_db. 
 - add the `view_updated` event when a view index is updated: {view_updated, 
 {DbName, IndexName}}
 - start the feed using results in the view index instead of the whole db index
 - only react on view index changes.
 For now the next changes are still fetched using the couch_db:changes_since 
 function and passing the map function over the results. It could be improved if 
 we had a by_seq btree in the view index too. Another way may be to skip a number 
 of the documents already processed. Not sure it would be faster. Thoughts?
 The branch couch_view_changes in my repo contains preliminary support:
 https://github.com/benoitc/couchdb/tree/couch_view_changes
 Diff:
 https://github.com/benoitc/couchdb/compare/master...couch_view_changes
 To use it, use the native filter named _view, which takes the parameter 
 view=DesignName/Viewname
 eg:
 http://server/db/_changes?feed=continuous&heartbeat=true&filter=_view&view=DesignName/SomeView
 It also has an interesting side effect: on each db 

[jira] [Commented] (COUCHDB-1390) Fix auth_cache etap test

2012-01-25 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13192957#comment-13192957
 ] 

Filipe Manana commented on COUCHDB-1390:


Paul, it's unclear to me what this really accomplishes.
Which test assertions were failing for you?

We only need the conflicts and not the full revision tree of a doc. All the 
conflicts (leaf revs) are already in #doc_info records, so a doc opened via its 
doc_info returns its list of conflicting revisions.

 Fix auth_cache etap test
 

 Key: COUCHDB-1390
 URL: https://issues.apache.org/jira/browse/COUCHDB-1390
 Project: CouchDB
  Issue Type: Bug
Reporter: Paul Joseph Davis
 Attachments: COUCHDB-1390.patch


 The auth_cache etap tests were failing for me. Debugged this to make sure it 
 wasn't related to something else. Commit message is:
 Fix for the auth_cache etap
 
 As it turns out, opening a doc by id is different than opening it using
 a #doc_info record due to the inclusion of the full revision path. This
 ended up breaking the auth_cache tests. This way includes the entire
 revision path for all docs and not just first doc loads.
 Patch attaching in a few moments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1392) Increase task status update frequency for continuous replication

2012-01-25 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13192965#comment-13192965
 ] 

Filipe Manana commented on COUCHDB-1392:


Doable.



 Increase task status update frequency for continuous replication
 

 Key: COUCHDB-1392
 URL: https://issues.apache.org/jira/browse/COUCHDB-1392
 Project: CouchDB
  Issue Type: Improvement
Reporter: Paul Joseph Davis
Assignee: Filipe Manana

 I'm not super familiar with the internals of continuous replication but the 
 tests would benefit from a slight tweak increasing the frequency of task 
 status updates. I'm not entirely certain about the internals, but assuming 
 it's something like "wait for update notifications, scan the by_seq btree for 
 new updates, sleep", it would be useful to update the task status unconditionally 
 at the end of the by_seq btree scan.
 This would benefit the continuous replication tests because we'd be able to 
 fix waitForSeq so that as soon as the target db was up to date the tests 
 could continue, instead of the broken behavior now where they wait for the 
 entire timeout.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1390) Fix auth_cache etap test

2012-01-25 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13193143#comment-13193143
 ] 

Filipe Manana commented on COUCHDB-1390:


Hum, then there's possibly something weird going on. I never got a failure on 
that test on several machines; the test's Creds5 and Creds9 are equal for me:

# Changing the auth database again
Creds5: [{_id,org.couchdb.user:joe},
 {_rev,1-bd6145e7bcacccb2c2b9811c8b40abc0},
 {name,joe},
 {type,user},
 {salt,SALT},
 {password_sha,d25de84471bb8039d73d6eab199f2c85512fb91c},
 {roles,[]},
 {_revisions,
  {[{start,1},
{ids,[bd6145e7bcacccb2c2b9811c8b40abc0]}]}}]
Creds9: [{_id,org.couchdb.user:joe},
 {_rev,1-bd6145e7bcacccb2c2b9811c8b40abc0},
 {name,joe},
 {type,user},
 {salt,SALT},
 {password_sha,d25de84471bb8039d73d6eab199f2c85512fb91c},
 {roles,[]},
 {_revisions,
  {[{start,1},
{ids,[bd6145e7bcacccb2c2b9811c8b40abc0]}]}}]
ok 18  - Got same credentials as before the firt auth database change

I'm concerned there may be an issue elsewhere in the test. I assumed it was very 
deterministic. Can you paste the values of Creds5 and Creds9?

And the revs option is a no-op for couch_db:open_doc/3 (only meaningful for 
couch_doc:to_json_obj/2).
Either way adding _revisions to the credentials is not necessary 
(couch_query_servers:json_doc adds it), so I would suggest the following:

http://friendpaste.com/7cZfkn6yIGGGh5CIJAawK3

thanks Paul

 Fix auth_cache etap test
 

 Key: COUCHDB-1390
 URL: https://issues.apache.org/jira/browse/COUCHDB-1390
 Project: CouchDB
  Issue Type: Bug
Reporter: Paul Joseph Davis
 Attachments: COUCHDB-1390.patch


 The auth_cache etap tests were failing for me. Debugged this to make sure it 
 wasn't related to something else. Commit message is:
 Fix for the auth_cache etap
 
 As it turns out, opening a doc by id is different than opening it using
 a #doc_info record due to the inclusion of the full revision path. This
 ended up breaking the auth_cache tests. This way includes the entire
 revision path for all docs and not just first doc loads.
 Patching attaching in a few moments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1342) Asynchronous file writes

2012-01-22 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13190665#comment-13190665
 ] 

Filipe Manana commented on COUCHDB-1342:


I really appreciate Randall pushing for collaboration rather than expecting a 
single person to do it all or letting this fall into oblivion.

I will do some updates to the branch soon, as well as repost some performance 
benchmarks with instructions on how to reproduce them as usual, in comparison to 
latest master (the results posted months ago don't account for many 
improvements that came after, such as COUCHDB-1334).

 Asynchronous file writes
 

 Key: COUCHDB-1342
 URL: https://issues.apache.org/jira/browse/COUCHDB-1342
 Project: CouchDB
  Issue Type: Improvement
  Components: Database Core
Reporter: Jan Lehnardt
 Fix For: 1.3

 Attachments: COUCHDB-1342.patch


 This change updates the file module so that it can do
 asynchronous writes. Basically it replies immediately
 to the process asking to write something to the file, with
 the position where the chunks will be written to the
 file, while a dedicated child process keeps collecting
 chunks and writing them to the file (batching them
 when possible). After issuing a series of write requests
 to the file module, the caller can call its 'flush'
 function, which blocks the caller until all the
 chunks it requested to write are effectively written
 to the file.
 This maximizes use of the IO subsystem: for example, while
 the updater is traversing and modifying the btrees and
 doing CPU-bound tasks, the writes are happening in
 parallel.
 Originally described at http://s.apache.org/TVu
 Github Commit: 
 https://github.com/fdmanana/couchdb/commit/e82a673f119b82dddf674ac2e6233cd78c123554

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1342) Asynchronous file writes

2012-01-22 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13190821#comment-13190821
 ] 

Filipe Manana commented on COUCHDB-1342:


I made a few more tests comparing latest master against the COUCHDB-1342 branch 
(after merging master into it).

Database writes
===============

* 1Kb documents 
(https://github.com/fdmanana/basho_bench_couch/blob/master/couch_docs/doc_1kb.json)

http://graphs.mikeal.couchone.com/#/graph/7c13e2bdebfcd17aab424e68f225fe9a

* 2Kb documents 
(https://github.com/fdmanana/basho_bench_couch/blob/master/couch_docs/doc_2kb.json)

http://graphs.mikeal.couchone.com/#/graph/7c13e2bdebfcd17aab424e68f2261504

* 11Kb documents
(https://github.com/fdmanana/basho_bench_couch/blob/master/couch_docs/doc_11kb.json)

http://graphs.mikeal.couchone.com/#/graph/7c13e2bdebfcd17aab424e68f2262e98


View indexer
============

Test database:  http://fdmanana.iriscouch.com/_utils/many_docs

* master

$ echo 3 > /proc/sys/vm/drop_caches
$ time curl http://localhost:5984/many_docs/_design/test/_view/test1
{"rows":[
{"key":null,"value":2000}
]}

real    29m42.041s
user    0m0.016s
sys     0m0.036s


* master + patch (branch COUCHDB-1342)

$ echo 3 > /proc/sys/vm/drop_caches
$ time curl http://localhost:5984/many_docs/_design/test/_view/test1
{"rows":[
{"key":null,"value":2000}
]}

real    26m13.112s
user    0m0.008s
sys     0m0.036s

Before COUCHDB-1334, and possibly the refactored indexer as well, the 
difference used to be more significant (like in the results presented in the 
dev mail http://s.apache.org/TVu).

 Asynchronous file writes
 

 Key: COUCHDB-1342
 URL: https://issues.apache.org/jira/browse/COUCHDB-1342
 Project: CouchDB
  Issue Type: Improvement
  Components: Database Core
Reporter: Jan Lehnardt
 Fix For: 1.3

 Attachments: COUCHDB-1342.patch


 This change updates the file module so that it can do
 asynchronous writes. Basically it replies immediately
 to the process asking to write something to the file, with
 the position where the chunks will be written to the
 file, while a dedicated child process keeps collecting
 chunks and writing them to the file (batching them
 when possible). After issuing a series of write requests
 to the file module, the caller can call its 'flush'
 function, which blocks the caller until all the
 chunks it requested to write are effectively written
 to the file.
 This maximizes use of the IO subsystem: for example, while
 the updater is traversing and modifying the btrees and
 doing CPU-bound tasks, the writes are happening in
 parallel.
 Originally described at http://s.apache.org/TVu
 Github Commit: 
 https://github.com/fdmanana/couchdb/commit/e82a673f119b82dddf674ac2e6233cd78c123554

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1379) Extend attachment etag checking for compressible data types in test suite

2012-01-17 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13187692#comment-13187692
 ] 

Filipe Manana commented on COUCHDB-1379:


Dave, you might want to try the following patch and see if, with it, both Windows 
and other OSes produce exactly the same gzip output:

https://github.com/couchbase/couchdb/commit/a7099de2192e428558645a05188f279691a2ea0c

At the moment I don't have a Windows box to test it.

 Extend attachment etag checking for compressible data types in test suite 
 --

 Key: COUCHDB-1379
 URL: https://issues.apache.org/jira/browse/COUCHDB-1379
 Project: CouchDB
  Issue Type: Test
  Components: Database Core
Affects Versions: 1.2
Reporter: Dave Cottlehuber
Assignee: Dave Cottlehuber
Priority: Trivial
  Labels: testsuite
 Fix For: 1.2.1


 Ref COUCHDB-1337 and subsequent commit 8d83b3 on the 1.2.0/1.2.x branch. Etag 
 testing was extended to validate the digest returned by CouchDB for attachments.
 Compressed attachments do not produce a consistent digest across platforms, due
 to differing compression algorithms between Mac/Linux and Windows.
 The test suite should confirm that etags only change when the attachment is 
 updated, and are otherwise consistent.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1380) logrotate doesn't work correctly with couchdb 1.2.x

2012-01-17 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13187705#comment-13187705
 ] 

Filipe Manana commented on COUCHDB-1380:


Agree with Robert's analysis. Indeed it seems disk_log does pwrites and fseeks 
(file:position/2) in a few places. This partly explains why disk_log offers log 
rotation features.
Other than reverting the commit which added disk_log to couch_log, I don't see 
a better short-term solution.

 logrotate doesn't work correctly with couchdb 1.2.x
 ---

 Key: COUCHDB-1380
 URL: https://issues.apache.org/jira/browse/COUCHDB-1380
 Project: CouchDB
  Issue Type: Bug
Affects Versions: 1.2
 Environment: CentOS 5.6 x64, couchdb 1.2.x (13th Jan 2012 - 
 1.2.0a-08d8f89-git), logrotate 3.7.4
Reporter: Alex Markham
Priority: Blocker
  Labels: logging, logrotate

 Running logrotate -f with couchdb 1.2.x leaves null data at the start of the 
 couch.log file - I'm guessing equal in size to the data that should have been 
 removed and rotated into the log.1 file (e.g. head -c 10 couch.log is 
 blank)
 This does not happen on couchdb 1.1.1, 1.0.2 or 1.0.3
 The log files then stay large, and when trying to grep or less them, they are 
 reported as binary.
 This seems to have happened to another user, but no details of OS or version 
 were reported: http://comments.gmane.org/gmane.comp.db.couchdb.user/16049 
 The logrotate config used is very similar to the one installed with couchdb -
 /var/log/couchdb/*.log {
size=150M
rotate 5
copytruncate
compress
delaycompress
notifempty
missingok
 }
 Have there been any changes to the interaction with log files/file handles 
 since 1.1.1? Does couchdb need to receive a SIGHUP? Or can anyone reproduce 
 this?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1327) CLONE - remote to local replication fails when using a proxy

2012-01-17 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13187955#comment-13187955
 ] 

Filipe Manana commented on COUCHDB-1327:


Likely fixed by COUCHDB-1340.
Any chance you can try it?

 CLONE - remote to local replication fails when using a proxy
 

 Key: COUCHDB-1327
 URL: https://issues.apache.org/jira/browse/COUCHDB-1327
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.1.2
Reporter: Eliseo Soto

 The following is failing for me:
 curl -X POST -H "Content-Type: application/json" 
 http://localhost:5984/_replicate -d 
 '{"source": "http://isaacs.iriscouch.com/registry/", "target": "registry", 
 "proxy": "http://wwwgate0.myproxy.com:1080"}'
 This is the error:
 {"error":"json_encode","reason":"{bad_term,{nocatch,{invalid_json,
 I have no clue about what's wrong; I can curl 
 http://isaacs.iriscouch.com/registry/ directly and it works.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1238) CouchDB uses _users db for storing oauth credentials

2012-01-05 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13180698#comment-13180698
 ] 

Filipe Manana commented on COUCHDB-1238:


Applied to master and branch 1.2.x (revisions 
d01faab3f464ff0806f4ad9f4166ca7a498a4866 and 
e768eb2d504824fa209cc19330850dd2244e541b).

Some documentation is in the commit message, and more will soon be added to the 
wiki.

 CouchDB uses _users db for storing oauth credentials
 

 Key: COUCHDB-1238
 URL: https://issues.apache.org/jira/browse/COUCHDB-1238
 Project: CouchDB
  Issue Type: New Feature
  Components: Database Core
Affects Versions: 1.1
Reporter: Pete Vander Giessen
Assignee: Filipe Manana
 Fix For: 1.2

 Attachments: git_commits_as_patch.zip, oauth_users_db_patch.zip


 We want to store oauth credentials in the _users db, rather than in the .ini. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1365) Fix merging of document with attachment stubs

2011-12-16 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13171032#comment-13171032
 ] 

Filipe Manana commented on COUCHDB-1365:


Paul, it makes the merging throw a missing_stubs error instead of crashing with 
a function_clause like this:

{error:{worker_died,0.4681.0,\n {function_clause,\n 
[{couch_doc,merge_stubs,\n [{doc,\document\,\n {12,\n 
[173,196,77,158,203,221,181,67,212,246,74,43,219,237,94,\n 140,\n 
72,74,171,180,242,101,33,172,166,195,136,125,231,134,65,\n 37,\n 
83,182,225,95,63,211,178,52,96,92,102,114,130,224,138,145,\n 
95,234,29,149,2,72,24,226,188,255,99,148,120,126,85,103,\n 
87,169,231,120,244,95,82,97,203,2,50,37,173,50,61,2,\n 
215,75,114,100,77,137,213,77,75,174,41,30,112,205,156,95,\n 
65,124,138,146,230,4,155,92,52,14,189,152,167,30,231,96,\n 
68,16,172,15,172,224,254,189,163,203,141,251,172,187,194,\n 27,\n 
34,8,58,215,190,139,123,102,40,55,20,84,179,249,85,8,\n 
29,39,81,231,191,249,211,32,50,145,38,97,78,100,102,188,\n 
88,5,136,102,228,34,222,205,62,252,107,44,97,23,101,81,\n 
222,52,110,92,212,100,161,137,190,163,109,245,255,233,\n 85,71]},\n 
131,104,1,108,0,0,0,1,104,2,109,0,0,0,4,110,97,109,101,109,\n 
0,0,0,6,99,98,109,97,50,50,106,\n 
[{att,\./reproduce-CBMA-22.sh\,\text/plain\,41,21,\n 
47,253,88,99,174,109,103,33,178,178,139,137,178,159,\n 243,29,\n 
2,stub,gzip}],\n false,[]},\n nil]},\n 
{couch_db,'-prep_and_validate_replicated_updates/5-fun-5-',4},\n 
{lists,foldl,3},\n {couch_db,prep_and_validate_replicated_updates,5},\n 
{couch_db,update_docs,4},\n {couch_db,update_doc,4},\n 
{couch_replicator_worker,flush_doc,2},\n 
{couch_replicator_worker,local_doc_handler,2}]}}}

When the replicator catches a missing_stubs error, it retries replicating the 
document but without incremental attachment replication (sends all attachments, 
just like in 1.1 and below).
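
Roughly, the retry path looks like this (an illustrative sketch only, not the
actual couch_replicator_worker code; flush_doc/2 and read_full_atts/1 are
hypothetical helpers standing in for the real send and re-fetch steps):

flush_doc_with_retry(Target, Doc) ->
    case flush_doc(Target, Doc) of
    {missing_stubs, _MissingRevs} ->
        %% the target rejected some attachment stubs; resend with full bodies
        flush_doc(Target, read_full_atts(Doc));
    Other ->
        Other
    end.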

 Fix merging of document with attachment stubs
 -

 Key: COUCHDB-1365
 URL: https://issues.apache.org/jira/browse/COUCHDB-1365
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core, Replication
Reporter: Filipe Manana
Priority: Blocker
 Fix For: 1.2, 1.1.2

 Attachments: 
 0001-Fix-merging-of-documents-with-attachment-stubs.patch, 
 reproduce-CBMA-22.sh


 This issue was found by Marty Schoch and is reproducible with the following 
 attached script.
 The commit message in the patch explains the issue:
 During replicated updates, merging of documents with
 attachment stubs will fail if, after merging the received
 document's revision tree with the current on disk revision
 tree, the result is a revision tree which doesn't contain the revision
 that immediately precedes the received document's revision.
 This happens when the received document doesn't contain in its
 revision history any of the revisions in the revision tree
 of the currently on disk document. This is possible when the
 document had a number of updates higher than the value of revs
 limit defined for the source database.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1365) Fix merging of document with attachment stubs

2011-12-16 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13171042#comment-13171042
 ] 

Filipe Manana commented on COUCHDB-1365:


Actually the commit message has a small mistake:

This is possible when the
document had a number of updates higher than the value of revs
limit defined for the source database.

Here I meant to say target database, not source.

 Fix merging of document with attachment stubs
 -

 Key: COUCHDB-1365
 URL: https://issues.apache.org/jira/browse/COUCHDB-1365
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core, Replication
Reporter: Filipe Manana
Priority: Blocker
 Fix For: 1.2, 1.1.2

 Attachments: 
 0001-Fix-merging-of-documents-with-attachment-stubs.patch, 
 0001-Fix-merging-of-documents-with-attachment-stubs.patch, 
 reproduce-CBMA-22.sh


 This issue was found by Marty Schoch and is reproducible with the following 
 attached script.
 The commit message in the patch explains the issue:
 During replicated updates, merging of documents with
 attachment stubs will fail if, after merging the received
 document's revision tree with the current on disk revision
 tree, the result is a revision tree which doesn't contain the revision
 that immediately precedes the received document's revision.
 This happens when the received document doesn't contain in its
 revision history any of the revisions in the revision tree
 of the currently on disk document. This is possible when the
 document had a number of updates higher than the value of revs
 limit defined for the source database.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1364) Replication hanging/failing on docs with lots of revisions

2011-12-15 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13170150#comment-13170150
 ] 

Filipe Manana commented on COUCHDB-1364:


Hi Alex.
For the push replication case, right before the error, was the local source 
database compacted?

 Replication hanging/failing on docs with lots of revisions
 --

 Key: COUCHDB-1364
 URL: https://issues.apache.org/jira/browse/COUCHDB-1364
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.0.3, 1.1.1
 Environment: Centos 5.6/x64 spidermonkey 1.8.5, couchdb 1.1.1 patched 
 for COUCHDB-1340 and COUCHDB-1333
Reporter: Alex Markham
  Labels: open_revs, replication
 Attachments: replication error changes_loop died redacted.txt


 We have a setup where replication from a 1.1.1 couch is hanging - this is WAN 
 replication which previously worked 1.0.3 -> 1.0.3.
 Replicating from the 1.1.1 -> 1.0.3 showed an error very similar to 
 COUCHDB-1340 - which I presumed meant the url was too long. So I upgraded the 
 1.0.3 couch to our 1.1.1 build which had this patched.
 However - the replication between the 2 1.1.1 couches is hanging at a certain 
 point when doing continuous pull replication - it doesn't checkpoint, just 
 stays on "starting"; however, when cancelled and restarted it gets the latest 
 documents (so doc counts are equal). The last calls I see to the source db 
 when it hangs are multiple long GETs for a document with 2051 open revisions 
 on the source and 498 on the target.
 When doing a push replication the _replicate call just gives a 500 error (at 
 about the same seq id as the pull replication hangs at) saying:
 [Thu, 15 Dec 2011 10:09:17 GMT] [error] [0.11306.115] changes_loop died 
 with reason {noproc,
{gen_server,call,
 [0.6382.115,
  {pread_iolist,
   79043596434},
  infinity]}}
 when the last call in the target of the push replication is:
 [Thu, 15 Dec 2011 10:09:17 GMT] [info] [0.580.50] 10.35.9.79 - - 'POST' 
 /master_db/_missing_revs 200
 with no stack trace.
 Comparing the open_revs=all count on the documents with many open revs shows 
 differing numbers on each side of the replication WAN and between different 
 couches in the same datacentre. Some of these documents have not been updated 
 for months. Is it possible that 1.0.3 just skipped over this issue and 
 carried on replicating, but 1.1.1 does not?
 I know I can hack the replication to work by updating the checkpoint seq past 
 this point in the _local document, but I think there is a real bug here 
 somewhere.
 If wireshark/debug data is required, please say

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1363) Race condition edge case when pulling local changes

2011-12-15 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13170157#comment-13170157
 ] 

Filipe Manana commented on COUCHDB-1363:


Hi Randall,

This would only minimize the problem, as right after re-opening the database 
some changes can happen.

As for the replicator_db.js test, the only case I can see this happening is 
when triggering a non-continuous replication, immediately after add some docs 
to the source database and then assert the docs were written to the target. I 
think this would be more a problem of the test than anything else. I don't 
recall if there's any test function which does that in replicator_db.js. Is 
there any? Which one did you find?

Also the following line you changed is a bit dangerous:

-fun({_, DbName}) when DbName == Db#db.name ->
+fun({_, DbName}) ->

The DbName inside the fun is shadowing the DbName in the handle_changes clause. 
This means you'll accept updates for any database.
The compiler should give you a warning about this.
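
A quick illustration of the shadowing pitfall (a hedged sketch, not the patch
itself; assume DbName is already bound in the enclosing function clause):

filter_updates(DbName) ->
    %% The fun-head DbName below is a *new* variable that shadows the outer
    %% binding (this is what the compiler warns about), so it matches events
    %% for any database:
    AnyDb = fun({updated, DbName}) -> {accept, DbName} end,
    %% Comparing against the outer binding in a guard keeps only this database:
    OnlyThisDb = fun({updated, Name}) when Name =:= DbName -> accept end,
    {AnyDb, OnlyThisDb}.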

I would also prefer you update the commit's title because this is not 
replication specific, but rather couch_changes specific.

I'm mostly convinced it's a test issue anyway.

 Race condition edge case when pulling local changes
 ---

 Key: COUCHDB-1363
 URL: https://issues.apache.org/jira/browse/COUCHDB-1363
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.0.3, 1.1.1
Reporter: Randall Leeds
Assignee: Filipe Manana
Priority: Minor
 Fix For: 1.2, 1.3

 Attachments: 0001-Fix-a-race-condition-starting-replications.patch


 It's necessary to re-open the #db after subscribing to notifications so that 
 updates are not lost. In practice, this is rarely problematic because the 
 next change will cause everything to catch up, but if a quick burst of 
 changes happens while replication is starting, the replication can go stale. 
 Detected by intermittent replicator_db js test failures.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1364) Replication hanging/failing on docs with lots of revisions

2011-12-15 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13170262#comment-13170262
 ] 

Filipe Manana commented on COUCHDB-1364:


Alex, the patch is for the side doing the push replication. It's only meant to 
fix the following stack trace you pasted:

[Thu, 15 Dec 2011 10:09:17 GMT] [error] [0.11306.115] changes_loop died with 
reason {noproc, 
   {gen_server,call, 
[0.6382.115, 
 {pread_iolist, 
  79043596434}, 
 infinity]}} 

Your error with the _ensure_full_commit seems to be because that http request 
failed 10 times. At that point the replication process crashes.
Before the first retry it waits 0.5 seconds, before the 2nd retry it waits 1 
second, etc (it always doubles). So it takes about 8.5 minutes before it 
crashes.
Maybe your network is too busy.
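
The retry behaviour is roughly this (a back-of-the-envelope sketch, not the
replicator's actual code): ten doubling waits starting at 500 ms add up to
500 * (2^10 - 1) ms = 511500 ms, i.e. about 8.5 minutes.

with_retries(Fun) ->
    with_retries(Fun, 10, 500).

with_retries(Fun, 0, _Wait) ->
    Fun();                                  % last attempt, let the error propagate
with_retries(Fun, RetriesLeft, Wait) ->
    try
        Fun()
    catch
        _:_ ->
            timer:sleep(Wait),
            with_retries(Fun, RetriesLeft - 1, Wait * 2)
    end.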

 Replication hanging/failing on docs with lots of revisions
 --

 Key: COUCHDB-1364
 URL: https://issues.apache.org/jira/browse/COUCHDB-1364
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.0.3, 1.1.1
 Environment: Centos 5.6/x64 spidermonkey 1.8.5, couchdb 1.1.1 patched 
 for COUCHDB-1340 and COUCHDB-1333
Reporter: Alex Markham
  Labels: open_revs, replication
 Attachments: COUCHDB-1364-11x.patch, do_checkpoint error push.txt, 
 replication error changes_loop died redacted.txt


 We have a setup where replication from a 1.1.1 couch is hanging - this is WAN 
 replication which previously worked 1.0.3 -> 1.0.3.
 Replicating from the 1.1.1 -> 1.0.3 showed an error very similar to 
 COUCHDB-1340 - which I presumed meant the url was too long. So I upgraded the 
 1.0.3 couch to our 1.1.1 build which had this patched.
 However - the replication between the 2 1.1.1 couches is hanging at a certain 
 point when doing continuous pull replication - it doesn't checkpoint, just 
 stays on "starting"; however, when cancelled and restarted it gets the latest 
 documents (so doc counts are equal). The last calls I see to the source db 
 when it hangs are multiple long GETs for a document with 2051 open revisions 
 on the source and 498 on the target.
 When doing a push replication the _replicate call just gives a 500 error (at 
 about the same seq id as the pull replication hangs at) saying:
 [Thu, 15 Dec 2011 10:09:17 GMT] [error] [0.11306.115] changes_loop died 
 with reason {noproc,
{gen_server,call,
 [0.6382.115,
  {pread_iolist,
   79043596434},
  infinity]}}
 when the last call in the target of the push replication is:
 [Thu, 15 Dec 2011 10:09:17 GMT] [info] [0.580.50] 10.35.9.79 - - 'POST' 
 /master_db/_missing_revs 200
 with no stack trace.
 Comparing the open_revs=all count on the documents with many open revs shows 
 differing numbers on each side of the replication WAN and between different 
 couches in the same datacentre. Some of these documents have not been updated 
 for months. Is it possible that 1.0.3 just skipped over this issue and 
 carried on replicating, but 1.1.1 does not?
 I know I can hack the replication to work by updating the checkpoint seq past 
 this point in the _local document, but I think there is a real bug here 
 somewhere.
 If wireshark/debug data is required, please say

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1363) callback invocation for docs added during couch_changes startup can be delayed by race condition

2011-12-15 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13170574#comment-13170574
 ] 

Filipe Manana commented on COUCHDB-1363:


Go ahead Randall, it's a genuine issue. +1
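
For anyone following along, the ordering being fixed is roughly this (function
names and messages are illustrative, not the actual patch):

watch_db(DbName) ->
    Self = self(),
    %% 1. subscribe to update notifications first ...
    {ok, _Notifier} = couch_db_update_notifier:start_link(
        fun({updated, Name}) when Name =:= DbName -> Self ! db_updated;
           (_Event) -> ok
        end),
    %% 2. ... then (re)open the database, so the header we read already covers
    %%    any update whose notification fired before the subscription existed.
    couch_db:open_int(DbName, []).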

 callback invocation for docs added during couch_changes startup can be 
 delayed by race condition
 

 Key: COUCHDB-1363
 URL: https://issues.apache.org/jira/browse/COUCHDB-1363
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.0.3, 1.1.1
Reporter: Randall Leeds
Assignee: Filipe Manana
Priority: Minor
 Fix For: 1.2, 1.3

 Attachments: 0001-Fix-a-race-condition-starting-replications.patch


 After subscribing to notifications it's necessary to re-open the #db so 
 that the header points at all updates for which the updater notifier has 
 already fired events. In practice, this is rarely problematic because the 
 next change will cause everything to catch up, but if a quick burst of 
 changes happens while, e.g., replication is starting, the replication can go 
 stale. Detected by intermittent replicator_db js test failures.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1320) OAuth authentication doesn't work with VHost entry

2011-12-10 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13166972#comment-13166972
 ] 

Filipe Manana commented on COUCHDB-1320:


 Why x-couchdb-vhost-path couldn't have been used for the oauth calculation ?

Not understanding your question. The changes I made to couch_httpd_oauth.erl 
make use of the header x-couchdb-vhost-path to compute the OAuth signature.

nm. I am just confused by the user_ctx thing I think. Sounds
really overkill.

Overkill in which sense?

What's important is passing a user_ctx to the 2nd (post rewrite resolution) 
couch_httpd:handle_request_int call, so that it doesn't run all the auth 
handlers again. About using the process dictionary versus a new 
couch_httpd:handle_request_int function with an extra argument (UserCtx), I 
don't see either of them as overkill compared to the other.
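
For illustration only (this is not the actual couch_httpd code; serve/2 and
run_auth_handlers/1 are hypothetical helpers), whichever way the user_ctx
travels, the second dispatch just needs to skip the auth handlers when a
context is already known:

handle_request_int(MochiReq, undefined) ->
    %% first dispatch: no context yet, run the configured auth handlers
    serve(MochiReq, run_auth_handlers(MochiReq));
handle_request_int(MochiReq, UserCtx) ->
    %% post-rewrite dispatch: reuse the context established before the rewrite
    serve(MochiReq, UserCtx).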



 OAuth authentication doesn't work with VHost entry
 --

 Key: COUCHDB-1320
 URL: https://issues.apache.org/jira/browse/COUCHDB-1320
 Project: CouchDB
  Issue Type: Bug
  Components: HTTP Interface
Affects Versions: 1.1
 Environment: Ubuntu
Reporter: Martin Higham
Assignee: Filipe Manana
 Fix For: 1.2

 Attachments: Fix-OAuth-that-broke-with-vhost.patch, 
 fdmanana-0001-Fix-OAuth-authentication-with-VHosts-URL-rewriting.patch


 If you have a vhost entry that modifies the path (such as my host.com = 
 /mainDB/_design/main/_rewrite ) trying to authenticate a request to this host 
 using OAuth fails.
 couch_httpd_oauth uses the modified path rather than the original 
 x-couchdb-vhost-path when calculating the signature.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1357) Authentication failure after updating password in user document

2011-12-10 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13166981#comment-13166981
 ] 

Filipe Manana commented on COUCHDB-1357:


Pete, your scenario makes sense. It will cause all database processes to be 
killed (couch_server dies).
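
The gist of the attached patch is roughly the following (an illustrative
sketch, not the actual auth cache module; it assumes couch_db.hrl for the #db
record): monitor the _users database process and crash when it dies, so the
supervisor restarts the cache daemon with a fresh, consistent credentials cache.

init(_) ->
    {ok, Db} = couch_db:open_int(<<"_users">>, []),
    erlang:monitor(process, Db#db.main_pid),
    {ok, Db}.

handle_info({'DOWN', _MonitorRef, process, _Pid, Reason}, Db) ->
    %% the _users database process died; stop and let the supervisor restart us
    {stop, {users_db_down, Reason}, Db}.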

 Authentication failure after updating password in user document
 ---

 Key: COUCHDB-1357
 URL: https://issues.apache.org/jira/browse/COUCHDB-1357
 Project: CouchDB
  Issue Type: Bug
Affects Versions: 1.1.1
Reporter: Filipe Manana
 Attachments: 
 0001-Let-the-credentials-cache-daemon-crash-if-_users-db-.patch


 From the report at the users mailing list:
 http://s.apache.org/9OG
 Seems like after updating the password in a user doc, the user is not able to 
 login with the new password unless Couch is restarted. Sounds like a caching 
 issue.
 The only case of getting the cache consistent with the _users database 
 content is if the _users database processes crash and after the crash user 
 documents are updated. The cache daemon is ignoring the database crash.
 The following patch updates the daemon to monitor the _users database and 
 crash (letting the supervisor restart it) if the database process crashes.
 Etap test included.
 This might be related to COUCHDB-1212.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1357) Authentication failure after updating password in user document

2011-12-07 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13164343#comment-13164343
 ] 

Filipe Manana commented on COUCHDB-1357:


Pete, I'm not sure this issue relates directly to your issue.
From the error {error,system_limit, it seems your system reached the maximum 
limit of available file descriptors. I have no idea how to increase this limit 
on Windows, perhaps Dave can help here.

 Authentication failure after updating password in user document
 ---

 Key: COUCHDB-1357
 URL: https://issues.apache.org/jira/browse/COUCHDB-1357
 Project: CouchDB
  Issue Type: Bug
Affects Versions: 1.1.1
Reporter: Filipe Manana
 Attachments: 
 0001-Let-the-credentials-cache-daemon-crash-if-_users-db-.patch


 From the report at the users mailing list:
 http://s.apache.org/9OG
 Seems like after updating the password in a user doc, the user is not able to 
 login with the new password unless Couch is restarted. Sounds like a caching 
 issue.
 The only case of getting the cache consistent with the _users database 
 content is if the _users database processes crash and after the crash user 
 documents are updated. The cache daemon is ignoring the database crash.
 The following patch updates the daemon to monitor the _users database and 
 crash (letting the supervisor restart it) if the database process crashes.
 Etap test included.
 This might be related to COUCHDB-1212.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1357) Authentication failure after updating password in user document

2011-12-07 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13164378#comment-13164378
 ] 

Filipe Manana commented on COUCHDB-1357:


Pete, there's this Erlang thread ("Increase the handle limit") that Dave 
participated in which covers your issue:
http://erlang.org/pipermail/erlang-questions/2011-October/thread.html#61859

 Authentication failure after updating password in user document
 ---

 Key: COUCHDB-1357
 URL: https://issues.apache.org/jira/browse/COUCHDB-1357
 Project: CouchDB
  Issue Type: Bug
Affects Versions: 1.1.1
Reporter: Filipe Manana
 Attachments: 
 0001-Let-the-credentials-cache-daemon-crash-if-_users-db-.patch


 From the report at the users mailing list:
 http://s.apache.org/9OG
 Seems like after updating the password in a user doc, the user is not able to 
 login with the new password unless Couch is restarted. Sounds like a caching 
 issue.
 The only case of getting the cache consistent with the _users database 
 content is if the _users database processes crash and after the crash user 
 documents are updated. The cache daemon is ignoring the database crash.
 The following patch updates the daemon to monitor the _users database and 
 crash (letting the supervisor restart it) if the database process crashes.
 Etap test included.
 This might be related to COUCHDB-1212.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1323) couch_replicator

2011-12-04 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13162504#comment-13162504
 ] 

Filipe Manana commented on COUCHDB-1323:


Benoît, just tried out your github branch. All tests pass for me and it looks 
ok, so +1

Looking at .gitignore changes, I see the following line added and I'm curious 
about it.

+*.sw*

What are the *.sw* files?

 couch_replicator
 

 Key: COUCHDB-1323
 URL: https://issues.apache.org/jira/browse/COUCHDB-1323
 Project: CouchDB
  Issue Type: Improvement
  Components: Replication
Affects Versions: 2.0, 1.3
Reporter: Benoit Chesneau
  Labels: replication
 Attachments: 0001-create-couch_replicator-application.patch, 
 0001-move-couchdb-replication-to-a-standalone-application.patch, 
 0001-move-couchdb-replication-to-a-standalone-application.patch, 
 0001-move-couchdb-replication-to-a-standalone-application.patch, 
 0001-move-couchdb-replication-to-a-standalone-application.patch, 
 0002-refactor-couch_replicator.patch, 0002-refactor-couch_replicator.patch


 patch to move couch replication modules in in a standalone couch_replicator 
 application. It's also available on my github:
 https://github.com/benoitc/couchdb/compare/master...couch_replicator

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1212) Newly created user accounts cannot sign-in after _user database crashes

2011-12-01 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13160937#comment-13160937
 ] 

Filipe Manana commented on COUCHDB-1212:


Benoit, the point I'm trying to make is that a call, with a non-infinity 
timeout, to an idle gen_server can still time out if the system is under heavy 
load. So I don't get what you're referring to (which task?) with "because this 
task shouldn't be infinite".
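
A small illustration of the point (sketch only): a call with a finite timeout
can blow up under heavy load even though the server is perfectly healthy,
simply because the reply isn't scheduled in time.

call_with_timeout(Server, Msg) ->
    try
        gen_server:call(Server, Msg, 5000)
    catch
        exit:{timeout, _Where} ->
            {error, timeout}        % spurious failure under load
    end.

gen_server:call(Server, Msg, infinity) never gives up, which is why infinity is
usually the safer choice for internal calls that must not fail spuriously.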

 Newly created user accounts cannot sign-in after _user database crashes 
 

 Key: COUCHDB-1212
 URL: https://issues.apache.org/jira/browse/COUCHDB-1212
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core, HTTP Interface
Affects Versions: 1.0.2
 Environment: Ubuntu 10.10, Erlang R14B02 (erts-5.8.3)
Reporter: Jan van den Berg
Priority: Critical
  Labels: _users, authentication
 Attachments: couchdb-1212.patch


 We have one (4,5 GB) couch database and we use the (default) _users database 
 to store user accounts for a website. Once a week we need to restart couchdb 
 because newly sign-up user accounts cannot login any more. They get a HTTP 
 statuscode 401 from the _session HTTP interface. We update, and compact the 
 database three times a day.
 This is the a stacktrace I see in the couch database log prior to when these 
 issues occur.
 --- couch.log ---
 [Wed, 29 Jun 2011 22:02:46 GMT] [info] [0.117.0] Starting compaction for db 
 fbm
 [Wed, 29 Jun 2011 22:02:46 GMT] [info] [0.5753.79] 127.0.0.1 - - 'POST' 
 /fbm/_compact 202
 [Wed, 29 Jun 2011 22:02:46 GMT] [info] [0.5770.79] 127.0.0.1 - - 'POST' 
 /fbm/_view_cleanup 202
 [Wed, 29 Jun 2011 22:10:19 GMT] [info] [0.5773.79] 86.9.246.184 - - 'GET' 
 /_session 200
 [Wed, 29 Jun 2011 22:24:39 GMT] [info] [0.6236.79] 85.28.105.161 - - 'GET' 
 /_session 200
 [Wed, 29 Jun 2011 22:25:06 GMT] [error] [0.84.0] ** Generic server 
 couch_server terminating 
 ** Last message in was {open,fbm,
  [{user_ctx,{user_ctx,null,[],undefined}}]}
 ** When Server state == {server,/opt/couchbase-server/var/lib/couchdb,
 {re_pattern,0,0,
 69,82,67,80,116,0,0,0,16,0,0,0,1,0,0,0,0,0,
   0,0,0,0,0,0,40,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
   0,93,0,72,25,77,0,0,0,0,0,0,0,0,0,0,0,0,254,
   255,255,7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
   77,0,0,0,0,16,171,255,3,0,0,0,128,254,255,
   255,7,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,69,26,
   84,0,72,0},
 100,2,Sat, 18 Jun 2011 14:00:44 GMT}
 ** Reason for termination == 
 ** {timeout,{gen_server,call,[0.116.0,{open_ref_count,0.10417.79}]}}
 [Wed, 29 Jun 2011 22:25:06 GMT] [error] [0.84.0] {error_report,0.31.0,
 {0.84.0,crash_report,
  [[{initial_call,{couch_server,init,['Argument__1']}},
{pid,0.84.0},
{registered_name,couch_server},
{error_info,
{exit,
{timeout,
{gen_server,call,
[0.116.0,{open_ref_count,0.10417.79}]}},
[{gen_server,terminate,6},{proc_lib,init_p_do_apply,3}]}},
{ancestors,[couch_primary_services,couch_server_sup,0.32.0]},
{messages,[]},
{links,[0.91.0,0.483.0,0.116.0,0.79.0]},
{dictionary,[]},
{trap_exit,true},
{status,running},
{heap_size,6765},
{stack_size,24},
{reductions,206710598}],
   []]}}
 [Wed, 29 Jun 2011 22:25:06 GMT] [error] [0.79.0] {error_report,0.31.0,
 {0.79.0,supervisor_report,
  [{supervisor,{local,couch_primary_services}},
   {errorContext,child_terminated},
   {reason,
   {timeout,
   {gen_server,call,[0.116.0,{open_ref_count,0.10417.79}]}}},
   {offender,
   [{pid,0.84.0},
{name,couch_server},
{mfargs,{couch_server,sup_start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]}]}}
 [Wed, 29 Jun 2011 22:25:06 GMT] [error] [0.91.0] ** Generic server 0.91.0 
 terminating 
 ** Last message in was {'EXIT',0.84.0,
{timeout,
{gen_server,call,
[0.116.0,
 {open_ref_count,0.10417.79}]}}}
 ** When Server state == {db,0.91.0,0.92.0,nil,1308405644393791,
 0.90.0,0.94.0,
 {db_header,5,91,0,
 {378285,{30,9}},
 

[jira] [Commented] (COUCHDB-1349) _replicator docs should include the additional info from _active_tasks

2011-11-29 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13159203#comment-13159203
 ] 

Filipe Manana commented on COUCHDB-1349:


Sounds good Rogutės.
All the necessary infrastructure is already there.

 _replicator docs should include the additional info from _active_tasks
 --

 Key: COUCHDB-1349
 URL: https://issues.apache.org/jira/browse/COUCHDB-1349
 Project: CouchDB
  Issue Type: Improvement
  Components: Replication
Reporter: Rogutės Sparnuotos
Priority: Minor

 There are some nice replication stats at /_active_tasks. I think that these 
 should be exposed in the corresponding /_replicator documents (well, at least 
 the first 3):
 {
    "doc_write_failures": 0,
    "docs_read": 0,
    "docs_written": 0,
    "updated_on": 1322521694,
    "started_on": 1322521569
 }
 This would make it easier to map a replication doc to its status.
 This would benefit Futon, which currently seems to have only a limited 
 interface to /_active_tasks.
 This would bring _replicator closer to the old _replicate API, which returns 
 the stats after one-time replication:
 {
   "ok": true,
   "no_changes": true,
   "session_id": "6647e26bc340b706bcf8f3c1ca709846",
   "source_last_seq": 95,
   "replication_id_version": 2,
   "history": [
     {
       "session_id": "6647e26bc340b706bcf8f3c1ca709846",
       "start_time": "Mon, 28 Nov 2011 23:44:28 GMT",
       "end_time": "Mon, 28 Nov 2011 23:44:33 GMT",
       "start_last_seq": 0,
       "end_last_seq": 95,
       "recorded_seq": 95,
       "missing_checked": 24,
       "missing_found": 24,
       "docs_read": 24,
       "docs_written": 24,
       "doc_write_failures": 0
     }
   ]
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1289) heartbeats skipped when continuous changes feed filter function produces no results

2011-11-28 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13158338#comment-13158338
 ] 

Filipe Manana commented on COUCHDB-1289:


Bob, +1.
You did most of the hard work, no need to set me as the author for one of them.
Let's push it to master and 1.2.x?

Thanks :)

 heartbeats skipped when continuous changes feed filter function produces no 
 results
 ---

 Key: COUCHDB-1289
 URL: https://issues.apache.org/jira/browse/COUCHDB-1289
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Reporter: Bob Dionne
Assignee: Bob Dionne
Priority: Minor
 Attachments: 0001-Ensure-heartbeats-are-not-skipped.patch, 
 0002-Failing-etap-for-heartbeats-skipped.patch


 if the changes feed has a filter function that produces no results, 
 db_updated messages will still be sent and the heartbeat timeout will never 
 be reached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1289) heartbeats skipped when continuous changes feed filter function produces no results

2011-11-28 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13158378#comment-13158378
 ] 

Filipe Manana commented on COUCHDB-1289:


(JIRA messed up the patch indentation)

 heartbeats skipped when continuous changes feed filter function produces no 
 results
 ---

 Key: COUCHDB-1289
 URL: https://issues.apache.org/jira/browse/COUCHDB-1289
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Reporter: Bob Dionne
Assignee: Bob Dionne
Priority: Minor
  Labels: patch
 Fix For: 1.2

 Attachments: 0001-Ensure-heartbeats-are-not-skipped.patch, 
 0002-Failing-etap-for-heartbeats-skipped.patch


 if the changes feed has a filter function that produces no results, 
 db_updated messages will still be sent and the heartbeat timeout will never 
 be reached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1289) heartbeats skipped when continuous changes feed filter function produces no results

2011-11-28 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13158376#comment-13158376
 ] 

Filipe Manana commented on COUCHDB-1289:


Bob, just caught another case where the default TimeoutFun will be called 
(which stops the streaming immediately):


From 7160021512161e3fa8f83165adc1a64285da6518 Mon Sep 17 00:00:00 2001
From: Filipe David Borba Manana fdman...@apache.org
Date: Mon, 28 Nov 2011 12:30:50 +
Subject: [PATCH 2/2] Make reset_heartbeat/0 a no-op

If no heartbeat option was given, the couch_changes heartbeat
function should really be a no-op, that is, to not set the
last_changes_heartbeat to the current time, otherwise the
default TimeoutFun (when no heartbeat option is given) might
be called which causes the changes feed to stop immediately
before folding the whole seq tree.
---
 src/couchdb/couch_changes.erl |7 ++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/src/couchdb/couch_changes.erl b/src/couchdb/couch_changes.erl
index f124567..72ee346 100644
--- a/src/couchdb/couch_changes.erl
+++ b/src/couchdb/couch_changes.erl
@@ -531,7 +531,12 @@ get_rest_db_updated(UserAcc) ->
     end.
 
 reset_heartbeat() ->
-    put(last_changes_heartbeat,now()).
+    case get(last_changes_heartbeat) of
+    undefined ->
+        ok;
+    _ ->
+        put(last_changes_heartbeat, now())
+    end.
 
 maybe_heartbeat(Timeout, TimeoutFun, Acc) ->
     Before = get(last_changes_heartbeat),
-- 
1.7.4.4

Do you agree?

 heartbeats skipped when continuous changes feed filter function produces no 
 results
 ---

 Key: COUCHDB-1289
 URL: https://issues.apache.org/jira/browse/COUCHDB-1289
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Reporter: Bob Dionne
Assignee: Bob Dionne
Priority: Minor
  Labels: patch
 Fix For: 1.2

 Attachments: 0001-Ensure-heartbeats-are-not-skipped.patch, 
 0002-Failing-etap-for-heartbeats-skipped.patch


 if the changes feed has a filter function that produces no results, 
 db_updated messages will still be sent and the heartbeat timeout will never 
 be reached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1289) heartbeats skipped when continuous changes feed filter function produces no results

2011-11-26 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13157464#comment-13157464
 ] 

Filipe Manana commented on COUCHDB-1289:


Looks good Bob, only a small regression compared to the previous versions, 
which is:

The last time a row was sent is never reset to now() once the filter function 
returns true. Basically the maybe_timeout(true, ...) function clause is never 
used in this new version.
This could cause unnecessary timeouts when the filter function returns false 
several times in a row, then returns true (1 or more times in a row) and then 
starts returning false again. Here's what I mean:

https://github.com/fdmanana/couchdb/commit/31c80e34875932969716791b4b8adf374d240821

A test for this case would be awesome :)

Thanks for working on this, this has been affecting many users for a long time 
(when doing filtered replications).

 heartbeats skipped when continuous changes feed filter function produces no 
 results
 ---

 Key: COUCHDB-1289
 URL: https://issues.apache.org/jira/browse/COUCHDB-1289
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Reporter: Bob Dionne
Assignee: Bob Dionne
Priority: Minor
 Attachments: 0001-Ensure-heartbeats-are-not-skipped.patch, 
 0002-Failing-etap-for-heartbeats-skipped.patch


 if the changes feed has a filter function that produces no results, 
 db_updated messages will still be sent and the heartbeat timeout will never 
 be reached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1320) OAuth authentication doesn't work with VHost entry

2011-11-25 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13157137#comment-13157137
 ] 

Filipe Manana commented on COUCHDB-1320:


Thanks Martin.
I'll give it a try soon.

 OAuth authentication doesn't work with VHost entry
 --

 Key: COUCHDB-1320
 URL: https://issues.apache.org/jira/browse/COUCHDB-1320
 Project: CouchDB
  Issue Type: Bug
  Components: HTTP Interface
Affects Versions: 1.1
 Environment: Ubuntu
Reporter: Martin Higham
Assignee: Filipe Manana
 Attachments: Fix-OAuth-that-broke-with-vhost.patch


 If you have a vhost entry that modifies the path (such as my host.com = 
 /mainDB/_design/main/_rewrite ) trying to authenticate a request to this host 
 using OAuth fails.
 couch_httpd_oauth uses the modified path rather than the original 
 x-couchdb-vhost-path when calculating the signature.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1289) heartbeats skipped when continuous changes feed filter function produces no results

2011-11-23 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13155811#comment-13155811
 ] 

Filipe Manana commented on COUCHDB-1289:


Bob, np.

Looking at the whole context, not just the diff, I think you're right. The 
patch addresses the case I mentioned before, since each tree fold has 
changes_sent set to false in its initial accumulator and the 
changes_enumerator callback sets it to true once a row is accepted by the 
filter. However, this is set only for the continuous clause of 
changes_enumerator. The other clause should set changes_sent to true according 
to the same condition.
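
In other words, something like this in both clauses (a hedged sketch; the
record and field names follow this discussion, not necessarily the exact
couch_changes source, and filter/2 stands in for the real filtering step):

changes_enumerator(DocInfo, Acc) ->
    case filter(DocInfo, Acc) of
    [] ->
        {ok, Acc};                                   % filtered out, nothing sent
    _Results ->
        {ok, Acc#changes_acc{changes_sent = true}}   % a row was emitted
    end.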

 heartbeats skipped when continuous changes feed filter function produces no 
 results
 ---

 Key: COUCHDB-1289
 URL: https://issues.apache.org/jira/browse/COUCHDB-1289
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Reporter: Bob Dionne
Assignee: Bob Dionne
Priority: Minor
 Attachments: 
 0001-Ensure-heartbeats-are-not-skipped-in-continuous-chan.patch


 if the changes feed has a filter function that produces no results, 
 db_updated messages will still be sent and the heartbeat timeout will never 
 be reached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1289) heartbeats skipped when continuous changes feed filter function produces no results

2011-11-22 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13155246#comment-13155246
 ] 

Filipe Manana commented on COUCHDB-1289:


Bob looks good.

Just have a few comments:

1) The indentation is a bit messed up, like the if expression's true clause 
and its inner case expression:  
https://github.com/bdionne/couchdb/compare/master...1289-heartbeats-skipped2#L0R398

2) This issue doesn't happen exclusively once the code finishes folding the seq 
tree and starts listening for db update events. Let's say we have a heartbeat 
timeout of 10 seconds, a seq tree with millions of entries, and a filter function 
which returns false for many consecutive entries. It's quite possible that 
before folding the entire seq tree snapshot and sending the first changes row 
which passes the filter (if any), the heartbeat timeout is exceeded. Makes sense?

3) The test can have way less duplicated code (spawn_consumer_heartbeat is 
basically a copy-paste of spawn_consumer). I made a few changes to it in my 
repo:   
https://github.com/fdmanana/couchdb/commit/2af5d56b909449569d60573f8118d7187e9334ca

thanks for working on this one



 heartbeats skipped when continuous changes feed filter function produces no 
 results
 ---

 Key: COUCHDB-1289
 URL: https://issues.apache.org/jira/browse/COUCHDB-1289
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Reporter: Bob Dionne
Assignee: Bob Dionne
Priority: Minor
 Attachments: 
 0001-Ensure-heartbeats-are-not-skipped-in-continuous-chan.patch


 if the changes feed has a filter function that produces no results, 
 db_updated messages will still be sent and the heartbeat timeout will never 
 be reached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1289) heartbeats skipped when continuous changes feed filter function produces no results

2011-11-22 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13155273#comment-13155273
 ] 

Filipe Manana commented on COUCHDB-1289:


Bob, for the indentation we have those rules in the wiki:  
http://wiki.apache.org/couchdb/Coding_Standards.
For example, the true at:  
https://github.com/bdionne/couchdb/compare/master...1289-heartbeats-skipped2#L0R401
is not aligned with the corresponding if (line 398), nor is it on a column 
that is a multiple of 4 spaces.

Yep, for 2) it's an extra scenario. Probably after passing a row to the filter 
function, if the filter returns false and the time difference between now() 
and the time the last row was sent is greater than the threshold, it would call 
the callback with timeout (so it sends the newline to http clients). If the 
filter returns true, set that last row sent time to now(). The only case 
where this wouldn't work is if the filter function takes heartbeat seconds to 
evaluate a changes row (possibly very unlikely).
This can probably be addressed separately, I have no objection about that.
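
Roughly what I have in mind (an illustrative sketch, not the actual
couch_changes code; the argument shapes are assumptions):

maybe_timeout(true, _Timeout, _TimeoutFun, UserAcc, _LastSent) ->
    {UserAcc, now()};                               % row sent, restart the clock
maybe_timeout(false, Timeout, TimeoutFun, UserAcc, LastSent) ->
    case timer:now_diff(now(), LastSent) div 1000 >= Timeout of
    true ->
        {_Go, UserAcc2} = TimeoutFun(UserAcc),      % e.g. send a newline to the client
        {UserAcc2, now()};
    false ->
        {UserAcc, LastSent}
    end.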

 heartbeats skipped when continuous changes feed filter function produces no 
 results
 ---

 Key: COUCHDB-1289
 URL: https://issues.apache.org/jira/browse/COUCHDB-1289
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Reporter: Bob Dionne
Assignee: Bob Dionne
Priority: Minor
 Attachments: 
 0001-Ensure-heartbeats-are-not-skipped-in-continuous-chan.patch


 if the changes feed has a filter function that produces no results, 
 db_updated messages will still be sent and the heartbeat timeout will never 
 be reached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1289) heartbeats skipped when continuous changes feed filter function produces no results

2011-11-22 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13155293#comment-13155293
 ] 

Filipe Manana commented on COUCHDB-1289:


Bob, the thing about the if is that the true clause is not on a column 
that is a multiple of 4 (it's on column 11), which violates the soft convention 
we have.
As for the cases, some indent the clauses to the case keyword, others indent 
them one more level (4 spaces). I think either is readable and I don't care much 
about it unless it starts to be unreadable (not on 4-space multiples, or a big 
mix of both styles in the same module/region).

 heartbeats skipped when continuous changes feed filter function produces no 
 results
 ---

 Key: COUCHDB-1289
 URL: https://issues.apache.org/jira/browse/COUCHDB-1289
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Reporter: Bob Dionne
Assignee: Bob Dionne
Priority: Minor
 Attachments: 
 0001-Ensure-heartbeats-are-not-skipped-in-continuous-chan.patch


 if the changes feed has a filter function that produces no results, 
 db_updated messages will still be sent and the heartbeat timeout will never 
 be reached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1344) add etap coverage for _changes with filters and since

2011-11-21 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13154109#comment-13154109
 ] 

Filipe Manana commented on COUCHDB-1344:


Thanks for the reminder Randall.
For the comparison thing, I couldn't get the JavaScript tests to fail either 
in the browser or via the CLI.
I added a few days ago an equivalent test to the etap test suite:

http://git-wip-us.apache.org/repos/asf?p=couchdb.git;a=commitdiff;h=c9a0d2af;hp=27ce101a3223e2c311b82654caed6363d1fd7288

It also covers deleted changes and the built-in _design filter.
Feel free to add any other tests you think that should be there.

 add etap coverage for _changes with filters and since
 -

 Key: COUCHDB-1344
 URL: https://issues.apache.org/jira/browse/COUCHDB-1344
 Project: CouchDB
  Issue Type: Improvement
  Components: Database Core
Affects Versions: 1.2
Reporter: Randall Leeds
Assignee: Randall Leeds
Priority: Minor
 Fix For: 1.2


 I described a problem with the js tests for _changes after new filtering 
 stuff:
It was the andmore-only section near the end:
T(line.seq == 8)
T(line.id == "andmore")
The problem is that seq _should_ be strictly greater than since.
If you look at couch_db:changes_since/5 and related functions, that
module adds one to the sequence number in order to get this behavior.
Only when using filters was this broken and became >=.
 Filipe said:
I've never had such failure locally. If it's due to browser caching,
it would be a good idea to add a test to test/etap/073-changes.t
 I said I'd add an etap test. This ticket is to remind me to do it:
I can add an etap test. I'm positive it's not browser caching because
I identified and fixed it. Also, because I just committed a change to
improve Futon's browser caching :).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-911) Concurrent updates to the same document generate erroneous conflict messages

2011-11-18 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13153014#comment-13153014
 ] 

Filipe Manana commented on COUCHDB-911:
---

Thanks Adam. My understanding from your previous comment was that if 2 clients 
try to update the same doc and the updater collects both updates, it would send 
a conflict error to both clients (while what happened was that one doc got 
persisted and the other didn't).

 Concurrent updates to the same document generate erroneous conflict messages
 

 Key: COUCHDB-911
 URL: https://issues.apache.org/jira/browse/COUCHDB-911
 Project: CouchDB
  Issue Type: Bug
  Components: HTTP Interface
Affects Versions: 1.0
 Environment: Cloudant BigCouch EC2 node
Reporter: Jay Nelson
Priority: Minor
 Fix For: 1.2

 Attachments: 
 0001-Add-test-test-etap-074-doc-update-conflicts.t.patch, 
 0001-Add-test-test-etap-074-doc-update-conflicts.t.patch, 
 0001-Fix-whitespace.patch, 
 0002-Failing-test-for-duplicates-in-bulk-docs.patch, 
 0003-Add-references-to-docs-to-prevent-dups-from-being-co.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 Repeating an _id in a _bulk_docs post data file results in both entries 
 being reported as document conflict errors.  The first occurrence actually 
 inserts into the database, and only the second occurrence should report a 
 conflict.
 curl -d '{ docs: [ {_id:foo}, {_id,foo} ] }' -H 
 'Content-Type:application/json' -X POST 
 http://appadvice.cloudant.com/foo/_bulk_docs
 [{id:foo,error:conflict,reason:Document update 
 conflict.},{id:foo,error:conflict,reason:Document update 
 conflict.}]
 But the database shows that one new document was actually inserted.
 Only the second occurrence should report conflict.  The first occurrence 
 should report the _rev property of the newly inserted doc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1340) Replication: Invalid JSON reported

2011-11-17 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13152065#comment-13152065
 ] 

Filipe Manana commented on COUCHDB-1340:


Alex, that's great news :)
I found that the problem is mochiweb (our http server) doesn't allow request 
lines to exceed 8192 characters. We were accounting for this, but ignoring the 
extra characters contributed by "GET " and the trailing " HTTP/1.1\r\n" in the 
first request line. So my final patch is now this:

http://friendpaste.com/1TWqKb1Ac2hmKYh7VgNMLO


Index: src/couchdb/couch_rep_reader.erl
===================================================================
--- src/couchdb/couch_rep_reader.erl    (revision 1177549)
+++ src/couchdb/couch_rep_reader.erl    (working copy)
@@ -177,7 +177,7 @@
     hd(State#state.opened_seqs).
 
 split_revlist(Rev, {[CurrentAcc|Rest], BaseLength, Length}) ->
-    case Length+size(Rev)+3 > 8192 of
+    case Length+size(Rev)+3 >= 8192 of
     false ->
         {[[Rev|CurrentAcc] | Rest], BaseLength, Length+size(Rev)+3};
     true ->
@@ -214,7 +214,9 @@
     %% MochiWeb into multiple requests
     BaseQS = [{revs,true}, {latest,true}, {att_encoding_info,true}],
     BaseReq = DbS#http_db{resource=encode_doc_id(DocId), qs=BaseQS},
-    BaseLength = length(couch_rep_httpc:full_url(BaseReq) ++ "&open_revs=[]"),
+    BaseLength = length(
+        "GET " ++ couch_rep_httpc:full_url(BaseReq) ++
+        "&open_revs=[]" ++ " HTTP/1.1\r\n"),
 
     {RevLists, _, _} = lists:foldl(fun split_revlist/2,
         {[[]], BaseLength, BaseLength}, couch_doc:revs_to_strs(Revs)),


Thanks for testing and helping debug it. 

 Replication: Invalid JSON reported
 --

 Key: COUCHDB-1340
 URL: https://issues.apache.org/jira/browse/COUCHDB-1340
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.1.1
 Environment: CentOS 5.6 x86_64, Couchdb 1.1.1 (Patched for 
 COUCHDB-1333), spidermonkey 1.8.5, curl 7.21, erlang 14b03
Reporter: Alex Markham
  Labels: invalid, json
 Attachments: 9c94ed0e23508f4ec3d18f8949c06a5b replicaton from 
 wireshark cut.txt, replication error wireshark.txt, source couch error.log, 
 target couch error.log


 It seems our replication has stopped, reporting an error
 [emulator] Error in process 0.21599.306 {{nocatch,{invalid_json,0 
 bytes}},[{couch_util,json_decode,1},{couch_rep_reader,'-open_doc_revs/3-lc$^1/1-1-',1},{couch_rep_reader,'-open_doc_revs/3-lc$^1/1-1-',1},{couch_rep_reader,open_doc_revs,3},{couch_rep_reader,'-spawn_document_request/4-fun-0-'...
  
 It was all working until we upgraded some other couches in our replication 
 web from couch 1.0.3 to couch 1.1.1. We then set off database and view 
 compactions, and sometime overnight some of the replication links stopped.
 I have curled the command myself, both as a multipart message and a single 
 json response (with header Accept:application/json ) and it can be parsed 
 correctly by Python simplejson - I have attached it here as well - called 
 troublecurl-redacted.txt - though it is 18.8mb. The request takes about 6 
 seconds.
 I don't quite understand why it is reported as invalid JSON? Other reports 
 similar to this that I googled mentioned blank document ids, but I can't see 
 any of these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1340) Replication: Invalid JSON reported

2011-11-17 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13152106#comment-13152106
 ] 

Filipe Manana commented on COUCHDB-1340:


Although not visible in your capture log, a possible mistake here is that the 
code is incrementing by +1 for the following characters: [, ], the comma and 
the double quote. Some of them must be percent encoded, and others, like the 
double quote, normally are 
(http://en.wikipedia.org/wiki/Percent-encoding).
We should increment each by +3 and not +1.
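
A quick illustrative check of the +3 (the mochiweb_util call assumes the
bundled MochiWeb is on the code path): a percent-encoded character takes three
bytes on the request line, e.g. "[" is sent as "%5B".

3 = length(mochiweb_util:quote_plus("[")).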

Alex, new patch (on top of the previous one):

http://friendpaste.com/3cPf7tFlNOeTZedlmgfDu2

 Replication: Invalid JSON reported
 --

 Key: COUCHDB-1340
 URL: https://issues.apache.org/jira/browse/COUCHDB-1340
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.1.1
 Environment: CentOS 5.6 x86_64, Couchdb 1.1.1 (Patched for 
 COUCHDB-1333), spidermonkey 1.8.5, curl 7.21, erlang 14b03
Reporter: Alex Markham
  Labels: invalid, json
 Fix For: 1.2, 1.1.2

 Attachments: 9c94ed0e23508f4ec3d18f8949c06a5b replicaton from 
 wireshark cut.txt, replication error wireshark.txt, source couch error.log, 
 target couch error.log


 It seems our replication has stopped, reporting an error
 [emulator] Error in process 0.21599.306 {{nocatch,{invalid_json,0 
 bytes}},[{couch_util,json_decode,1},{couch_rep_reader,'-open_doc_revs/3-lc$^1/1-1-',1},{couch_rep_reader,'-open_doc_revs/3-lc$^1/1-1-',1},{couch_rep_reader,open_doc_revs,3},{couch_rep_reader,'-spawn_document_request/4-fun-0-'...
  
 It was all working until we upgraded some other couches in our replication 
 web from couch 1.0.3 to couch 1.1.1. We then set off database and view 
 compactions, and sometime overnight some of the replication links stopped.
 I have curled the command myself, both as a multipart message and a single 
 json response (with header Accept:application/json ) and it can be parsed 
 correctly by Python simplejson - I have attached it here as well - called 
 troublecurl-redacted.txt - though it is 18.8mb. The request takes about 6 
 seconds.
 I don't quite understand why it is reported as invalid JSON? Other reports 
 similar to this that I googled mentioned blank document ids, but I can't see 
 any of these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1340) Replication: Invalid JSON reported

2011-11-16 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13151137#comment-13151137
 ] 

Filipe Manana commented on COUCHDB-1340:


Alex, the only cases I've seen this happening are due to the pipeline or 
documents in the source with an empty ID (or consisting only of white spaces). 
Or perhaps, there's some firewall in the middle interfering.
Also, for a quick test, do you think you can try pull replicating with the 
1.2.x branch? (it's not an official release, only for testing) That branch has 
a new replicator.

thanks

 Replication: Invalid JSON reported
 --

 Key: COUCHDB-1340
 URL: https://issues.apache.org/jira/browse/COUCHDB-1340
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.1.1
 Environment: CentOS 5.6 x86_64, Couchdb 1.1.1 (Patched for 
 COUCHDB-1333), spidermonkey 1.8.5, curl 7.21, erlang 14b03
Reporter: Alex Markham
  Labels: invalid, json
 Attachments: source couch error.log, target couch error.log


 It seems our replication has stopped, reporting an error
 [emulator] Error in process 0.21599.306 {{nocatch,{invalid_json,0 
 bytes}},[{couch_util,json_decode,1},{couch_rep_reader,'-open_doc_revs/3-lc$^1/1-1-',1},{couch_rep_reader,'-open_doc_revs/3-lc$^1/1-1-',1},{couch_rep_reader,open_doc_revs,3},{couch_rep_reader,'-spawn_document_request/4-fun-0-'...
  
 It was all working until we upgraded some other couches in our replication 
 web from couch 1.0.3 to couch 1.1.1. We then set off database and view 
 compactions, and sometime overnight some of the replication links stopped.
 I have curled the command myself, both as a multipart message and a single 
 json response (with header Accept:application/json ) and it can be parsed 
 correctly by Python simplejson - I have attached it here as well - called 
 troublecurl-redacted.txt - though it is 18.8mb. The request takes about 6 
 seconds.
 I don't quite understand why it is reported as invalid JSON? Other reports 
 similar to this that I googled mentioned blank document ids, but I can't see 
 any of these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1341) calculate replication id using only db name in remote case

2011-11-16 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13151140#comment-13151140
 ] 

Filipe Manana commented on COUCHDB-1341:


Hi Bob, that's definitely an issue.

However I think there are 2 issues with this approach of considering only 
the remote db name and excluding the server/port component of the URL.

Imagine that on a CouchDB instance you trigger these 2 replications:

(replication 1)
{
    "source": "http://server1.com/foo",
    "target": "bar"
}

(replication 2)
{
    "source": "http://server2.com/foo",
    "target": "bar"
}

From what I understand, both will have the same replication ID with this 
patch, right?
If so you can't have both replications running in parallel, one of them will 
have conflict error when updating its checkpoint local doc (because it's the 
same for both).

Also, if you start replication 1, followed by replication 2 and then followed 
by replication 1 again, we can lose the benefits of the checkpointing.
Suppose you start replication 1, after it finishes the checkpoint document's 
most recent history entry has source sequence 1 000 000.
Then you start replication 2. Because the ID is the same as replication 1, it 
will overwrite the checkpoint document. If it checkpoints more than 50 times 
(the maximum history length), all checkpoint entries from replication 1 are 
gone. When it finishes, if you start replication 1 again, it will no longer 
find entries in the checkpoint history related to it, so the replication will 
start from sequence 0 instead of 1 000 000.

Basically, if we have a source or target like 
"http://user:passw...@server.com/dbname", I think we should consider everything 
from the URL except the password (and possibly the protocol as well).
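
For example, a minimal sketch (helper names are mine, not Bob's patch) of
normalizing an endpoint before it is hashed into the replication ID - keep
scheme, user, host, port and db name, and drop only the password:

strip_password(Url) ->
    %% "http://user:secret@server.com:5984/dbname" -> "http://user@server.com:5984/dbname"
    re:replace(Url, "://([^:/@]+):[^@/]+@", "://\\1@", [{return, list}]).

rep_id_component(Url) ->
    couch_util:to_hex(erlang:md5(strip_password(Url))).

That way changing the password doesn't restart the replication, but two sources
with the same db name on different servers still get distinct IDs.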

 calculate replication id using only db name in remote case
 --

 Key: COUCHDB-1341
 URL: https://issues.apache.org/jira/browse/COUCHDB-1341
 Project: CouchDB
  Issue Type: Improvement
  Components: Replication
Reporter: Bob Dionne
Assignee: Bob Dionne
Priority: Minor

 currently if the source or target in a replication spec contains user/pwd 
 information it gets encoded in the id which can cause restarts if the 
 password changes. Change it to use just the db name as the local case does, 
 Here's a draft[1] of a solution.
 [1] https://github.com/bdionne/couchdb/compare/master...9767-fix-repl-id

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1340) Replication: Invalid JSON reported

2011-11-16 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13151189#comment-13151189
 ] 

Filipe Manana commented on COUCHDB-1340:


Alex, good observation.
That last 1 is the size of a chunk which contains only a \n character. This is 
intentionally added by couch for all JSON responses:

https://github.com/apache/couchdb/blob/1.1.x/src/couchdb/couch_httpd.erl#L654

I'm unsure whether this is the cause of your problem (chunks are separated by 
\r\n). If you're able to build from source, you can try commenting out the line 
which sends the chunk containing only the \n. This change would be on the 
source database server.
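
For reference, this is what that trailing chunk looks like on the wire with
chunked transfer encoding (the "1" is the chunk size in hex, the data is the
single \n byte, and the 0-sized chunk terminates the response):

1\r\n
\n\r\n
0\r\n
\r\n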

 Replication: Invalid JSON reported
 --

 Key: COUCHDB-1340
 URL: https://issues.apache.org/jira/browse/COUCHDB-1340
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.1.1
 Environment: CentOS 5.6 x86_64, Couchdb 1.1.1 (Patched for 
 COUCHDB-1333), spidermonkey 1.8.5, curl 7.21, erlang 14b03
Reporter: Alex Markham
  Labels: invalid, json
 Attachments: 9c94ed0e23508f4ec3d18f8949c06a5b replicaton from 
 wireshark cut.txt, replication error wireshark.txt, source couch error.log, 
 target couch error.log


 It seems our replication has stopped, reporting an error
 [emulator] Error in process 0.21599.306 {{nocatch,{invalid_json,0 
 bytes}},[{couch_util,json_decode,1},{couch_rep_reader,'-open_doc_revs/3-lc$^1/1-1-',1},{couch_rep_reader,'-open_doc_revs/3-lc$^1/1-1-',1},{couch_rep_reader,open_doc_revs,3},{couch_rep_reader,'-spawn_document_request/4-fun-0-'...
  
 It was all working until we upgraded some other couches in our replication 
 web from couch 1.0.3 to couch 1.1.1. We then set off database and view 
 compactions, and sometime overnight some of the replication links stopped.
 I have curled the command myself, both as a multipart message and a single 
 json response (with header Accept:application/json ) and it can be parsed 
 correctly by Python simplejson - I have attached it here as well - called 
 troublecurl-redacted.txt - though it is 18.8mb. The request takes about 6 
 seconds.
 I don't quite understand why it is reported as invalid JSON? Other reports 
 similar to this that I googled mentioned blank document ids, but I can't see 
 any of these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1340) Replication: Invalid JSON reported

2011-11-16 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13151203#comment-13151203
 ] 

Filipe Manana commented on COUCHDB-1340:


Alex, that 8197 byte length is for what exactly?
I selected the whole HTTP request region in Emacs (starting from GET up to the 
blank line after Content-Length: 0) and Emacs tells me that there are 7124 
characters in the request (including the CRLFs separating headers).

Your issue, however, has happened before in the past. Can you try the following 
patch:

Index: src/couchdb/couch_rep_reader.erl
===================================================================
--- src/couchdb/couch_rep_reader.erl    (revision 1177549)
+++ src/couchdb/couch_rep_reader.erl    (working copy)
@@ -177,7 +177,7 @@
     hd(State#state.opened_seqs).
 
 split_revlist(Rev, {[CurrentAcc|Rest], BaseLength, Length}) ->
-    case Length+size(Rev)+3 < 8192 of
+    case Length+size(Rev)+3 < 8000 of
         false ->
             {[[Rev|CurrentAcc] | Rest], BaseLength, Length+size(Rev)+3};
         true ->

Pasted at http://friendpaste.com/3DVTw8HgpvcUsOECEO1eoI as well.

Thanks for debugging this

 Replication: Invalid JSON reported
 --

 Key: COUCHDB-1340
 URL: https://issues.apache.org/jira/browse/COUCHDB-1340
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.1.1
 Environment: CentOS 5.6 x86_64, Couchdb 1.1.1 (Patched for 
 COUCHDB-1333), spidermonkey 1.8.5, curl 7.21, erlang 14b03
Reporter: Alex Markham
  Labels: invalid, json
 Attachments: 9c94ed0e23508f4ec3d18f8949c06a5b replicaton from 
 wireshark cut.txt, replication error wireshark.txt, source couch error.log, 
 target couch error.log


 It seems our replication has stopped, reporting an error
 [emulator] Error in process 0.21599.306 {{nocatch,{invalid_json,0 
 bytes}},[{couch_util,json_decode,1},{couch_rep_reader,'-open_doc_revs/3-lc$^1/1-1-',1},{couch_rep_reader,'-open_doc_revs/3-lc$^1/1-1-',1},{couch_rep_reader,open_doc_revs,3},{couch_rep_reader,'-spawn_document_request/4-fun-0-'...
  
 It was all working until we upgraded some other couches in our replication 
 web from couch 1.0.3 to couch 1.1.1. We then set off database and view 
 compactions, and sometime overnight some of the replication links stopped.
 I have curled the command myself, both as a multipart message and a single 
 json response (with header Accept:application/json ) and it can be parsed 
 correctly by Python simplejson - I have attached it here as well - called 
 troublecurl-redacted.txt - though it is 18.8mb. The request takes about 6 
 seconds.
 I don't quite understand why it is reported as invalid JSON? Other reports 
 similar to this that I googled mentioned blank document ids, but I can't see 
 any of these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1341) calculate replication id using only db name in remote case

2011-11-16 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13151241#comment-13151241
 ] 

Filipe Manana commented on COUCHDB-1341:


Adam, I'm not very keen on using _session.
If the cookie expires between 2 consecutive requests made by the replicator, 
then we need extra code to POST to _session again and retry the 2nd request, 
making it all more complex than it already is. This is much more likely to 
happen when retrying a request (due to the sort-of exponential retry wait 
period), or during continuous replications over long periods where both source 
and target are in sync (no new requests made to the source).
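
For context, the extra round-trip would look something like this (host and
credentials are placeholders) every time the AuthSession cookie expires
mid-replication:

$ curl -i -X POST http://target:5984/_session \
       -H 'Content-Type: application/x-www-form-urlencoded' \
       -d 'name=repl_user&password=secret'
HTTP/1.1 200 OK
Set-Cookie: AuthSession=...; Version=1; Path=/; HttpOnly

and only then could the failed request be retried with the new cookie.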

 calculate replication id using only db name in remote case
 --

 Key: COUCHDB-1341
 URL: https://issues.apache.org/jira/browse/COUCHDB-1341
 Project: CouchDB
  Issue Type: Improvement
  Components: Replication
Reporter: Bob Dionne
Assignee: Bob Dionne
Priority: Minor

 currently if the source or target in a replication spec contains user/pwd 
 information it gets encoded in the id which can cause restarts if the 
 password changes. Change it to use just the db name as the local case does, 
 Here's a draft[1] of a solution.
 [1] https://github.com/bdionne/couchdb/compare/master...9767-fix-repl-id

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1340) Replication: Invalid JSON reported

2011-11-15 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13150628#comment-13150628
 ] 

Filipe Manana commented on COUCHDB-1340:


This seems similar to COUCHDB-1327.

Can you try replicating after setting http_pipeline_size to 1 (in the .ini 
config, via Futon, or via the REST API)?
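
For example, via the config API (the [replicator] section name here is my
assumption; adjust it to wherever the option lives in your .ini):

$ curl -X PUT http://localhost:5984/_config/replicator/http_pipeline_size -d '"1"'

or the equivalent entry under that section in local.ini.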

 Replication: Invalid JSON reported
 --

 Key: COUCHDB-1340
 URL: https://issues.apache.org/jira/browse/COUCHDB-1340
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.1.1
 Environment: CentOS 5.6 x86_64, Couchdb 1.1.1 (Patched for 
 COUCHDB-1333), spidermonkey 1.8.5, curl 7.21, erlang 14b03
Reporter: Alex Markham
  Labels: invalid, json
 Attachments: source couch error.log, target couch error.log


 It seems our replication has stopped, reporting an error
 [emulator] Error in process 0.21599.306 {{nocatch,{invalid_json,0 
 bytes}},[{couch_util,json_decode,1},{couch_rep_reader,'-open_doc_revs/3-lc$^1/1-1-',1},{couch_rep_reader,'-open_doc_revs/3-lc$^1/1-1-',1},{couch_rep_reader,open_doc_revs,3},{couch_rep_reader,'-spawn_document_request/4-fun-0-'...
  
 It was all working until we upgraded some other couches in our replication 
 web from couch 1.0.3 to couch 1.1.1. We then set off database and view 
 compactions, and sometime overnight some of the replication links stopped.
 I have curled the command myself, both as a multipart message and a single 
 json response (with header Accept:application/json ) and it can be parsed 
 correctly by Python simplejson - I have attached it here as well - called 
 troublecurl-redacted.txt - though it is 18.8mb. The request takes about 6 
 seconds.
 I don't quite understand why it is reported as invalid JSON? Other reports 
 similar to this that I googled mentioned blank document ids, but I can't see 
 any of these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1334) Indexer speedup (for non-native view servers)

2011-11-09 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13147291#comment-13147291
 ] 

Filipe Manana commented on COUCHDB-1334:


@Adam I haven't yet heard about plans to release 1.2.0 soon (where soon can 
mean within a few months). I think this gives a very good benefit that many 
users would be happy to have. But I understand your point; it's not wrong.

@Paul, yes, makes sense. This was more of an experiment. I looked into 
port_connect/2 before and was getting exit_status errors before receiving any 
responses back, so I did something wrong back then. I just updated the patch, 
against master, so that all this logic lives in couch_os_process, doesn't spawn 
a helper process and uses port_connect. Let me know what you think. Thanks.
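
For anyone following along, a bare-bones illustration (not the patch itself) of
what port_connect/2 does - the current owner hands the port to another process
and then drops its own link:

hand_over(Port, NewOwner) ->
    true = port_connect(Port, NewOwner),  % NewOwner now receives the port's messages
    unlink(Port),                         % the old owner stays linked unless it unlinks
    ok.

port_connect/2 throws badarg if the port is already closed or the new owner is
dead, which is exactly the failure mode discussed in the follow-up comment below.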

 Indexer speedup (for non-native view servers)
 -

 Key: COUCHDB-1334
 URL: https://issues.apache.org/jira/browse/COUCHDB-1334
 Project: CouchDB
  Issue Type: Improvement
  Components: Database Core, JavaScript View Server, View Server 
 Support
Reporter: Filipe Manana
Assignee: Filipe Manana
 Fix For: 1.2

 Attachments: 0001-More-efficient-view-updater-writes.patch, 
 0002-More-efficient-communication-with-the-view-server.patch, 
 master-0002-More-efficient-communication-with-the-view-server.patch


 The following 2 patches significantly improve view index generation/update 
 time and reduce CPU consumption.
 The first patch makes the view updater's batching more efficient, by ensuring 
 each btree bulk insertion adds/removes a minimum of N (=100) key/value 
 pairs. This also makes the index file size not grow so fast with old data 
 (old btree nodes basically). This behaviour is already done in master/trunk 
 in the new indexer (by Paul Davis).
 The second patch maximizes the throughput with an external view server (such 
 as couchjs). Basically it makes the pipe (erlang port) communication between 
 the Erlang VM (couch_os_process basically) and the view server more efficient 
 since the 2 sides spend less time blocked on reading from the pipe.
 Here follow some benchmarks.
 test database at  http://fdmanana.iriscouch.com/test_db  (1 million documents)
 branch 1.2.x
 $ echo 3 > /proc/sys/vm/drop_caches
 $ time curl http://localhost:5984/test_db/_design/test/_view/test1
 {"rows":[
 {"key":null,"value":100}
 ]}
 real  2m45.097s
 user  0m0.006s
 sys   0m0.007s
 view file size: 333Mb
 CPU usage:
 $ sar 1 60
 22:27:20  %usr  %nice   %sys   %idle
 22:27:21   38  0 12 50
 (...)
 22:28:21   39  0 13 49
 Average: 39  0 13 47   
 branch 1.2.x + batch patch (first patch)
 $ echo 3 > /proc/sys/vm/drop_caches
 $ time curl http://localhost:5984/test_db/_design/test/_view/test1
 {"rows":[
 {"key":null,"value":100}
 ]}
 real  2m12.736s
 user  0m0.006s
 sys   0m0.005s
 view file size 72Mb
 branch 1.2.x + batch patch + os_process patch
 $ echo 3 > /proc/sys/vm/drop_caches
 $ time curl http://localhost:5984/test_db/_design/test/_view/test1
 {"rows":[
 {"key":null,"value":100}
 ]}
 real  1m9.330s
 user  0m0.006s
 sys   0m0.004s
 view file size:  72Mb
 CPU usage:
 $ sar 1 60
 22:22:55  %usr  %nice   %sys   %idle
 22:23:53   22  0  6 72
 (...)
 22:23:55   22  0  6 72
 Average: 22  0  7 70   
 master/trunk
 $ echo 3 > /proc/sys/vm/drop_caches
 $ time curl http://localhost:5984/test_db/_design/test/_view/test1
 {"rows":[
 {"key":null,"value":100}
 ]}
 real  1m57.296s
 user  0m0.006s
 sys   0m0.005s
 master/trunk + os_process patch
 $ echo 3 > /proc/sys/vm/drop_caches
 $ time curl http://localhost:5984/test_db/_design/test/_view/test1
 {"rows":[
 {"key":null,"value":100}
 ]}
 real  0m53.768s
 user  0m0.006s
 sys   0m0.006s

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1334) Indexer speedup (for non-native view servers)

2011-11-09 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13147393#comment-13147393
 ] 

Filipe Manana commented on COUCHDB-1334:


Paul, yep performance was the same as before, forgot to mention that.

There's one thing: the port_connect call in the after clause might fail for 2 reasons:
1) the os process died (it was linked to the port)
2) readline got an error or timeout, closed the port and threw an exception, so 
the port_connect in the after clause will fail with badarg because the port was closed

This diff over the previous patch makes it more clear, besides adding a 
necessary unlink:

-% Can throw badarg error, when OsProc Pid is dead.
-(catch port_connect(OsProc#os_proc.port, Pid))
+% Can throw badarg error, when OsProc Pid is dead or port was closed
+% by the readline function on error/timeout.
+(catch port_connect(OsProc#os_proc.port, Pid)),
+unlink(OsProc#os_proc.port)

(uploading new patch)

 Indexer speedup (for non-native view servers)
 -

 Key: COUCHDB-1334
 URL: https://issues.apache.org/jira/browse/COUCHDB-1334
 Project: CouchDB
  Issue Type: Improvement
  Components: Database Core, JavaScript View Server, View Server 
 Support
Reporter: Filipe Manana
Assignee: Filipe Manana
 Fix For: 1.2

 Attachments: 0001-More-efficient-view-updater-writes.patch, 
 0002-More-efficient-communication-with-the-view-server.patch, 
 master-0002-More-efficient-communication-with-the-view-server.patch, 
 master-2-0002-More-efficient-communication-with-the-view-server.patch, 
 master-3-0002-More-efficient-communication-with-the-view-server.patch


 The following 2 patches significantly improve view index generation/update 
 time and reduce CPU consumption.
 The first patch makes the view updater's batching more efficient, by ensuring 
 each btree bulk insertion adds/removes a minimum of N (=100) key/value 
 pairs. This also makes the index file size not grow so fast with old data 
 (old btree nodes basically). This behaviour is already done in master/trunk 
 in the new indexer (by Paul Davis).
 The second patch maximizes the throughput with an external view server (such 
 as couchjs). Basically it makes the pipe (erlang port) communication between 
 the Erlang VM (couch_os_process basically) and the view server more efficient 
 since the 2 sides spend less time blocked on reading from the pipe.
 Here follow some benchmarks.
 test database at  http://fdmanana.iriscouch.com/test_db  (1 million documents)
 branch 1.2.x
 $ echo 3 > /proc/sys/vm/drop_caches
 $ time curl http://localhost:5984/test_db/_design/test/_view/test1
 {"rows":[
 {"key":null,"value":100}
 ]}
 real  2m45.097s
 user  0m0.006s
 sys   0m0.007s
 view file size: 333Mb
 CPU usage:
 $ sar 1 60
 22:27:20  %usr  %nice   %sys   %idle
 22:27:21   38  0 12 50
 (...)
 22:28:21   39  0 13 49
 Average: 39  0 13 47   
 branch 1.2.x + batch patch (first patch)
 $ echo 3 > /proc/sys/vm/drop_caches
 $ time curl http://localhost:5984/test_db/_design/test/_view/test1
 {"rows":[
 {"key":null,"value":100}
 ]}
 real  2m12.736s
 user  0m0.006s
 sys   0m0.005s
 view file size 72Mb
 branch 1.2.x + batch patch + os_process patch
 $ echo 3 > /proc/sys/vm/drop_caches
 $ time curl http://localhost:5984/test_db/_design/test/_view/test1
 {"rows":[
 {"key":null,"value":100}
 ]}
 real  1m9.330s
 user  0m0.006s
 sys   0m0.004s
 view file size:  72Mb
 CPU usage:
 $ sar 1 60
 22:22:55  %usr  %nice   %sys   %idle
 22:23:53   22  0  6 72
 (...)
 22:23:55   22  0  6 72
 Average: 22  0  7 70   
 master/trunk
 $ echo 3 > /proc/sys/vm/drop_caches
 $ time curl http://localhost:5984/test_db/_design/test/_view/test1
 {"rows":[
 {"key":null,"value":100}
 ]}
 real  1m57.296s
 user  0m0.006s
 sys   0m0.005s
 master/trunk + os_process patch
 $ echo 3 > /proc/sys/vm/drop_caches
 $ time curl http://localhost:5984/test_db/_design/test/_view/test1
 {"rows":[
 {"key":null,"value":100}
 ]}
 real  0m53.768s
 user  0m0.006s
 sys   0m0.006s

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1335) _replicator document (1.1.1 to 1.1.1 continuous) lingers in Starting state

2011-11-07 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13145504#comment-13145504
 ] 

Filipe Manana commented on COUCHDB-1335:


Dirkjan, if you don't use the _replicator database, does the same thing happen?
The only case I can see that happening is when the target already has all 
changes from the source - in that case I think the _active_tasks status is left 
as Starting.

 _replicator document (1.1.1 to 1.1.1 continuous) lingers in Starting state
 --

 Key: COUCHDB-1335
 URL: https://issues.apache.org/jira/browse/COUCHDB-1335
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.1.1
 Environment: Linux, Erlang 13B04.
Reporter: Dirkjan Ochtman
 Fix For: 1.1.2


 I just upgraded two of our boxes to 1.1.1. I have two replication streams 
 (between two separate databases), both administrated via the _replicator 
 database. However, since the upgrade one of them stays in Starting state, 
 while the other correctly moves on to replicate and reports its status as 
 normal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1327) CLONE - remote to local replication fails when using a proxy

2011-11-07 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13145518#comment-13145518
 ] 

Filipe Manana commented on COUCHDB-1327:


Eliseo, I get the same issue as you.

I confirmed that the receiving side no longer receives documents with an empty ID 
(or an ID consisting only of white spaces). What happens is that the response 
to some document GETs is empty, which makes the replicator crash since it expects a 
non-empty JSON body.
I think this is an issue with the http client's pipeline implementation.

Setting max_http_pipeline_size to 1 in the .ini config works, but the 
replication will be very very slow, since that particular database has many 
large attachments. The replicator in the 1.2.x branch (or master) is way faster 
for this database.

Let me know if it works for you.
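
For reference, this is the local.ini entry being referred to (section name as I
recall it for the 1.1.x replicator; double-check against your default.ini):

[replicator]
max_http_pipeline_size = 1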

 CLONE - remote to local replication fails when using a proxy
 

 Key: COUCHDB-1327
 URL: https://issues.apache.org/jira/browse/COUCHDB-1327
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.1.2
Reporter: Eliseo Soto

 The following is failing for me:
 curl -X POST -H Content-Type:application/json 
 http://localhost:5984/_replicate -d 
 '{source:http://isaacs.iriscouch.com/registry/;, target:registry, 
 proxy: http://wwwgate0.myproxy.com:1080}'
 This is the error:
 {error:json_encode,reason:{bad_term,{nocatch,{invalid_json,
 I have no clue about what's wrong, I can curl 
 http://isaacs.iriscouch.com/registry/ directly and it works.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1335) _replicator document (1.1.1 to 1.1.1 continuous) lingers in Starting state

2011-11-07 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13145533#comment-13145533
 ] 

Filipe Manana commented on COUCHDB-1335:


Dirkjan, I agree it's confusing. It is no longer the case in the 1.2.x branch 
and onwards.
Is it ok for you to close this ticket then?

 _replicator document (1.1.1 to 1.1.1 continuous) lingers in Starting state
 --

 Key: COUCHDB-1335
 URL: https://issues.apache.org/jira/browse/COUCHDB-1335
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.1.1
 Environment: Linux, Erlang 13B04.
Reporter: Dirkjan Ochtman
 Fix For: 1.1.2


 I just upgraded two of our boxes to 1.1.1. I have two replication streams 
 (between two separate databases), both administrated via the _replicator 
 database. However, since the upgrade one of them stays in Starting state, 
 while the other correctly moves on to replicate and reports its status as 
 normal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1320) OAuth authentication doesn't work with VHost entry

2011-11-05 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13144709#comment-13144709
 ] 

Filipe Manana commented on COUCHDB-1320:


Good point Martin, I hadn't thought about that.

There's an etap (Erlang) unit test for vhosts:  test/etap/160-vhosts.t

Perhaps it's possible to write the test in Erlang. There's an Erlang OAuth 
library Couch ships with that can help (see src/erlang-oauth/oauth.erl).

Let me know if it works for you.
Thanks

 OAuth authentication doesn't work with VHost entry
 --

 Key: COUCHDB-1320
 URL: https://issues.apache.org/jira/browse/COUCHDB-1320
 Project: CouchDB
  Issue Type: Bug
  Components: HTTP Interface
Affects Versions: 1.1
 Environment: Ubuntu
Reporter: Martin Higham

 If you have a vhost entry that modifies the path (such as my host.com = 
 /mainDB/_design/main/_rewrite ) trying to authenticate a request to this host 
 using OAuth fails.
 couch_httpd_oauth uses the modified path rather than the original 
 x-couchdb-vhost-path when calculating the signature.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1333) views hangs, time-out occurs, error probably related to COUCHDB-1246

2011-11-04 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13143994#comment-13143994
 ] 

Filipe Manana commented on COUCHDB-1333:


This is likely because your server is busy and the internal Erlang processes 
are using gen_server's default call timeout of 5 seconds, which isn't 
enough for such cases (Erlang is soft real time).
I've hit this a few times as well, so updating those timeouts to 'infinity' 
should do it (for me it did at least).
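
In code, the change is just the third argument to gen_server:call - a minimal
generic illustration (the server and request names are placeholders):

%% gen_server:call/2 uses a default timeout of 5000 ms.
%% gen_server:call/3 with 'infinity' waits as long as needed.
Reply = gen_server:call(some_server, some_request, infinity).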

 views hangs, time-out occurs,  error probably related to COUCHDB-1246
 -

 Key: COUCHDB-1333
 URL: https://issues.apache.org/jira/browse/COUCHDB-1333
 Project: CouchDB
  Issue Type: Bug
  Components: JavaScript View Server
Affects Versions: 1.1.1
 Environment: centos 5.5
Reporter: Armen Arsakian
  Labels: javascript

 My permanent views are stalled, get responses never arrive from couchdb utill 
 a time-out error occurs.
 Once I managed to get the following error:
 timeout {gen_server,call,[couch_query_servers,{get_proc,javascript}]} 
 I found the following similar issue
 https://issues.apache.org/jira/browse/COUCHDB-1246
  but this was closed at 18/8/2011.
 I guess this is not fixed yet, I  installed couchDB from source which I 
 downloaded today, version: 1.1.1
 Is this issue still open.
 In my opensuse I have not any problem with views. version 1.1.0
 but in production machine a centos 5.5 i get the above error. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1320) OAuth authentication doesn't work with VHost entry

2011-11-03 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13143042#comment-13143042
 ] 

Filipe Manana commented on COUCHDB-1320:


Thanks Martin. While I'm not an expert in this field, it looks good.
Do you think you can add tests? (share/www/script/test/oauth.js)

 OAuth authentication doesn't work with VHost entry
 --

 Key: COUCHDB-1320
 URL: https://issues.apache.org/jira/browse/COUCHDB-1320
 Project: CouchDB
  Issue Type: Bug
  Components: HTTP Interface
Affects Versions: 1.1
 Environment: Ubuntu
Reporter: Martin Higham

 If you have a vhost entry that modifies the path (such as my host.com = 
 /mainDB/_design/main/_rewrite ) trying to authenticate a request to this host 
 using OAuth fails.
 couch_httpd_oauth uses the modified path rather than the original 
 x-couchdb-vhost-path when calculating the signature.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1327) CLONE - remote to local replication fails when using a proxy

2011-11-01 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13141082#comment-13141082
 ] 

Filipe Manana commented on COUCHDB-1327:


Eliseo, the replication proxy feature had bugs in the Apache CouchDB 1.0.1 
release. They were corrected in 1.0.2.

Also, as Jason said, you're also hitting a bug where the source database has a 
document with an empty ID, which crashes the replicator. The only Apache 
release with a fix for this is 1.1.1 (you should use it, as it fixes many other 
issues).

Try 1.1.1 and reopen this ticket if the proxy feature is still not working fine.

 CLONE - remote to local replication fails when using a proxy
 

 Key: COUCHDB-1327
 URL: https://issues.apache.org/jira/browse/COUCHDB-1327
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.1.2
Reporter: Eliseo Soto

 The following is failing for me:
 curl -X POST -H Content-Type:application/json 
 http://localhost:5984/_replicate -d 
 '{source:http://isaacs.iriscouch.com/registry/;, target:registry, 
 proxy: http://wwwgate0.myproxy.com:1080}'
 This is the error:
 {error:json_encode,reason:{bad_term,{nocatch,{invalid_json,
 I have no clue about what's wrong, I can curl 
 http://isaacs.iriscouch.com/registry/ directly and it works.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1326) scanty error message when writing to uri_file fails

2011-11-01 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13141093#comment-13141093
 ] 

Filipe Manana commented on COUCHDB-1326:


Thanks Rogutės

However the following will not work:

+Error2 -> throw(io:format("Failed to write uri_file ~s, error ~p~n",
+  [UriFile, Error2]))

You'll want to call io_lib:format instead, which returns an IOList. Then you 
should call lists:flatten over the result of io_lib:format to make sure we get 
a more human readable string.
Wanna give it a 2nd try? :)
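
i.e. something along these lines (a sketch of the suggestion, not the final
patch):

        Error2 ->
            throw(lists:flatten(io_lib:format(
                "Failed to write uri_file ~s, error ~p", [UriFile, Error2])))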

 scanty error message when writing to uri_file fails
 ---

 Key: COUCHDB-1326
 URL: https://issues.apache.org/jira/browse/COUCHDB-1326
 Project: CouchDB
  Issue Type: Bug
Reporter: Rogutės Sparnuotos
Priority: Minor
 Attachments: 
 0001-A-more-useful-error-when-writing-to-uri_file-fails.patch


 CouchDB crashes when it fails writing to uri_file (set from the [couchdb] 
 section). The error message is too vague to understand what's going on: 
 {{badmatch,{error,enoent}}. At least the filename should be mentioned.
 P.S.
 I'd say it's all Erlang's fault, because file:write_file() doesn't report the 
 file name on errors.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1326) scanty error message when writing to uri_file fails

2011-11-01 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13141504#comment-13141504
 ] 

Filipe Manana commented on COUCHDB-1326:


We do, check the function couch_httpd:error_info/2.

For this particular case, which happens on startup, the code is a bit repeated. 
However when handling such an error for a request, we have common code.

Examples from COUCHDB-966

$ curl -H 'X-Couch-Persist: true' -X PUT 
http://localhost:5984/_config/couchdb/delayed_commits -d 'false' 
{"error":"file_permission_error","reason":"/home/fdmanana/git/hub/couchdb/etc/couchdb/local_dev.ini"}
 

For a database file for which we don't have read permission: 

$ curl http://localhost:5984/abc 
{"error":"file_permission_error","reason":"/home/fdmanana/git/hub/couchdb/tmp/lib/abc.couch"}
 

Thanks for the report and patch :)

 scanty error message when writing to uri_file fails
 ---

 Key: COUCHDB-1326
 URL: https://issues.apache.org/jira/browse/COUCHDB-1326
 Project: CouchDB
  Issue Type: Bug
Reporter: Rogutės Sparnuotos
Priority: Minor
 Attachments: 
 0001-A-more-useful-error-when-writing-to-uri_file-fails.patch, 
 0001-A-more-useful-error-when-writing-to-uri_file-fails.patch, 
 COUCHDB-1326-fdmanana.patch


 CouchDB crashes when it fails writing to uri_file (set from the [couchdb] 
 section). The error message is too vague to understand what's going on: 
 {{badmatch,{error,enoent}}. At least the filename should be mentioned.
 P.S.
 I'd say it's all Erlang's fault, because file:write_file() doesn't report the 
 file name on errors.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1326) scanty error message when writing to uri_file fails

2011-11-01 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13141520#comment-13141520
 ] 

Filipe Manana commented on COUCHDB-1326:


Applied to master and the 1.2.x branch. Thanks again.

 scanty error message when writing to uri_file fails
 ---

 Key: COUCHDB-1326
 URL: https://issues.apache.org/jira/browse/COUCHDB-1326
 Project: CouchDB
  Issue Type: Bug
Reporter: Rogutės Sparnuotos
Priority: Minor
 Attachments: 
 0001-A-more-useful-error-when-writing-to-uri_file-fails.patch, 
 0001-A-more-useful-error-when-writing-to-uri_file-fails.patch, 
 COUCHDB-1326-fdmanana.patch


 CouchDB crashes when it fails writing to uri_file (set from the [couchdb] 
 section). The error message is too vague to understand what's going on: 
 {{badmatch,{error,enoent}}. At least the filename should be mentioned.
 P.S.
 I'd say it's all Erlang's fault, because file:write_file() doesn't report the 
 file name on errors.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1323) couch_replicator

2011-10-31 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13140095#comment-13140095
 ] 

Filipe Manana commented on COUCHDB-1323:


All tests are now passing for me.
Just noticed that src/couch_replicator/ebin/couch_replicator.app should be 
added to .gitignore as well, since it's modified after every build.

+1 to go to master

 couch_replicator
 

 Key: COUCHDB-1323
 URL: https://issues.apache.org/jira/browse/COUCHDB-1323
 Project: CouchDB
  Issue Type: Improvement
  Components: Replication
Affects Versions: 2.0, 1.3
Reporter: Benoit Chesneau
  Labels: replication
 Attachments: 
 0001-move-couchdb-replication-to-a-standalone-application.patch, 
 0001-move-couchdb-replication-to-a-standalone-application.patch, 
 0001-move-couchdb-replication-to-a-standalone-application.patch, 
 0001-move-couchdb-replication-to-a-standalone-application.patch


 patch to move couch replication modules in in a standalone couch_replicator 
 application. It's also available on my github:
 https://github.com/benoitc/couchdb/compare/master...couch_replicator

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1323) couch_replicator

2011-10-30 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13139625#comment-13139625
 ] 

Filipe Manana commented on COUCHDB-1323:


Thanks Benoit.

I tested the second patch, but test 002 is failing:

fdmanana 14:00:20 ~/git/hub/couchdb4 (master) ./test/etap/run -v 
src/couch_replicator/test/002-replication-compact.t 
src/couch_replicator/test/002-replication-compact.t .. 
# Current time local 2011-10-30 14:00:25
# Using etap version 0.3.4
1..376
Apache CouchDB 0.0.0 (LogLevel=info) is starting.
Apache CouchDB has started. Time to relax.
[info] [0.2.0] Apache CouchDB has started on http://127.0.0.1:59045/
ok 1  - Source database is idle before starting replication
ok 2  - Target database is idle before starting replication
# Test died abnormally: {'EXIT',
   {noproc,
{gen_server,call,
 [couch_rep_sup,
  {start_child,
   {e0ce0ffa71ae1915f953eb8d2f93e348+continuous,
{gen_server,start_link,
 [couch_replicator,
  {rep,
   {e0ce0ffa71ae1915f953eb8d2f93e348,


Good work :)

 couch_replicator
 

 Key: COUCHDB-1323
 URL: https://issues.apache.org/jira/browse/COUCHDB-1323
 Project: CouchDB
  Issue Type: Improvement
  Components: Replication
Affects Versions: 2.0, 1.3
Reporter: Benoit Chesneau
  Labels: replication
 Attachments: 
 0001-move-couchdb-replication-to-a-standalone-application.patch, 
 0001-move-couchdb-replication-to-a-standalone-application.patch


 patch to move couch replication modules in in a standalone couch_replicator 
 application. It's also available on my github:
 https://github.com/benoitc/couchdb/compare/master...couch_replicator

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1009) Make couch_stream buffer configurable

2011-10-30 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13139626#comment-13139626
 ] 

Filipe Manana commented on COUCHDB-1009:


Thanks for reminding me about this, Jan.

Regarding your question, no it's not needed. For each attachment we write, we 
instantiate a couch_stream gen_server, which is then closed after writing the 
attachment. Therefore once the .ini parameter is updated, it only affects 
subsequent attachment writes. I think this is perfectly fine.
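
For reference, the sort of .ini knob being discussed (the exact section/key name
here is illustrative, not necessarily what the patch uses):

[couchdb]
attachment_stream_buffer_size = 65536  ; bytes; the hardcoded value was 4096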

 Make couch_stream buffer configurable
 -

 Key: COUCHDB-1009
 URL: https://issues.apache.org/jira/browse/COUCHDB-1009
 Project: CouchDB
  Issue Type: Improvement
  Components: Database Core
 Environment: trunk
Reporter: Filipe Manana
Assignee: Filipe Manana
Priority: Trivial
 Attachments: COUCHDB-1009-2.patch, COUCHDB-1009-rebased.patch, 
 COUCHDB-1009.patch


 The couch_stream buffer is hardcoded to 4Kb.
 This value should be configurable. Larger values can improve write and 
 specially read performance (if we write larger chunks to disk, we have higher 
 chances of reading more contiguous disk blocks afterwards). 
 I also think it's a good idea to change the default value from 4Kb to 
 something higher (64Kb for e.g.).
 Patch attached

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1323) couch_replicator

2011-10-30 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13139637#comment-13139637
 ] 

Filipe Manana commented on COUCHDB-1323:


All tests now pass except test/etap/001-load.t which needs some lines updated 
with the new module names.
From my side, after that small change, it's ready to go into master.

Thanks for doing this change Benoît.

 couch_replicator
 

 Key: COUCHDB-1323
 URL: https://issues.apache.org/jira/browse/COUCHDB-1323
 Project: CouchDB
  Issue Type: Improvement
  Components: Replication
Affects Versions: 2.0, 1.3
Reporter: Benoit Chesneau
  Labels: replication
 Attachments: 
 0001-move-couchdb-replication-to-a-standalone-application.patch, 
 0001-move-couchdb-replication-to-a-standalone-application.patch, 
 0001-move-couchdb-replication-to-a-standalone-application.patch


 patch to move couch replication modules in in a standalone couch_replicator 
 application. It's also available on my github:
 https://github.com/benoitc/couchdb/compare/master...couch_replicator

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1316) Error in the validate_doc_update function of the _users db

2011-10-29 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13139330#comment-13139330
 ] 

Filipe Manana commented on COUCHDB-1316:


Looks good to me Jan

 Error in the validate_doc_update function of the _users db
 --

 Key: COUCHDB-1316
 URL: https://issues.apache.org/jira/browse/COUCHDB-1316
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.1
Reporter: Daniel Truemper
Assignee: Filipe Manana
Priority: Trivial

 Hi!
 In the validate_doc_update method of the _users database is a small error. On 
 the one hand it seems that the `roles` attribute of the user doc is not 
 required:
 if (newDoc.roles  !isArray(newDoc.roles)) {
 throw({forbidden: 'doc.roles must be an array'});
 }
 On the other hand the function iterates over the roles:
 // no system roles in users db
 for (var i = 0; i  newDoc.roles.length; i++) {
 if (newDoc.roles[i][0] === '_') {
 throw({
 forbidden:
 'No system roles (starting with underscore) in users db.'
 });
 }
 }
 So, is the roles field required? If so, then throwing a real error would be 
 nice since I only get a stack trace from CouchDB. If it is not required, 
 checking it's presence before iterating over it would be necessary.
 I am kind of lost in all the new Git handling and such. Would it be 
 appropriate to open a Github Pull Request? Or should I add a patch to this 
 issue? Depending on the answer to the roles question I could provide a patch 
 since it is trivial enough for me I guess :)
 Cheers,
 Daniel

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1309) File descriptor leaks on design document update and view cleanup

2011-10-18 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13129666#comment-13129666
 ] 

Filipe Manana commented on COUCHDB-1309:


Benoit, when I asked you to provide sample code it was just because I wasn't 
understanding your point. It was not meant as an offence, but as a 
better/clearer way to express your ideas.

Regarding the first solution, as Jan pointed out, you're notifying every index 
about every ddoc update.
To avoid this, one of the ets tables (BY_SIG maybe) should also store the ID of 
the ddoc associated with the indexer.
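
i.e. something like this (table, key and message names are purely illustrative):

ets:insert(couchdb_indexes_by_sig, {{DbName, Sig}, {DDocId, IndexPid}}),
%% on a ddoc update, notify only the indexes whose stored DDocId matches
[gen_server:cast(Pid, ddoc_updated)
    || {{_Db, _Sig}, {Id, Pid}} <- ets:tab2list(couchdb_indexes_by_sig),
       Id =:= UpdatedDDocId].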

 File descriptor leaks on design document update and view cleanup
 

 Key: COUCHDB-1309
 URL: https://issues.apache.org/jira/browse/COUCHDB-1309
 Project: CouchDB
  Issue Type: Bug
Reporter: Filipe Manana
Assignee: Filipe Manana
 Attachments: couchdb-1309_12x.patch, couchdb-1309_trunk.patch


 If we add a design document with views defined in it, open the corresponding 
 view group (by querying one of its views for e.g.), then update the design 
 document in such a way that the view signature changes (changing a view's map 
 function code for e.g), the old view group remains open forever (unless a 
 server restart happens) and keeps it's view file reference counter active 
 forever.
 If a view cleanup request comes, the old view file is not deleted since the 
 reference counter is not dropped by the old view group, keeping the file 
 descriptor in use forever.
 This leakage is different from what is reported in COUCHDB-1129 but it's 
 somehow related.
 The attached patch, simply shutdowns a view group when the design document is 
 updated and the new view signature changes, releasing the old view file 
 descriptor (as soon as no more clients are using the old view).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1129) file descriptors sometimes not closed after compaction

2011-10-17 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13128801#comment-13128801
 ] 

Filipe Manana commented on COUCHDB-1129:


Dan, can it be that, without restarting the server, you updated the design 
document (potentially several times) and then issued a /db/_view_cleanup 
request? (COUCHDB-1309)

 file descriptors sometimes not closed after compaction
 --

 Key: COUCHDB-1129
 URL: https://issues.apache.org/jira/browse/COUCHDB-1129
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.0.2
Reporter: Randall Leeds
 Fix For: 1.2


 It seems there are still cases where file descriptors are not released upon 
 compaction finishing.
 When I asked on IRC rnewson confirmed he'd seen the behavior also and the 
 last comment on 926 also suggests there are still times where this occurs.
 Someone needs to take a careful eye to any race conditions we might have 
 between opening the file and subscribing to the compaction notification.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1309) File descriptor leaks on design document update and view cleanup

2011-10-17 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13128814#comment-13128814
 ] 

Filipe Manana commented on COUCHDB-1309:


Not sure what you mean:

"I'm not sure it need to be asynchronous. Couldn't we test it here:"

It's synchronous to avoid getting the process' mailbox flooded in the unlikely 
case there's a high rate of updates to the same ddoc.
And the updater shouldn't be crashed. If it's running it means there are 
clients waiting for it before folding the view. Those clients should not get an 
error.

 File descriptor leaks on design document update and view cleanup
 

 Key: COUCHDB-1309
 URL: https://issues.apache.org/jira/browse/COUCHDB-1309
 Project: CouchDB
  Issue Type: Bug
Reporter: Filipe Manana
Assignee: Filipe Manana
 Attachments: couchdb-1309_12x.patch, couchdb-1309_trunk.patch


 If we add a design document with views defined in it, open the corresponding 
 view group (by querying one of its views for e.g.), then update the design 
 document in such a way that the view signature changes (changing a view's map 
 function code for e.g), the old view group remains open forever (unless a 
 server restart happens) and keeps it's view file reference counter active 
 forever.
 If a view cleanup request comes, the old view file is not deleted since the 
 reference counter is not dropped by the old view group, keeping the file 
 descriptor in use forever.
 This leakage is different from what is reported in COUCHDB-1129 but it's 
 somehow related.
 The attached patch, simply shutdowns a view group when the design document is 
 updated and the new view signature changes, releasing the old view file 
 descriptor (as soon as no more clients are using the old view).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1309) File descriptor leaks on design document update and view cleanup

2011-10-17 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13128936#comment-13128936
 ] 

Filipe Manana commented on COUCHDB-1309:


Benoit, what I meant by synchronous is the handle_call vs handle_cast in 
couch_index.erl.

I'm afraid you're missing some details about how the view system works. There's 
more than just the updater process depending on the design document properties 
and view signature - couch_index_server, for example.
Spawning a new group when the ddoc signature changes is not a new behaviour I'm 
adding - it was always like this afaik.
I'm concentrating on fixing this particular issue and not redesigning a big part 
of the view system. Feel free to do it and provide a working patch.

 File descriptor leaks on design document update and view cleanup
 

 Key: COUCHDB-1309
 URL: https://issues.apache.org/jira/browse/COUCHDB-1309
 Project: CouchDB
  Issue Type: Bug
Reporter: Filipe Manana
Assignee: Filipe Manana
 Attachments: couchdb-1309_12x.patch, couchdb-1309_trunk.patch


 If we add a design document with views defined in it, open the corresponding 
 view group (e.g. by querying one of its views), and then update the design 
 document in such a way that the view signature changes (e.g. by changing a 
 view's map function code), the old view group remains open forever (unless a 
 server restart happens) and keeps its view file reference counter active 
 forever.
 If a view cleanup request comes, the old view file is not deleted, since the 
 reference counter is not dropped by the old view group, keeping the file 
 descriptor in use forever.
 This leakage is different from what is reported in COUCHDB-1129, but it's 
 somewhat related.
 The attached patch simply shuts down a view group when the design document is 
 updated and the new view signature changes, releasing the old view file 
 descriptor (as soon as no more clients are using the old view).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1309) File descriptor leaks on design document update and view cleanup

2011-10-17 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13129025#comment-13129025
 ] 

Filipe Manana commented on COUCHDB-1309:


Yes, that's assuming the updater will run again after the design document 
update. What if it doesn't? You still end up in the current situation.

 File descriptor leaks on design document update and view cleanup
 

 Key: COUCHDB-1309
 URL: https://issues.apache.org/jira/browse/COUCHDB-1309
 Project: CouchDB
  Issue Type: Bug
Reporter: Filipe Manana
Assignee: Filipe Manana
 Attachments: couchdb-1309_12x.patch, couchdb-1309_trunk.patch


 If we add a design document with views defined in it, open the corresponding 
 view group (e.g. by querying one of its views), and then update the design 
 document in such a way that the view signature changes (e.g. by changing a 
 view's map function code), the old view group remains open forever (unless a 
 server restart happens) and keeps its view file reference counter active 
 forever.
 If a view cleanup request comes, the old view file is not deleted, since the 
 reference counter is not dropped by the old view group, keeping the file 
 descriptor in use forever.
 This leakage is different from what is reported in COUCHDB-1129, but it's 
 somewhat related.
 The attached patch simply shuts down a view group when the design document is 
 updated and the new view signature changes, releasing the old view file 
 descriptor (as soon as no more clients are using the old view).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1309) File descriptor leaks on design document update and view cleanup

2011-10-17 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13129046#comment-13129046
 ] 

Filipe Manana commented on COUCHDB-1309:


Benoit, you can't rely on the updater to detect that the ddoc changed.
Maybe I wasn't clear enough before. Imagine the following scenario:

1) Create a ddoc
2) Query one of its views
3) Update the ddoc

If all subsequent view query requests arrive after the update (3), they will 
get routed to the new view group - therefore the old one will not get its 
updater running again and will stay alive forever.
The same issue happens if, when the update occurs, the clients that get into 
the old view group are querying only with ?stale=ok.
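
(For illustration only - a rough sketch of this scenario in the style of the 
Futon JS test suite, assuming the couch.js helper API (CouchDB, createDb, save, 
view); the database and view names are made up.)

var db = new CouchDB("test_couchdb_1309");
db.deleteDb();
db.createDb();

// 1) Create a ddoc with a single view.
var ddoc = {
  _id: "_design/test",
  views: {simple: {map: "function(doc) { emit(doc._id, null); }"}}
};
db.save(ddoc);

// 2) Query one of its views; this opens the view group for the current signature.
db.view("test/simple");

// 3) Update the ddoc so the view signature changes.
ddoc.views.simple.map = "function(doc) { emit(doc._id, 1); }";
db.save(ddoc);

// All later requests are routed to the new group, and a ?stale=ok query never
// runs the old group's updater, so without the patch the old group keeps its
// file reference until the server restarts.
db.view("test/simple", {stale: "ok"});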

If you're so convinced that doing it in the updater works for these 2 cases, 
please provide a working code prototype.

 File descriptor leaks on design document update and view cleanup
 

 Key: COUCHDB-1309
 URL: https://issues.apache.org/jira/browse/COUCHDB-1309
 Project: CouchDB
  Issue Type: Bug
Reporter: Filipe Manana
Assignee: Filipe Manana
 Attachments: couchdb-1309_12x.patch, couchdb-1309_trunk.patch


 If we add a design document with views defined in it, open the corresponding 
 view group (e.g. by querying one of its views), and then update the design 
 document in such a way that the view signature changes (e.g. by changing a 
 view's map function code), the old view group remains open forever (unless a 
 server restart happens) and keeps its view file reference counter active 
 forever.
 If a view cleanup request comes, the old view file is not deleted, since the 
 reference counter is not dropped by the old view group, keeping the file 
 descriptor in use forever.
 This leakage is different from what is reported in COUCHDB-1129, but it's 
 somewhat related.
 The attached patch simply shuts down a view group when the design document is 
 updated and the new view signature changes, releasing the old view file 
 descriptor (as soon as no more clients are using the old view).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1129) file descriptors sometimes not closed after compaction

2011-10-14 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13127448#comment-13127448
 ] 

Filipe Manana commented on COUCHDB-1129:


Can you test with the 1.1.x branch (upcoming 1.1.1), or anything else more 
bleeding edge like 1.2.x or trunk?

 file descriptors sometimes not closed after compaction
 --

 Key: COUCHDB-1129
 URL: https://issues.apache.org/jira/browse/COUCHDB-1129
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.0.2
Reporter: Randall Leeds
 Fix For: 1.2


 It seems there are still cases where file descriptors are not released when 
 compaction finishes.
 When I asked on IRC, rnewson confirmed he'd seen the behavior too, and the 
 last comment on 926 also suggests there are still times when this occurs.
 Someone needs to take a careful look at any race conditions we might have 
 between opening the file and subscribing to the compaction notification.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1234) Named Document Replication does not replicate the _deleted revision

2011-10-07 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13123016#comment-13123016
 ] 

Filipe Manana commented on COUCHDB-1234:


Jean-Pierre, I think you're talking about Couchbase 1.1.2, which is based on 
Apache CouchDB 1.0.x.
This issue doesn't exist anymore as of Apache CouchDB 1.1.0. My understanding 
is that Couchbase is releasing a version based on Apache CouchDB 1.1.x soon.

 Named Document Replication does not replicate the _deleted revision
 ---

 Key: COUCHDB-1234
 URL: https://issues.apache.org/jira/browse/COUCHDB-1234
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.0.2
 Environment: CentOS 5.5
Reporter: Hans-D. Böhlau
 Fix For: 1.1


 I would like to use Named Document Replication to replicate changes on a 
 number of docs. I expect ALL changes of those docs to be replicated from 
 the source-db (test1) to the target-db (test2).
 as-is:
 
 If a document changes its revision because of a normal modification, it 
 works perfectly and fast.
 If a document (hdb1) changes its revision because of its deletion, the 
 replicator logs an error. The document in my target-database remains alive.
 couch.log: [error] [0.6676.3] Replicator: error accessing doc hdb1 at 
 http://vm-dmp-del1:5984/test1/, reason: not_found
 I expected:
 -
 ... Named Document Replication to mark a document as deleted in the 
 target-db if it has been deleted in the source-db.
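
 (For illustration - roughly how such a named document replication is triggered, 
 sketched with the couch.js test helper's CouchDB.replicate; the database names 
 test1/test2 and the doc id hdb1 come from the report above.)

 CouchDB.replicate("test1", "test2", {
   body: {doc_ids: ["hdb1"]}
 });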

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1300) Array.prototype isn't working properly

2011-10-01 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13118937#comment-13118937
 ] 

Filipe Manana commented on COUCHDB-1300:


Hi, there's a reason for that, and it's exactly the same reason why foo 
instanceof Array fails when foo is created in one browser iframe and the 
expression is evaluated in another iframe that accesses that array.

For couch, the user document is JSON-decoded in one JavaScript context, and the 
map function run against the document is executed in another context (a 
sandbox). Each context has its own Array constructor, so Array.prototype 
extensions made in the sandbox do not apply to arrays created by the JSON 
decoder.
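
(A minimal sketch, not CouchDB source, of one way around this: keep the helpers 
as plain functions instead of Array.prototype extensions, since arrays decoded 
in the other context won't inherit them. The names isArray and compact are made 
up here; emit and log are the usual map-function globals.)

function isArray(x) {
  // Works across JS contexts, unlike "x instanceof Array".
  return Object.prototype.toString.call(x) === "[object Array]";
}

function compact(arr) {
  // Drop falsy entries without touching Array.prototype.
  var out = [];
  for (var i = 0; i < arr.length; i++) {
    if (arr[i]) {
      out.push(arr[i]);
    }
  }
  return out;
}

function map(doc) {
  if (doc.store && isArray(doc.store.taxes)) {
    emit(doc._id, compact(doc.store.taxes));
  }
}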


 Array.prototype isn't working properly
 --

 Key: COUCHDB-1300
 URL: https://issues.apache.org/jira/browse/COUCHDB-1300
 Project: CouchDB
  Issue Type: Bug
  Components: JavaScript View Server
Affects Versions: 1.1
 Environment: Ubuntu 11.04, node.couchapp, commonJS
Reporter: paul iannazzo
  Labels: javascript, patch

 ##file helpers.js in views/lib/common
 const _ = require("views/lib/underscore");
 Array.prototype.compact = function(){return _.compact(this);};
 Array.prototype.flatten = function(){return _.flatten(this);};
 //this function is called from views.someName.map
 function commonProperties(doc){
 arr = [];
 arr = arr.compact();
 log(arr);
 log("is array?");
 log(toString.call(doc.store.taxes));
 log(doc.store.taxes);
 //log(doc.store.taxes.compact());
 log("is safe array?");
 log(toString.call(safe.array(doc.store.taxes)));
 log(safe.array(doc.store.taxes));
 //log(safe.array(doc.store.taxes).compact());
 log("array?");
 log(toString.call(Array(safe.array(doc.store.taxes))));
 log(Array(doc.store.taxes));
 log(Array(doc.store.taxes).compact());
 ...
 ::LOG
 [info] [0.3429.0] OS Process #Port0.5316 Log :: []
 [info] [0.3429.0] OS Process #Port0.5316 Log :: is array?
 [info] [0.3429.0] OS Process #Port0.5316 Log :: [object Array]
 [info] [0.3429.0] OS Process #Port0.5316 Log :: 
 [{"taxId":0,"number":"00","percent":8},{"taxId":1,"number":"","percent":5},{"taxId":2,"number":"","percent":1}]
 [info] [0.3429.0] OS Process #Port0.5316 Log :: is safe array?
 [info] [0.3429.0] OS Process #Port0.5316 Log :: [object Array]
 [info] [0.3429.0] OS Process #Port0.5316 Log :: 
 [{"taxId":0,"number":"00","percent":8},{"taxId":1,"number":"","percent":5},{"taxId":2,"number":"","percent":1}]
 [info] [0.3429.0] OS Process #Port0.5316 Log :: array?
 [info] [0.3429.0] OS Process #Port0.5316 Log :: [object Array]
 [info] [0.3429.0] OS Process #Port0.5316 Log :: 
 [[{"taxId":0,"number":"00","percent":8},{"taxId":1,"number":"","percent":5},{"taxId":2,"number":"","percent":1}]]
 The .compact() lines that are commented out cause errors:
 [info] [0.3429.0] OS Process #Port0.5316 Log :: []
 [info] [0.3429.0] OS Process #Port0.5316 Log :: is array?
 [info] [0.3429.0] OS Process #Port0.5316 Log :: [object Array]
 [info] [0.3429.0] OS Process #Port0.5316 Log :: 
 [{"taxId":0,"number":"00","percent":8},{"taxId":1,"number":"","percent":5},{"taxId":2,"number":"","percent":1}]
 [info] [0.3429.0] OS Process #Port0.5316 Log :: function raised exception 
 (new TypeError("doc.store.taxes.compact is not a function", "", 29)) with 
 doc._id RT7-RT7-31-20
 The first logs show that doc.store.taxes is an array, so why do I need to use 
 Array() on it in order to use the prototype functions?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira