[jira] [Resolved] (COUCHDB-687) Add md5 hash to _attachments properties for documents

2012-01-22 Thread Robert Newson (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Newson resolved COUCHDB-687.
---

   Resolution: Fixed
Fix Version/s: (was: 1.3)
   (was: 2.0)
   1.1

This happened in COUCHDB-1173

 Add md5 hash to _attachments properties for documents
 -

 Key: COUCHDB-687
 URL: https://issues.apache.org/jira/browse/COUCHDB-687
 Project: CouchDB
  Issue Type: Improvement
 Environment: CouchDB
Reporter: mikeal
Assignee: Filipe Manana
 Fix For: 1.1

 Attachments: couchdb-md5-in-attachment-COUCHDB-687-v2.patch, 
 couchdb-md5-in-attachment-COUCHDB-687-v3.patch, 
 couchdb-md5-in-attachment-COUCHDB-687.patch, md5.patch


 The current attachment information looks like this:
 GET /dbname/docid
 "_attachments": {
   "jquery-1.4.1.min.js": {
     "content_type": "text/javascript",
     "revpos": 138,
     "length": 70844,
     "stub": true
   }
 }
 If a client wanted to sync local files as attachments with a document, it 
 would not currently be able to do so without keeping a local store of the 
 revpos. If this information included an md5 hash of the attachment, clients 
 could compare it against a hash of the local file to see if they match.
 -Mikeal
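
For reference, the attachment stub CouchDB ended up exposing adds a digest
property alongside the fields above; a sketch (the hash value here is
illustrative, not a real checksum):

 "_attachments": {
   "jquery-1.4.1.min.js": {
     "content_type": "text/javascript",
     "revpos": 138,
     "digest": "md5-2ryAN3p9kk5TH1xBXV5j0g==",
     "length": 70844,
     "stub": true
   }
 }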

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1342) Asynchronous file writes

2012-01-22 Thread Bob Dionne (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190655#comment-13190655
 ] 

Bob Dionne commented on COUCHDB-1342:
-

I revisited this a bit this morning. I tried to rebase master just to see how 
far it's moved; it's not too bad. couch_db had some conflicts, mostly relating 
to a new field, #db.updater_fd.

Given Damien's abandonment of the project I'm not sure whether we should push 
on this or not. I suppose it's worth cleaning up and using if in fact the 
improvements are substantial.

 Asynchronous file writes
 

 Key: COUCHDB-1342
 URL: https://issues.apache.org/jira/browse/COUCHDB-1342
 Project: CouchDB
  Issue Type: Improvement
  Components: Database Core
Reporter: Jan Lehnardt
 Fix For: 1.3

 Attachments: COUCHDB-1342.patch


 This change updates the file module so that it can do
 asynchronous writes. Basically, it replies immediately
 to the process asking to write something to the file, with
 the position where the chunks will be written to the
 file, while a dedicated child process keeps collecting
 chunks and writing them to the file (batching them
 when possible). After issuing a series of write requests
 to the file module, the caller can call its 'flush'
 function, which will block the caller until all the
 chunks it requested to write are effectively written
 to the file.
 This maximizes use of the IO subsystem: for example, while
 the updater is traversing and modifying the btrees and
 doing CPU-bound tasks, the writes are happening in
 parallel.
 Originally described at http://s.apache.org/TVu
 Github Commit: 
 https://github.com/fdmanana/couchdb/commit/e82a673f119b82dddf674ac2e6233cd78c123554
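
To make the flow concrete, here is a minimal Erlang sketch of the pattern
(assumed names throughout; this is not the actual couch_file API):

%% Callers get back the position a chunk *will* occupy, immediately;
%% a writer process does the actual writes; flush/1 blocks until
%% everything handed over so far has been written.
start(Path) ->
    {ok, Fd} = file:open(Path, [append, raw, binary]),
    spawn(fun() -> loop(Fd, 0) end).

append(Writer, Bin) ->
    Ref = make_ref(),
    Writer ! {append, self(), Ref, Bin},
    receive {Ref, Pos} -> {ok, Pos} end.   %% replies before the write happens

flush(Writer) ->
    Ref = make_ref(),
    Writer ! {flush, self(), Ref},
    receive {Ref, ok} -> ok end.           %% all earlier appends are done

loop(Fd, Pos) ->
    receive
        {append, From, Ref, Bin} ->
            From ! {Ref, Pos},             %% answer first ...
            ok = file:write(Fd, Bin),      %% ... then write (real code batches)
            loop(Fd, Pos + byte_size(Bin));
        {flush, From, Ref} ->
            From ! {Ref, ok},              %% mailbox order guarantees
            loop(Fd, Pos)                  %% prior appends are on disk
    end.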

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: [Windows] proposed binary build changes

2012-01-22 Thread Jeroen Janssen
Hi,

Just a quick check since I am not able to get (snappy) compression
working on windows.

Steps I did:
*) install couchdb 1.2.0a from [5] (disabled service/autostart)
*) edit local.ini, adding file_compression = snappy to the [couchdb] section
*) copied an old (800Mb) 1.0.1 database to var\lib\couchdb
*) start couchdb\bin\couchdb.bat
*) run compaction on the old database; it is still 800Mb in size
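
For reference, the ini change from the second step:

[couchdb]
file_compression = snappy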

An additional test I did was to:
*) create a new database
*) create a 200kb document and save it
*) the database file is ~200kb in size

I get similar results with file_compression = deflate_9

Should I do additional steps to enable compression?
Can anyone else confirm this behaviour on windows?

Thanks in advance,

Jeroen


On Tue, Jan 17, 2012 at 8:16 PM, Dave Cottlehuber d...@muse.net.nz wrote:
 Hi folks,

 Quick questions followed by a long update.

 1. Does anybody rely on the compiler that I use to build CouchDB?
 I'm considering dropping VS2008/SDK7.0 builds in favour of the newer
 VS2010/SDK7.1, mainly because I don't really have the space to build both.
 Although feel free to send me a 500GiB SSD…

 2. Unless there are strenuous objections, I'm proposing to bump the
 Windows binaries and dependencies to enable that, details below. You can
 test a variety of 1.1.1 combos if this is important to you [1].

 3. Any further volunteers for testing & voting on windows binary builds?
 - Kevin Coombes
 - Nick North
 - Ryan Ramage
 - pblakez
 - Dirkjan

 It should take ~ 1h per release & I'm not expecting a commitment from you!

 4. Feedback is hard to come by, so I'd love some use cases, success stories,
 and even the occasional abject failure for Couch on Windows.

 Some background details follow, mainly to ensure we have these
 archived on the list.

 HISTORY:
 Up until Nov 2011, the official binary release from Erlang/OTP team was
 done with VS2005 / VC++6.

 However Erlang & CouchDB require ICU, OpenSSL, wxWidgets, Mozilla
 SpiderMonkey and optionally cURL, none of which compile cleanly under
 VS2005 these days.

 So I've been rolling my own using VS2008 / SDK 7.0, which is not so
 difficult now, and has also enabled me to provide fixes for uncommitted
 upstream patches where necessary.

 To work around some of the above, I've built binaries against a single,
 very specific release of the vcredist_x86.exe package. Thanks to some
 advice from Pierre at Microsoft, I can now remove this requirement.
 This should simplify requirements for anybody who needs to repackage.
 He's also given excellent suggestions on improving the performance of
 Apache CouchDB on Windows, and sharing building the dependencies
 with projects like PHP in future. Let's hope this progresses further.

 TODAY:
 Erlang/OTP have a VS2010 set of binaries available, with current
 components, and no outstanding critical bugs as yet. R15B has error
 messages with line numbers, which is an immense help for troubleshooting.

 The build scripts I use are now largely compatible between SDKs, so it's
 possible to create SDK 7.0 and 7.1 versions when I merge & document it
 properly. Thanks Wohali for the suggestion & some help along the way!

 LIBRARIES INCLUDED IN THE BUILD:
 Unless there are strenuous objections, I'm proposing to move from/to:

 Compiler:
 VS2008/ SDK 7.0          -> VS2010 / SDK 7.1 compiler & Visual C++ runtime

 Erlang/OTP R15B
 - wxWidgets 2.8.11       -> 2.8.12
 - OpenSSL 1.0.0e         -> 0.9.8r to align with OTP team's release

 For CouchDB 1.2.0 and the future:
 - ICU 4.4.2              -> 4.6.1 as it compiles cleanly under both SDKs.
 - cURL 7.23.1            -> no change
 - SpiderMonkey 1.8.0-rc1 -> 1.8.5

 This sets us up nicely for providing a much simpler build process,
 perhaps without needing cygwin at all, just common Windows tools.

 I haven't seen any issues with this in the last month of testing, YMMV!

 The artefacts used to build these (& their debug symbols) are all
 available [4], and there's a build off the 1.2.x branch including these [5].

 FUTURE:
 Feel free to add but this is where I'm thinking of going, in no particular
 order:
 - Makefile target for a trimmed OTP + Couch build & upx compression
 - MSI support (thanks in advance to Wohali!!)
 - add a GeoCouchified version with erica + kanso support built in
 - performance improvements c/- Microsoft's PGO [2]
 - simpler build process perhaps using Mozilla tool chain only
 - stabilise my glazier build scripts [3]
 - use Windows-approved locations for datafiles
 - aim for Microsoft Certification - good general CouchDB publicity
 - enable JS and ETAP test framework on Windows
 - look at 64 bit builds in June or July once OTP end has settled down
 - move at least the .sha and .md5 files onto apache.org pages to
  ensure people have a trusted source of binaries.

 Feel free to prioritise this, or add to it.

 [1]: https://www.dropbox.com/s/jeifcxpbtpo78ak/snapshots/20120117?v=l
 [2]: http://msdn.microsoft.com/en-us/library/e7k32f4k.aspx
 [3]: http://github.com/dch/glazier
 [4]: 

[jira] [Commented] (COUCHDB-1342) Asynchronous file writes

2012-01-22 Thread Bob Dionne (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190660#comment-13190660
 ] 

Bob Dionne commented on COUCHDB-1342:
-

I should add that I'd like to get Filipe's opinion on this as this was a 
collaborative effort

 Asynchronous file writes
 

 Key: COUCHDB-1342
 URL: https://issues.apache.org/jira/browse/COUCHDB-1342
 Project: CouchDB
  Issue Type: Improvement
  Components: Database Core
Reporter: Jan Lehnardt
 Fix For: 1.3

 Attachments: COUCHDB-1342.patch


 This change updates the file module so that it can do
 asynchronous writes. Basically, it replies immediately
 to the process asking to write something to the file, with
 the position where the chunks will be written to the
 file, while a dedicated child process keeps collecting
 chunks and writing them to the file (batching them
 when possible). After issuing a series of write requests
 to the file module, the caller can call its 'flush'
 function, which will block the caller until all the
 chunks it requested to write are effectively written
 to the file.
 This maximizes use of the IO subsystem: for example, while
 the updater is traversing and modifying the btrees and
 doing CPU-bound tasks, the writes are happening in
 parallel.
 Originally described at http://s.apache.org/TVu
 Github Commit: 
 https://github.com/fdmanana/couchdb/commit/e82a673f119b82dddf674ac2e6233cd78c123554

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




How are key values stored in B+ tree nodes on-disk?

2012-01-22 Thread Riyad Kalla
In my ever-growing need to ask the most esoteric file-format questions
possible, I am curious if anyone can point me in the direction of the
answer here...

Background
-
Consider the _id and _seq B+ indices whose change-paths are appended to the
end of the Couch data file after every update. Each of the nodes written
have a basic structure of [key, value] where the key is the _id or _seq
value being indexed, and the value is an offset pointer to the next node
in the tree to seek to (or in the case of a leaf, an offset directly into
the datafile where the updated record lives).

General
---
Efficiently searching the keys stored in a node of a B+ tree after it has
been paged in requires (let's assume) that the values are in sorted order;
then the keys can be binary-searched to find the next offset or block ID
on-disk to jump to.

To be able to efficiently jump around key values like that, they must all
be of the same size. For example, let's say we store 133 keys per node
(assume it is full). That means our first comparison after paging in that
block should be at the 67th key. The only way I can jump to that position
is to know every key value is, say, 16 bytes, and then jump forward 67 * 16 =
1072 bytes before decoding the key from raw bytes and comparing.

Question
-
Given the massive potential for document _id values to vary in size, it
seems impossible for Couch to use a hard-coded key size here.
Even something sufficiently large, like 32 or 64 bytes, would still run into
situations where a longer key is being indexed, or a *very short* key is
being indexed and there are multiple magnitudes of wasted space in the
index.

The few solutions to variable-length keys I've found online require
sequential processing of the keys in a node, because the keys are
written like:
[key length][key data][offset]

This is certainly doable, but less than optimal.
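
A minimal Erlang sketch of that layout (assumed 16-bit lengths and 64-bit
offsets, purely illustrative; this is not CouchDB's actual node format):

%% Each entry is [key length][key data][offset]; lookup must scan.
encode(Entries) ->
    << <<(byte_size(K)):16, K/binary, Off:64>> || {K, Off} <- Entries >>.

find(<<Len:16, K:Len/binary, Off:64, Rest/binary>>, Key) ->
    case K of
        Key -> {ok, Off};
        _ -> find(Rest, Key)
    end;
find(<<>>, _Key) ->
    not_found.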


I am curious: how does CouchDB handle this?

Thank you for any pointers.

-R


[jira] [Resolved] (COUCHDB-1384) File descriptor leak if view compaction is cancelled

2012-01-22 Thread Filipe Manana (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-1384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Filipe Manana resolved COUCHDB-1384.


Resolution: Fixed

Applied to 1.2.x

 File descriptor leak if view compaction is cancelled
 

 Key: COUCHDB-1384
 URL: https://issues.apache.org/jira/browse/COUCHDB-1384
 Project: CouchDB
  Issue Type: Bug
Affects Versions: 1.2
Reporter: Filipe Manana
Priority: Critical
 Fix For: 1.2

 Attachments: 
 0001-Close-view-compaction-file-when-compaction-is-cancel.patch


 If a view compaction is cancelled, the compact file's fd remains open as long 
 as the view group is alive.
 This is because the couch_file the compactor uses is spawn_linked by the view 
 group and not linked to the compactor process; therefore, when the compactor 
 is shut down, the corresponding couch_file is not shut down. The view group 
 doesn't keep track of the compact file's couch_file, so it can't explicitly 
 shut it down either.
 This affects only the 1.2.x branch and is addressed by simply linking the 
 file process to the compactor process. Patch attached.
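
In spirit, the fix looks like this (a schematic with assumed names throughout;
see the attached patch for the real change):

%% Before: the view group owns the compaction couch_file, so a
%% cancelled compactor leaves it running.
%% After: link the file process to the compactor, so they die together.
Compactor = spawn_link(fun() ->
    {ok, Fd} = open_compact_file(Group),  %% assumed helper
    link(Fd),                             %% the essence of the patch
    compact(Group, Fd)                    %% assumed worker
end).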

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (COUCHDB-1342) Asynchronous file writes

2012-01-22 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190665#comment-13190665
 ] 

Filipe Manana commented on COUCHDB-1342:


I really appreciate Randall's pushing for collaboration rather than expecting a
single person to do it all or letting this fall into oblivion.

I will do some updates to the branch soon, as well as repost some performance
benchmarks, with instructions on how to reproduce them as usual, in comparison
to latest master (the results posted months ago don't account for many
improvements that came after, such as COUCHDB-1334).

 Asynchronous file writes
 

 Key: COUCHDB-1342
 URL: https://issues.apache.org/jira/browse/COUCHDB-1342
 Project: CouchDB
  Issue Type: Improvement
  Components: Database Core
Reporter: Jan Lehnardt
 Fix For: 1.3

 Attachments: COUCHDB-1342.patch


 This change updates the file module so that it can do
 asynchronous writes. Basically, it replies immediately
 to the process asking to write something to the file, with
 the position where the chunks will be written to the
 file, while a dedicated child process keeps collecting
 chunks and writing them to the file (batching them
 when possible). After issuing a series of write requests
 to the file module, the caller can call its 'flush'
 function, which will block the caller until all the
 chunks it requested to write are effectively written
 to the file.
 This maximizes use of the IO subsystem: for example, while
 the updater is traversing and modifying the btrees and
 doing CPU-bound tasks, the writes are happening in
 parallel.
 Originally described at http://s.apache.org/TVu
 Github Commit: 
 https://github.com/fdmanana/couchdb/commit/e82a673f119b82dddf674ac2e6233cd78c123554

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: [Windows] proposed binary build changes

2012-01-22 Thread Dave Cottlehuber
On 22 January 2012 13:28, Jeroen Janssen jeroen.jans...@gmail.com wrote:
 Hi,

 Just a quick check since I am not able to get (snappy) compression
 working on windows.

Hi Jeroen,

Thanks for reporting this. AFAICT the snappy_nif.dll is not being copied during
make dist or install.

IIRC you ran into this last year on master, which resulted in the COUCHDB-1197
fixes. So I'm not sure what is missing on 1.2.x for this, or if it's more
likely an issue with my packaging steps.

I'll keep you posted.

BTW compression is on by default, so there is no need to tweak ini files.

A+
Dave


Issues blocking 1.2.0 release

2012-01-22 Thread Noah Slater
Hello,

The following test fails:

./test/etap/run test/etap/242-replication-many-leaves.t


Log is:

ok 38  - Document revisions updated with 2 attachments
# Triggering replication again
[info] [<0.3567.0>] 127.0.0.1 - - HEAD /couch_test_rep_db_b/ 200
[info] [<0.3567.0>] 127.0.0.1 - - GET /couch_test_rep_db_b/ 200
[info] [<0.3567.0>] 127.0.0.1 - - GET
/couch_test_rep_db_b/_local/4296cc7705ed9d0c0cf63593b42a10b7 200
[info] [<0.6460.0>] Replication `4296cc7705ed9d0c0cf63593b42a10b7` is
using:
4 worker processes
a worker batch size of 500
20 HTTP connections
a connection timeout of 3 milliseconds
10 retries per request
socket options are: [{keepalive,true},{nodelay,false}]
source start sequence 6
[info] [<0.2.0>] starting new replication
`4296cc7705ed9d0c0cf63593b42a10b7` at <0.6460.0> (`couch_test_rep_db_a` -> `
http://127.0.0.1:53314/couch_test_rep_db_b/`)
[info] [<0.3573.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_revs_diff 200
[info] [<0.3569.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_revs_diff 200
[info] [<0.3573.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
[info] [<0.3567.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_revs_diff 200
[info] [<0.3567.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
[info] [<0.6468.0>] Retrying POST request to
http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 0.25 seconds due
to error req_timedout
[info] [<0.3573.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
[info] [<0.6468.0>] Retrying POST request to
http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 0.5 seconds due to
error req_timedout
[info] [<0.3568.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
[info] [<0.6468.0>] Retrying POST request to
http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 1.0 seconds due to
error req_timedout
[info] [<0.3571.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
[info] [<0.6468.0>] Retrying POST request to
http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 2.0 seconds due to
error req_timedout
[info] [<0.6463.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
[info] [<0.6468.0>] Retrying POST request to
http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 4.0 seconds due to
error req_timedout
[info] [<0.8537.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
[info] [<0.6468.0>] Retrying POST request to
http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 8.0 seconds due to
error req_timedout
[info] [<0.9151.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
[info] [<0.6468.0>] Retrying POST request to
http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 16.0 seconds due
to error req_timedout
[info] [<0.9768.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
[info] [<0.6468.0>] Retrying POST request to
http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 32.0 seconds due
to error req_timedout
[info] [<0.10390.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
Bail out! Timeout waiting for replication to finish
escript: exception error: bad argument
  in function  etap:diag/1
  in call from erl_eval:do_apply/5
  in call from erl_eval:exprs/5
  in call from escript:eval_exprs/5
  in call from erl_eval:local_func/5
  in call from escript:interpret/4
  in call from escript:start/1
  in call from init:start_it/1


Please advise.

Thanks,

N


Re: [Windows] proposed binary build changes

2012-01-22 Thread Noah Slater
This all sounds great, Dave.

Once we get the source distribution working for 1.2.x, I'd like to
co-ordinate with you to get the Windows build prepped at the same time. We
should aim to have you uploading the .exe, .exe.md5, and .exe.sha files to
your p.a.o space, and calling a vote on the artefacts like we do for the
source artefacts. We can then announce both distributions at the same time.

Thanks,

On Sun, Jan 22, 2012 at 2:49 PM, Dave Cottlehuber d...@muse.net.nz wrote:

 On 22 January 2012 13:28, Jeroen Janssen jeroen.jans...@gmail.com wrote:
  Hi,
 
  Just a quick check since I am not able to get (snappy) compression
  working on windows.

 Hi Jeroen,

 Thanks for reporting this. AFAICT the snappy_nif.dll is not being copied
 during
 make dist or install.

 IIRC you ran into this last year on master and resulted in COUCHDB-1197
 fixes.
 So I'm not sure what is missing on 1.2.x for this, or if its more
 likely an issue with
 my packaging steps.

 I'll keep you posted.

 BTW compression is on by default so no need to tweak ini files.

 A+
 Dave



Re: multiple repo

2012-01-22 Thread Noah Slater
As long as the source distribution can build without needing an internet
connection, I'll be happy.

On Sat, Jan 7, 2012 at 2:25 PM, Benoit Chesneau bchesn...@gmail.com wrote:

 Hi all,

 I would like to start some work on my own to test how I can merge our
 autotools build system and rebar, to provide an Erlang release and
 distribute CouchDB more easily. In Refuge I consider all the apps in
 src/ to be standalone apps, each with its own history (the old couchdb
 history is kept). To create a release, the make distdir step collects
 all the deps into one source dir, and the archive is created from that
 source directory. In the end the created release is the same as the one
 you have with couchdb, and you don't need to have git installed to
 build it. Packages created (deb, rpm & osx) are based on this source
 release.

 Can we do the same in the Apache project? I.e., having one git repo per
 app? Or do we still need to handle them in the same repo? I just wanted
 to ask to know how I should handle that, or at least start the
 integration of rebar.

 Let me know,

 - benoît



Re: Issues blocking 1.2.0 release

2012-01-22 Thread Filipe David Manana
Noah, does it fail occasionally or every time for you?

I'm assuming you're on a slow machine or the machine is a bit overloaded.
Can you try with the following patch?

diff --git a/test/etap/242-replication-many-leaves.t
b/test/etap/242-replication-many-leaves.t
index d8d3eb9..ad9d180 100755
--- a/test/etap/242-replication-many-leaves.t
+++ b/test/etap/242-replication-many-leaves.t
@@ -77,6 +77,7 @@ test() ->
     couch_server_sup:start_link(test_util:config_files()),
     ibrowse:start(),
     crypto:start(),
+    couch_config:set("replicator", "worker_processes", "1", false),

     Pairs = [
     {source_db_name(), target_db_name()},



On Sun, Jan 22, 2012 at 5:41 PM, Noah Slater nsla...@tumbolia.org wrote:
 Hello,

 The following test fails:

 ./test/etap/run test/etap/242-replication-many-leaves.t


 Log is:

 ok 38  - Document revisions updated with 2 attachments
 # Triggering replication again
 [info] [<0.3567.0>] 127.0.0.1 - - HEAD /couch_test_rep_db_b/ 200
 [info] [<0.3567.0>] 127.0.0.1 - - GET /couch_test_rep_db_b/ 200
 [info] [<0.3567.0>] 127.0.0.1 - - GET
 /couch_test_rep_db_b/_local/4296cc7705ed9d0c0cf63593b42a10b7 200
 [info] [<0.6460.0>] Replication `4296cc7705ed9d0c0cf63593b42a10b7` is
 using:
 4 worker processes
 a worker batch size of 500
 20 HTTP connections
 a connection timeout of 3 milliseconds
 10 retries per request
 socket options are: [{keepalive,true},{nodelay,false}]
 source start sequence 6
 [info] [<0.2.0>] starting new replication
 `4296cc7705ed9d0c0cf63593b42a10b7` at <0.6460.0> (`couch_test_rep_db_a` -> `
 http://127.0.0.1:53314/couch_test_rep_db_b/`)
 [info] [<0.3573.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_revs_diff 200
 [info] [<0.3569.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_revs_diff 200
 [info] [<0.3573.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
 [info] [<0.3567.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_revs_diff 200
 [info] [<0.3567.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
 [info] [<0.6468.0>] Retrying POST request to
 http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 0.25 seconds due
 to error req_timedout
 [info] [<0.3573.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
 [info] [<0.6468.0>] Retrying POST request to
 http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 0.5 seconds due to
 error req_timedout
 [info] [<0.3568.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
 [info] [<0.6468.0>] Retrying POST request to
 http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 1.0 seconds due to
 error req_timedout
 [info] [<0.3571.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
 [info] [<0.6468.0>] Retrying POST request to
 http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 2.0 seconds due to
 error req_timedout
 [info] [<0.6463.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
 [info] [<0.6468.0>] Retrying POST request to
 http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 4.0 seconds due to
 error req_timedout
 [info] [<0.8537.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
 [info] [<0.6468.0>] Retrying POST request to
 http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 8.0 seconds due to
 error req_timedout
 [info] [<0.9151.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
 [info] [<0.6468.0>] Retrying POST request to
 http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 16.0 seconds due
 to error req_timedout
 [info] [<0.9768.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
 [info] [<0.6468.0>] Retrying POST request to
 http://127.0.0.1:53314/couch_test_rep_db_b/_bulk_docs in 32.0 seconds due
 to error req_timedout
 [info] [<0.10390.0>] 127.0.0.1 - - POST /couch_test_rep_db_b/_bulk_docs 201
 Bail out! Timeout waiting for replication to finish
 escript: exception error: bad argument
  in function  etap:diag/1
  in call from erl_eval:do_apply/5
  in call from erl_eval:exprs/5
  in call from escript:eval_exprs/5
  in call from erl_eval:local_func/5
  in call from escript:interpret/4
  in call from escript:start/1
  in call from init:start_it/1


 Please advise.

 Thanks,

 N



-- 
Filipe David Manana,

Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men.


Re: Issues blocking 1.2.0 release

2012-01-22 Thread Noah Slater
On Sun, Jan 22, 2012 at 6:01 PM, Filipe David Manana fdman...@apache.org wrote:

 Noah, does it fail occasionally or every time for you?


Fails every time.


 I'm assuming you're with a slow machine or the machine is a bit overloaded.


Shouldn't be, I'm not doing anything else right now, and this is a new MBA.


 Can you try with the following patch?


Yes. Will report back.


Re: git commit: Fix SpiderMonkey version detection

2012-01-22 Thread Filipe David Manana
Paul, after this change I'm no longer able to build master (haven't
tried other branches however).

configure can't find js/jsapi.h; this is because my jsapi.h doesn't
live inside a js directory.

Ubuntu 10.10
jsapi.h full path: /usr/include/xulrunner-1.9.2.24/jsapi.h

flags passed to configure:

configure --with-js-include=/usr/include/xulrunner-1.9.2.24
--with-js-lib=/usr/lib/xulrunner-1.9.2.24

It's the only spidermonkey version I have installed.

Without this commit, configure doesn't complain and everything works fine.
Is this a local issue or something missing in the autotools config?

On Sat, Jan 21, 2012 at 10:10 PM,  dav...@apache.org wrote:
 Updated Branches:
  refs/heads/master da33e3447 -> 10047e759


 Fix SpiderMonkey version detection

 Randall's last patch to only test for JSOPTION_ANONFUNFIX ended up
 reordering the test before the headers were located. This ran into
 errors in version detection. This patch reorders the header location as
 well as adds a few more default search paths when no --with-js-include
 option is specified to account for newer SpiderMonkeys that put their
 headers into $PREFIX/include/js.


 Project: http://git-wip-us.apache.org/repos/asf/couchdb/repo
 Commit: http://git-wip-us.apache.org/repos/asf/couchdb/commit/10047e75
 Tree: http://git-wip-us.apache.org/repos/asf/couchdb/tree/10047e75
 Diff: http://git-wip-us.apache.org/repos/asf/couchdb/diff/10047e75

 Branch: refs/heads/master
 Commit: 10047e75935818e0421bdd9ac96dc21334f90e95
 Parents: da33e34
 Author: Paul Joseph Davis dav...@apache.org
 Authored: Sat Jan 21 16:08:58 2012 -0600
 Committer: Paul Joseph Davis dav...@apache.org
 Committed: Sat Jan 21 16:08:58 2012 -0600

 --
  configure.ac |   41 ++++++++++++++++++++++-------------------
  1 files changed, 22 insertions(+), 19 deletions(-)
 --


 http://git-wip-us.apache.org/repos/asf/couchdb/blob/10047e75/configure.ac
 --
 diff --git a/configure.ac b/configure.ac
 index c6d564a..adfd740 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -177,8 +177,11 @@ AS_CASE([$(uname -s)],
     [CYGWIN*], [] ,
     [*], [
     CPPFLAGS="$CPPFLAGS -I/opt/local/include"
 +    CPPFLAGS="$CPPFLAGS -I/opt/local/include/js"
     CPPFLAGS="$CPPFLAGS -I/usr/local/include"
 +    CPPFLAGS="$CPPFLAGS -I/usr/local/include/js"
     CPPFLAGS="$CPPFLAGS -I/usr/include"
 +    CPPFLAGS="$CPPFLAGS -I/usr/include/js"
     LDFLAGS="$LDFLAGS -L/opt/local/lib"
     LDFLAGS="$LDFLAGS -L/usr/local/lib"
  ])
 @@ -203,6 +206,17 @@ AS_CASE([$(uname -s)],

  AM_CONDITIONAL([WINDOWS], [test x$IS_WINDOWS = xTRUE])

 +AC_CHECK_HEADER([jsapi.h], [], [
 +    AC_CHECK_HEADER([js/jsapi.h],
 +        [
 +        CPPFLAGS="$CPPFLAGS -I$JS_INCLUDE/js"
 +        ],
 +        [
 +            AC_MSG_ERROR([Could not find the jsapi header.
 +
 +Are the Mozilla SpiderMonkey headers installed?])
 +        ])])
 +
  OLD_LIBS="$LIBS"
  LIBS="$JS_LIBS $LIBS"
  OLD_CPPFLAGS="$CPPFLAGS"
 @@ -247,6 +261,14 @@ AC_CHECK_LIB([$JS_LIB_BASE], 
 [JS_GetStringCharsAndLength],

  # Else, hope that 1.7.0 works

 +# Deal with JSScript -> JSObject -> JSScript switcheroo
 +
 +AC_CHECK_TYPE([JSScript*],
 +    [AC_DEFINE([JSSCRIPT_TYPE], [JSScript*], [Use JSObject* for scripts])],
 +    [AC_DEFINE([JSSCRIPT_TYPE], [JSObject*], [Use JSScript* for scripts])],
 +    [[#include <jsapi.h>]]
 +)
 +
  AC_DEFINE([COUCHJS_NAME], ["couchjs"], [CouchJS executable name.])

  if test x${IS_WINDOWS} = xTRUE; then
 @@ -298,25 +320,6 @@ fi
  JS_LIBS="-l$JS_LIB_BASE -lm $JS_LIBS"
  AC_SUBST(JS_LIBS)

 -AC_CHECK_HEADER([jsapi.h], [], [
 -    AC_CHECK_HEADER([js/jsapi.h],
 -        [
 -        CPPFLAGS="$CPPFLAGS -I$JS_INCLUDE/js"
 -        ],
 -        [
 -            AC_MSG_ERROR([Could not find the jsapi header.
 -
 -Are the Mozilla SpiderMonkey headers installed?])
 -        ])])
 -
 -# Deal with JSScript -> JSObject -> JSScript switcheroo
 -
 -AC_CHECK_TYPE([JSScript*],
 -    [AC_DEFINE([JSSCRIPT_TYPE], [JSScript*], [Use JSObject* for scripts])],
 -    [AC_DEFINE([JSSCRIPT_TYPE], [JSObject*], [Use JSScript* for scripts])],
 -    [[#include <jsapi.h>]]
 -)
 -
  LIBS="$OLD_LIBS"
  CPPFLAGS="$OLD_CPPFLAGS"





-- 
Filipe David Manana,

Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men.


Re: git commit: ignore .*

2012-01-22 Thread Noah Slater
Ah, okay. Thanks.

On Tue, Jan 3, 2012 at 5:26 PM, Randall Leeds randall.le...@gmail.com wrote:

 On Tue, Jan 3, 2012 at 08:03, Paul Davis paul.joseph.da...@gmail.com
 wrote:
  There was a discussion about it for editor temp files and the like. No
  one objected and it was added quickly cause Gavin said quick make a
  commit to test buildbot.
 
  I haven't seen any objections so I'd go ahead and remove the entries
  it made obsolete.

 I don't see any obsolete entries.
 Most of the entries at the top are star dot something, but this one
 is dot anything, meaning dotfiles (beginning with the dot).
 This is the only line in .gitignore starting with a dot.

 
  On Sun, Jan 1, 2012 at 5:27 PM, Noah Slater nsla...@tumbolia.org
 wrote:
  Why was this added? I see it's still there. Was it a temporary commit?
 If
  we want this in properly, then the first lot of entries could be
 removed.
 
  On Wed, Dec 7, 2011 at 10:33 PM, rand...@apache.org wrote:
 
  Updated Branches:
    refs/heads/master 5be3eb3b5 -> 1aafbf4b3
 
 
  ignore .*
 
  Great excuse to help gmacdonald check JIRA integration, too.
  Fix TEST-1
 
 
  Project: http://git-wip-us.apache.org/repos/asf/couchdb/repo
  Commit: http://git-wip-us.apache.org/repos/asf/couchdb/commit/1aafbf4b
  Tree: http://git-wip-us.apache.org/repos/asf/couchdb/tree/1aafbf4b
  Diff: http://git-wip-us.apache.org/repos/asf/couchdb/diff/1aafbf4b
 
  Branch: refs/heads/master
  Commit: 1aafbf4b33884b4b63ca7a2790a071fe346b2cc4
  Parents: 5be3eb3
  Author: Randall Leeds rand...@apache.org
  Authored: Wed Dec 7 14:32:24 2011 -0800
  Committer: Randall Leeds rand...@apache.org
  Committed: Wed Dec 7 14:32:27 2011 -0800
 
  --
   .gitignore |2 ++
   1 files changed, 2 insertions(+), 0 deletions(-)
  --
 
 
 
 http://git-wip-us.apache.org/repos/asf/couchdb/blob/1aafbf4b/.gitignore
  --
  diff --git a/.gitignore b/.gitignore
  index f0ebdef..a24f0a1 100644
  --- a/.gitignore
  +++ b/.gitignore
  @@ -15,6 +15,8 @@ configure
   autom4te.cache
   build-aux
   *.diff
  +!.gitignore
  +.*
 
   # ./configure
 
 
 



Re: Unique instance IDs?

2012-01-22 Thread Noah Slater
Sorry to bump this old thread, but just going through my backlog.

With regard to URLs, I think there is some confusion about the purpose of a
URL here.

If I write a cool essay, say, and I stick that up at
nslater.org/my-cool-essay, then I can link to it from other places on the
web using that address. I might also want to put my cool essay on Dropbox,
or post it to Tumblr, or send it in an email. Now my cool essay has lots of
URLs, each one of them perfectly valid. I don't have to go and edit the
original copy at nslater.org/my-cool-essay, because I am making copies of
it. My cool essay is completely unaware of the URLs that are being used to
point to it. And it doesn't care that many URLs point to it.

Yes, URLs can be used as identifiers. But when you do this, you tie the
thing you're naming to the place you're hosting it. Sometimes that is
useful; other times it will cripple you. There is nothing about URLs that
requires you to do this. I would hazard a guess that 99% of URLs are
de-coupled from the things they point to. WebArch is much more robust when
the identity of the object is de-coupled from the URL. Look at Atom: the ID
element is supposed to be a URL, but they recommend a non-dereferenceable
format, precisely to decouple posts from the location you happen to be
hosting them this month.

Hey, if we're gonna use URLs, maybe we want to go down the same route?

http://en.wikipedia.org/wiki/Tag_URI
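
For example, a minimal tag URI for the essay above might look like this
(illustrative, following RFC 4151):

tag:nslater.org,2012:my-cool-essay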


At this point, I'm not sure what they buy us over UUIDs.

Thoughts?

Thanks,

N


Re: Issues blocking 1.2.0 release

2012-01-22 Thread Noah Slater
No change, still fails.

On Sun, Jan 22, 2012 at 6:08 PM, Noah Slater nsla...@tumbolia.org wrote:


 On Sun, Jan 22, 2012 at 6:01 PM, Filipe David Manana 
 fdman...@apache.org wrote:

 Noah, does it fail occasionally or every time for you?


 Fails every time.


 I'm assuming you're with a slow machine or the machine is a bit
 overloaded.


 Shouldn't be, I'm not doing anything else right now, and this is a new MBA.


 Can you try with the following patch?


 Yes. Will report back.




Re: git commit: Fix SpiderMonkey version detection

2012-01-22 Thread Bob Dionne
Same issue here; jsapi.h isn't in a js directory. MBA running Lion.

On Jan 22, 2012, at 1:15 PM, Filipe David Manana wrote:

 Paul, after this change I'm no longer able to build master (haven't
 tried other branches however).
 
 configure can't find js/jsapi.h, this is because my jsapi.h doesn't
 live inside a js directory.
 
 Ubuntu 10.10
 jsapi.h full path: /usr/include/xulrunner-1.9.2.24/jsapi.h
 
 flags passed to configure:
 
 configure --with-js-include=/usr/include/xulrunner-1.9.2.24
 --with-js-lib=/usr/lib/xulrunner-1.9.2.24
 
 It's the only spidermonkey version I have installed.
 
 Without this commit, configure doesn't complain and everything works fine.
 Is this a local issue or something missing in the autotools config?
 
 On Sat, Jan 21, 2012 at 10:10 PM,  dav...@apache.org wrote:
 Updated Branches:
  refs/heads/master da33e3447 -> 10047e759
 
 
 Fix SpiderMonkey version detection
 
 Randall's last patch to only test for JSOPTION_ANONFUNFIX ended up
 reordering the test before the headers were located. This ran into
 errors in version detection. This patch reorders the header location as
 well as adds a few more default search paths when no --with-js-include
 option is specified to account for newer SpiderMonkeys that put their
 headers into $PREFIX/include/js.
 
 
 Project: http://git-wip-us.apache.org/repos/asf/couchdb/repo
 Commit: http://git-wip-us.apache.org/repos/asf/couchdb/commit/10047e75
 Tree: http://git-wip-us.apache.org/repos/asf/couchdb/tree/10047e75
 Diff: http://git-wip-us.apache.org/repos/asf/couchdb/diff/10047e75
 
 Branch: refs/heads/master
 Commit: 10047e75935818e0421bdd9ac96dc21334f90e95
 Parents: da33e34
 Author: Paul Joseph Davis dav...@apache.org
 Authored: Sat Jan 21 16:08:58 2012 -0600
 Committer: Paul Joseph Davis dav...@apache.org
 Committed: Sat Jan 21 16:08:58 2012 -0600
 
 --
  configure.ac |   41 ++++++++++++++++++++++-------------------
  1 files changed, 22 insertions(+), 19 deletions(-)
 --
 
 
 http://git-wip-us.apache.org/repos/asf/couchdb/blob/10047e75/configure.ac
 --
 diff --git a/configure.ac b/configure.ac
 index c6d564a..adfd740 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -177,8 +177,11 @@ AS_CASE([$(uname -s)],
     [CYGWIN*], [] ,
     [*], [
     CPPFLAGS="$CPPFLAGS -I/opt/local/include"
 +    CPPFLAGS="$CPPFLAGS -I/opt/local/include/js"
     CPPFLAGS="$CPPFLAGS -I/usr/local/include"
 +    CPPFLAGS="$CPPFLAGS -I/usr/local/include/js"
     CPPFLAGS="$CPPFLAGS -I/usr/include"
 +    CPPFLAGS="$CPPFLAGS -I/usr/include/js"
     LDFLAGS="$LDFLAGS -L/opt/local/lib"
     LDFLAGS="$LDFLAGS -L/usr/local/lib"
  ])
 @@ -203,6 +206,17 @@ AS_CASE([$(uname -s)],

  AM_CONDITIONAL([WINDOWS], [test x$IS_WINDOWS = xTRUE])

 +AC_CHECK_HEADER([jsapi.h], [], [
 +    AC_CHECK_HEADER([js/jsapi.h],
 +        [
 +        CPPFLAGS="$CPPFLAGS -I$JS_INCLUDE/js"
 +        ],
 +        [
 +            AC_MSG_ERROR([Could not find the jsapi header.
 +
 +Are the Mozilla SpiderMonkey headers installed?])
 +        ])])
 +
  OLD_LIBS="$LIBS"
  LIBS="$JS_LIBS $LIBS"
  OLD_CPPFLAGS="$CPPFLAGS"
 @@ -247,6 +261,14 @@ AC_CHECK_LIB([$JS_LIB_BASE], 
 [JS_GetStringCharsAndLength],

  # Else, hope that 1.7.0 works

 +# Deal with JSScript -> JSObject -> JSScript switcheroo
 +
 +AC_CHECK_TYPE([JSScript*],
 +    [AC_DEFINE([JSSCRIPT_TYPE], [JSScript*], [Use JSObject* for scripts])],
 +    [AC_DEFINE([JSSCRIPT_TYPE], [JSObject*], [Use JSScript* for scripts])],
 +    [[#include <jsapi.h>]]
 +)
 +
  AC_DEFINE([COUCHJS_NAME], ["couchjs"], [CouchJS executable name.])

  if test x${IS_WINDOWS} = xTRUE; then
 @@ -298,25 +320,6 @@ fi
  JS_LIBS="-l$JS_LIB_BASE -lm $JS_LIBS"
  AC_SUBST(JS_LIBS)

 -AC_CHECK_HEADER([jsapi.h], [], [
 -    AC_CHECK_HEADER([js/jsapi.h],
 -        [
 -        CPPFLAGS="$CPPFLAGS -I$JS_INCLUDE/js"
 -        ],
 -        [
 -            AC_MSG_ERROR([Could not find the jsapi header.
 -
 -Are the Mozilla SpiderMonkey headers installed?])
 -        ])])
 -
 -# Deal with JSScript -> JSObject -> JSScript switcheroo
 -
 -AC_CHECK_TYPE([JSScript*],
 -    [AC_DEFINE([JSSCRIPT_TYPE], [JSScript*], [Use JSObject* for scripts])],
 -    [AC_DEFINE([JSSCRIPT_TYPE], [JSObject*], [Use JSScript* for scripts])],
 -    [[#include <jsapi.h>]]
 -)
 -
  LIBS="$OLD_LIBS"
  CPPFLAGS="$OLD_CPPFLAGS"
 
 
 
 
 
 -- 
 Filipe David Manana,
 
 Reasonable men adapt themselves to the world.
  Unreasonable men adapt the world to themselves.
  That's why all progress depends on unreasonable men.



Re: Do we need 2 entry points for the replication?

2012-01-22 Thread Sam Bisbee
On Thu, Jan 19, 2012 at 11:46 PM, Jason Smith j...@iriscouch.com wrote:
 On Fri, Jan 20, 2012 at 9:04 AM, Randall Leeds randall.le...@gmail.com 
 wrote:
 Exposing features as manipulations to normal documents makes CouchDB's
 API simpler and more orthogonal.

 On the other hand, exposing features via special-purpose APIs hides
 the implementation and frees us to change how it works under the hood.

 Thank you for not saying under the covers. Activity under the
 hood--or bonnet--hardly resembles activity under the covers.

 All agree that state is stored in a database. So the question is, have
 a database and an API defined as changes to it (perhaps via _show,
 _list, and _update); or, have a database and an API defined otherwise.
 Either way, you have to bite the bullet and make a breaking change; so
 is hiding the implementation a different matter?

+1 for hiding the implementation. Who says that those other API
endpoints that you would use to manage the replication db will be
around in a year?

We protect ourselves from the future by hiding the implementation
details, thereby not making the same mistake twice.

Cheers,

--
Sam Bisbee


Re: Issues blocking 1.2.0 release

2012-01-22 Thread Filipe David Manana
On Sun, Jan 22, 2012 at 6:47 PM, Noah Slater nsla...@tumbolia.org wrote:
 No change, still fails.

Noah, to try to find out whether it's due to slowness of the machine or
some other issue, do you think you could try increasing the following
timeout in the test?

diff --git a/test/etap/242-replication-many-leaves.t
b/test/etap/242-replication-many-leaves.t
index d8d3eb9..737cd31 100755
--- a/test/etap/242-replication-many-leaves.t
+++ b/test/etap/242-replication-many-leaves.t
@@ -287,6 +287,6 @@ replicate(Source, Target) ->
     receive
     {'DOWN', MonRef, process, Pid, Reason} ->
         etap:is(Reason, normal, "Replication finished successfully")
-    after 30 ->
+    after 90 ->
        etap:bail("Timeout waiting for replication to finish")
    end.


 On Sun, Jan 22, 2012 at 6:08 PM, Noah Slater nsla...@tumbolia.org wrote:


 On Sun, Jan 22, 2012 at 6:01 PM, Filipe David Manana 
 fdman...@apache.org wrote:

 Noah, does it fail occasionally or every time for you?


 Fails every time.


 I'm assuming you're with a slow machine or the machine is a bit
 overloaded.


 Shouldn't be, I'm not doing anything else right now, and this is a new MBA.


 Can you try with the following patch?


 Yes. Will report back.





-- 
Filipe David Manana,

Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men.


Re: Issues blocking 1.2.0 release

2012-01-22 Thread Noah Slater
OVAR 9000! (Testing now...)

On Sun, Jan 22, 2012 at 6:56 PM, Filipe David Manana fdman...@apache.org wrote:

 On Sun, Jan 22, 2012 at 6:47 PM, Noah Slater nsla...@tumbolia.org wrote:
  No change, still fails.

 Noah, to try to find out if it's due to slowness of the machine or
 some other issue, do you think you can try to increase the following
 timeout in the test?

 diff --git a/test/etap/242-replication-many-leaves.t
 b/test/etap/242-replication-many-leaves.t
 index d8d3eb9..737cd31 100755
 --- a/test/etap/242-replication-many-leaves.t
 +++ b/test/etap/242-replication-many-leaves.t
 @@ -287,6 +287,6 @@ replicate(Source, Target) ->
      receive
      {'DOWN', MonRef, process, Pid, Reason} ->
          etap:is(Reason, normal, "Replication finished successfully")
 -    after 30 ->
 +    after 90 ->
          etap:bail("Timeout waiting for replication to finish")
      end.

 
  On Sun, Jan 22, 2012 at 6:08 PM, Noah Slater nsla...@tumbolia.org
 wrote:
 
 
  On Sun, Jan 22, 2012 at 6:01 PM, Filipe David Manana 
  fdman...@apache.org wrote:
 
  Noah, does it fail occasionally or every time for you?
 
 
  Fails every time.
 
 
  I'm assuming you're with a slow machine or the machine is a bit
  overloaded.
 
 
  Shouldn't be, I'm not doing anything else right now, and this is a new
 MBA.
 
 
  Can you try with the following patch?
 
 
  Yes. Will report back.
 
 



 --
 Filipe David Manana,

 Reasonable men adapt themselves to the world.
  Unreasonable men adapt the world to themselves.
  That's why all progress depends on unreasonable men.



Re: git commit: Fix SpiderMonkey version detection

2012-01-22 Thread Dave Cottlehuber
On 22 January 2012 19:15, Filipe David Manana fdman...@apache.org wrote:
 Paul, after this change I'm no longer able to build master (haven't
 tried other branches however).

Ditto.

 configure can't find js/jsapi.h, this is because my jsapi.h doesn't
 live inside a js directory.

 Ubuntu 10.10
 jsapi.h full path: /usr/include/xulrunner-1.9.2.24/jsapi.h

 flags passed to configure:

 configure --with-js-include=/usr/include/xulrunner-1.9.2.24
 --with-js-lib=/usr/lib/xulrunner-1.9.2.24

Windows::
/relax/js-1.8.5/js/src/dist/include/jsapi.h

maas@sendai /relax/couchdb
$ more /relax/bin/couchdb_config_js185.sh
#!/bin/sh
COUCH_TOP=`pwd`
export COUCH_TOP

./configure \
--with-js-lib=/relax/js-1.8.5/js/src/dist/lib \
--with-js-include=/relax/js-1.8.5/js/src/dist/include \
...

A+
Dave


Re: Do we need 2 entry points for the replication?

2012-01-22 Thread Dave Cottlehuber
On 19 January 2012 22:56, Paul Davis paul.joseph.da...@gmail.com wrote:
 On Thu, Jan 19, 2012 at 3:52 PM, Robert Newson rnew...@apache.org wrote:
 I would prefer to see a single /_replicate entrypoint, with, say,
 persistent:true to indicate that the replication settings should be
 stored. We would also need an API to list all persistent replication
 tasks and one to delete them. Which would look a lot like the
 _replicator database, though much more controlled (no public passwords
 for those jobs that require auth).

+1 while I understand *why* we have different APIs at the moment, it
is definitely
confusing for people.

 I think it's too late, though. There's work on master to fix the
 issues with _replicator now (and the similar ones in _user). While I
 don't like the approach, it does solve the problem.


 We can break it eventually and I think we should consider it sooner
 rather than later.

As Damien observed, sometimes it's not until you've already tried that you
understand the requirements better.

ATM we are concerned that the short-term user impact of reverting an
API decision is greater than the damage done long-term by having
something that *every* future user will struggle with.

Either we release new APIs more frequently, perhaps with an experimental
tag, to get real-world feedback, or we need to batch these things up for a
larger 2.0 release where we revert a lot of existing functionality.

Are there other options that help get features to users quickly, and enable
tidying up in future?

 Bottom line: It's my opinion that _replicator (and _user) were wrongly
 exposed as full-blooded databases when all  we needed to use was the
 database format (and carefully curate API endpoints). But, alas, that
 train has sailed.


 I seem to recall someone else with a similar opinion even when these
 things were being designed. ;)

 Also, what kind of crazy sailing trains do you brits have over there
 and how do I get a ticket to ride on one?

http://farm4.staticflickr.com/3008/5840702274_bd17fe8dee_z.jpg

A+
Dave


Re: git commit: Fix SpiderMonkey version detection

2012-01-22 Thread Paul Davis
Dave pasted me part of his config.log. I realized that I moved the header check
above the spot where we assign the JS_CPPFLAGS, which was dumb of me.
Pushed a fix in 572b561adbf852e08c7397519070f299d0b401e4 to master.
As soon as I have confirmation it's correct I'll backport to 1.2.x and
1.1.x.

On Sun, Jan 22, 2012 at 2:38 PM, Paul Davis paul.joseph.da...@gmail.com wrote:
 Most odd since I only added three directories to search for.

 Can you all post me a config.log so I can see the compiler options being used?

 On Sun, Jan 22, 2012 at 1:56 PM, Dave Cottlehuber d...@muse.net.nz wrote:
 On 22 January 2012 19:15, Filipe David Manana fdman...@apache.org wrote:
 Paul, after this change I'm no longer able to build master (haven't
 tried other branches however).

 Ditto.

 configure can't find js/jsapi.h, this is because my jsapi.h doesn't
 live inside a js directory.

 Ubuntu 10.10
 jsapi.h full path: /usr/include/xulrunner-1.9.2.24/jsapi.h

 flags passed to configure:

 configure --with-js-include=/usr/include/xulrunner-1.9.2.24
 --with-js-lib=/usr/lib/xulrunner-1.9.2.24

 Windows::
 /relax/js-1.8.5/js/src/dist/include/jsapi.h

 maas@sendai /relax/couchdb
 $ more /relax/bin/couchdb_config_js185.sh
 #!/bin/sh
 COUCH_TOP=`pwd`
 export COUCH_TOP

 ./configure \
 --with-js-lib=/relax/js-1.8.5/js/src/dist/lib \
 --with-js-include=/relax/js-1.8.5/js/src/dist/include \
 ...

 A+
 Dave


Re: git commit: Fix SpiderMonkey version detection

2012-01-22 Thread Paul Davis
Dave confirmed. Backported to 1.1.x and 1.2.x. Sorry for the dumb
mistake on that one.

On Sun, Jan 22, 2012 at 2:47 PM, Paul Davis paul.joseph.da...@gmail.com wrote:
 Dave pasted me part of his. I realized that I moved the header check
 above the spot where we assign the JS_CPPFLAGS which was dumb of me.
 Pushed a fix in 572b561adbf852e08c7397519070f299d0b401e4 to master.
 Soon as I have confirmation it's correct I'll backport to 1.2.x and
 1.1.x

 On Sun, Jan 22, 2012 at 2:38 PM, Paul Davis paul.joseph.da...@gmail.com 
 wrote:
 Most odd since I only added three directories to search for.

 Can you all post me a config.log so I can see the compiler options being 
 used?

 On Sun, Jan 22, 2012 at 1:56 PM, Dave Cottlehuber d...@muse.net.nz wrote:
 On 22 January 2012 19:15, Filipe David Manana fdman...@apache.org wrote:
 Paul, after this change I'm no longer able to build master (haven't
 tried other branches however).

 Ditto.

 configure can't find js/jsapi.h, this is because my jsapi.h doesn't
 live inside a js directory.

 Ubuntu 10.10
 jsapi.h full path: /usr/include/xulrunner-1.9.2.24/jsapi.h

 flags passed to configure:

 configure --with-js-include=/usr/include/xulrunner-1.9.2.24
 --with-js-lib=/usr/lib/xulrunner-1.9.2.24

 Windows::
 /relax/js-1.8.5/js/src/dist/include/jsapi.h

 maas@sendai /relax/couchdb
 $ more /relax/bin/couchdb_config_js185.sh
 #!/bin/sh
 COUCH_TOP=`pwd`
 export COUCH_TOP

 ./configure \
 --with-js-lib=/relax/js-1.8.5/js/src/dist/lib \
 --with-js-include=/relax/js-1.8.5/js/src/dist/include \
 ...

 A+
 Dave


Re: git commit: Remove dead _all_docs code

2012-01-22 Thread Randall Leeds
Oh that's pretty. Thanks, Bob.

On Sun, Jan 22, 2012 at 06:25,  rnew...@apache.org wrote:
 Updated Branches:
  refs/heads/master 6dba2e911 -> d59cdd71b


 Remove dead _all_docs code


 Project: http://git-wip-us.apache.org/repos/asf/couchdb/repo
 Commit: http://git-wip-us.apache.org/repos/asf/couchdb/commit/d59cdd71
 Tree: http://git-wip-us.apache.org/repos/asf/couchdb/tree/d59cdd71
 Diff: http://git-wip-us.apache.org/repos/asf/couchdb/diff/d59cdd71

 Branch: refs/heads/master
 Commit: d59cdd71b356a454eff36b52bca0c212b2f03984
 Parents: 6dba2e9
 Author: Robert Newson rnew...@apache.org
 Authored: Sun Jan 22 14:07:49 2012 +
 Committer: Robert Newson rnew...@apache.org
 Committed: Sun Jan 22 14:07:49 2012 +

 --
  src/couchdb/couch_httpd_db.erl |  143 ------------------------------------
  1 files changed, 0 insertions(+), 143 deletions(-)
 --


 http://git-wip-us.apache.org/repos/asf/couchdb/blob/d59cdd71/src/couchdb/couch_httpd_db.erl
 --
 diff --git a/src/couchdb/couch_httpd_db.erl b/src/couchdb/couch_httpd_db.erl
 index 1bcfeff..f669643 100644
 --- a/src/couchdb/couch_httpd_db.erl
 +++ b/src/couchdb/couch_httpd_db.erl
 @@ -340,26 +340,6 @@ 
 db_req(#httpd{method='POST',path_parts=[_,<<"_purge">>]}=Req, Db) ->
  db_req(#httpd{path_parts=[_,<<"_purge">>]}=Req, _Db) ->
     send_method_not_allowed(Req, "POST");

 -db_req(#httpd{method='GET',path_parts=[_,<<"_all_docs">>]}=Req, Db) ->
 -    Keys = couch_httpd:qs_json_value(Req, "keys", nil),
 -    all_docs_view(Req, Db, Keys);
 -
 -db_req(#httpd{method='POST',path_parts=[_,<<"_all_docs">>]}=Req, Db) ->
 -    couch_httpd:validate_ctype(Req, "application/json"),
 -    {Fields} = couch_httpd:json_body_obj(Req),
 -    case couch_util:get_value(<<"keys">>, Fields, nil) of
 -    nil ->
 -        ?LOG_DEBUG("POST to _all_docs with no keys member.", []),
 -        all_docs_view(Req, Db, nil);
 -    Keys when is_list(Keys) ->
 -        all_docs_view(Req, Db, Keys);
 -    _ ->
 -        throw({bad_request, <<"`keys` member must be a array.">>})
 -    end;
 -
 -db_req(#httpd{path_parts=[_,<<"_all_docs">>]}=Req, _Db) ->
 -    send_method_not_allowed(Req, "GET,HEAD,POST");
 -
  db_req(#httpd{method='POST',path_parts=[_,<<"_missing_revs">>]}=Req, Db) ->
     {JsonDocIdRevs} = couch_httpd:json_body_obj(Req),
     JsonDocIdRevs2 = [{Id, [couch_doc:parse_rev(RevStr) || RevStr <- 
 RevStrs]} || {Id, RevStrs} <- JsonDocIdRevs],
 @@ -458,129 +438,6 @@ db_req(#httpd{path_parts=[_, DocId]}=Req, Db) ->
  db_req(#httpd{path_parts=[_, DocId | FileNameParts]}=Req, Db) ->
     db_attachment_req(Req, Db, DocId, FileNameParts).

 -all_docs_view(Req, Db, Keys) ->
 -    case couch_db:is_system_db(Db) of
 -    true ->
 -        case (catch couch_db:check_is_admin(Db)) of
 -        ok ->
 -            do_all_docs_view(Req, Db, Keys);
 -        _ ->
 -            throw({forbidden, <<"Only admins can access _all_docs",
 -                " of system databases.">>})
 -        end;
 -    false ->
 -        do_all_docs_view(Req, Db, Keys)
 -    end.
 -
 -do_all_docs_view(Req, Db, Keys) ->
 -    RawCollator = fun(A, B) -> A < B end,
 -    #view_query_args{
 -        start_key = StartKey,
 -        start_docid = StartDocId,
 -        end_key = EndKey,
 -        end_docid = EndDocId,
 -        limit = Limit,
 -        skip = SkipCount,
 -        direction = Dir,
 -        inclusive_end = Inclusive
 -    } = QueryArgs
 -      = couch_httpd_view:parse_view_params(Req, Keys, map, RawCollator),
 -    {ok, Info} = couch_db:get_db_info(Db),
 -    CurrentEtag = couch_httpd:make_etag(Info),
 -    couch_httpd:etag_respond(Req, CurrentEtag, fun() ->
 -
 -        TotalRowCount = couch_util:get_value(doc_count, Info),
 -        StartId = if is_binary(StartKey) -> StartKey;
 -        true -> StartDocId
 -        end,
 -        EndId = if is_binary(EndKey) -> EndKey;
 -        true -> EndDocId
 -        end,
 -        FoldAccInit = {Limit, SkipCount, undefined, []},
 -        UpdateSeq = couch_db:get_update_seq(Db),
 -        JsonParams = case couch_httpd:qs_value(Req, "update_seq") of
 -        "true" ->
 -            [{update_seq, UpdateSeq}];
 -        _Else ->
 -            []
 -        end,
 -        case Keys of
 -        nil ->
 -            FoldlFun = couch_httpd_view:make_view_fold_fun(Req, QueryArgs, 
 CurrentEtag, Db, UpdateSeq,
 -                TotalRowCount, #view_fold_helper_funs{
 -                    reduce_count = fun couch_db:enum_docs_reduce_to_count/1,
 -                    send_row = fun all_docs_send_json_view_row/6
 -                }),
 -            AdapterFun = fun(#full_doc_info{id=Id}=FullDocInfo, Offset, Acc) ->
 -                case couch_doc:to_doc_info(FullDocInfo) of
 -                #doc_info{revs=[#rev_info{deleted=false}|_]} = DocInfo ->
 -                    FoldlFun({{Id, Id}, DocInfo}, Offset, Acc);
 -                #doc_info{revs=[#rev_info{deleted=true}|_]} ->
 

Re: [Windows] proposed binary build changes

2012-01-22 Thread Dave Cottlehuber
On 22 January 2012 15:49, Dave Cottlehuber d...@muse.net.nz wrote:
 On 22 January 2012 13:28, Jeroen Janssen jeroen.jans...@gmail.com wrote:
 Hi,

 Just a quick check since I am not able to get (snappy) compression
 working on windows.

 Hi Jeroen,

 Thanks for reporting this. AFAICT the snappy_nif.dll is not being copied 
 during
 make dist or install.

 IIRC you ran into this last year on master and resulted in COUCHDB-1197 fixes.
 So I'm not sure what is missing on 1.2.x for this, or if its more
 likely an issue with
 my packaging steps.

 I'll keep you posted.

 BTW compression is on by default so no need to tweak ini files.

 A+
 Dave

Hi Jeroen,

While I look for why this is not working in the build, can you copy
snappy_nif.dll [1] into $COUCH/lib/snappy-1.0.3/priv/ and then test
with snappy_test.erl, courtesy of Filipe [2]?

3> pwd().
C:/couch/1.2.0a-8d83b39-git_otp_R15B_SDK7.1/bin
ok
4> c("/couch/snappy_tests").
{ok,snappy_tests}
5> snappy_tests:test().
  All 2 tests passed.
ok

Your higher-level compression tests should be fine.

[1]: https://www.dropbox.com/s/jeifcxpbtpo78ak/snapshots/20120116?v=l
[2] 
https://github.com/fdmanana/snappy-erlang-nif/blob/master/test/snappy_tests.erl

A+
Dave


Re: Issues blocking 1.2.0 release

2012-01-22 Thread Filipe David Manana
On Sun, Jan 22, 2012 at 7:20 PM, Noah Slater nsla...@tumbolia.org wrote:
 Works. How do we proceed?

How long does the test run for you? On 2 different physical machines, it
takes about 1 minute and 10 seconds for me.

Perhaps some manual replication tests could confirm whether there's
something wrong with the codebase or your environment, or whether simply
increasing the timeout is harmless.
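(For reference, a manual one-shot replication can be kicked off with a
plain HTTP request; the database names here are placeholders:

curl -X POST http://127.0.0.1:5984/_replicate \
     -H "Content-Type: application/json" \
     -d '{"source":"sourcedb","target":"targetdb"}'

The response reports the session history and whether the replication
completed.)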


 On Sun, Jan 22, 2012 at 7:05 PM, Noah Slater nsla...@tumbolia.org wrote:

 OVAR 9000! (Testing now...)


 On Sun, Jan 22, 2012 at 6:56 PM, Filipe David Manana 
 fdman...@apache.orgwrote:

 On Sun, Jan 22, 2012 at 6:47 PM, Noah Slater nsla...@tumbolia.org
 wrote:
  No change, still fails.

 Noah, to try to find out if it's due to slowness of the machine or
 some other issue, do you think you can try to increase the following
 timeout in the test?

 diff --git a/test/etap/242-replication-many-leaves.t
 b/test/etap/242-replication-many-leaves.t
 index d8d3eb9..737cd31 100755
 --- a/test/etap/242-replication-many-leaves.t
 +++ b/test/etap/242-replication-many-leaves.t
 @@ -287,6 +287,6 @@ replicate(Source, Target) ->
     receive
     {'DOWN', MonRef, process, Pid, Reason} ->
         etap:is(Reason, normal, "Replication finished successfully")
 -    after 30 ->
 +    after 90 ->
         etap:bail("Timeout waiting for replication to finish")
     end.

 
  On Sun, Jan 22, 2012 at 6:08 PM, Noah Slater nsla...@tumbolia.org
 wrote:
 
 
  On Sun, Jan 22, 2012 at 6:01 PM, Filipe David Manana 
 fdman...@apache.orgwrote:
 
  Noah, does it fail occasionally or every time for you?
 
 
  Fails every time.
 
 
  I'm assuming you have a slow machine, or the machine is a bit
  overloaded.
 
 
  Shouldn't be, I'm not doing anything else right now, and this is a new
 MBA.
 
 
  Can you try with the following patch?
 
 
  Yes. Will report back.
 
 



 --
 Filipe David Manana,

 Reasonable men adapt themselves to the world.
  Unreasonable men adapt the world to themselves.
  That's why all progress depends on unreasonable men.






-- 
Filipe David Manana,

Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men.


[jira] [Commented] (COUCHDB-1342) Asynchronous file writes

2012-01-22 Thread Filipe Manana (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13190821#comment-13190821
 ] 

Filipe Manana commented on COUCHDB-1342:


I ran a few more tests comparing latest master against the COUCHDB-1342 branch
(after merging master into it).

Database writes
===============

* 1Kb documents 
(https://github.com/fdmanana/basho_bench_couch/blob/master/couch_docs/doc_1kb.json)

http://graphs.mikeal.couchone.com/#/graph/7c13e2bdebfcd17aab424e68f225fe9a

* 2Kb documents 
(https://github.com/fdmanana/basho_bench_couch/blob/master/couch_docs/doc_2kb.json)

http://graphs.mikeal.couchone.com/#/graph/7c13e2bdebfcd17aab424e68f2261504

* 11Kb documents
(https://github.com/fdmanana/basho_bench_couch/blob/master/couch_docs/doc_11kb.json)

http://graphs.mikeal.couchone.com/#/graph/7c13e2bdebfcd17aab424e68f2262e98


View indexer
============

Test database:  http://fdmanana.iriscouch.com/_utils/many_docs

* master

$ echo 3 > /proc/sys/vm/drop_caches
$ time curl http://localhost:5984/many_docs/_design/test/_view/test1
{"rows":[
{"key":null,"value":2000}
]}

real    29m42.041s
user    0m0.016s
sys     0m0.036s


* master + patch (branch COUCHDB-1342)

$ echo 3 > /proc/sys/vm/drop_caches
$ time curl http://localhost:5984/many_docs/_design/test/_view/test1
{"rows":[
{"key":null,"value":2000}
]}

real    26m13.112s
user    0m0.008s
sys     0m0.036s

Before COUCHDB-1334, and possibly the refactored indexer as well, the 
difference used to be more significant (like in the results presented in the 
dev mail http://s.apache.org/TVu).

 Asynchronous file writes
 

 Key: COUCHDB-1342
 URL: https://issues.apache.org/jira/browse/COUCHDB-1342
 Project: CouchDB
  Issue Type: Improvement
  Components: Database Core
Reporter: Jan Lehnardt
 Fix For: 1.3

 Attachments: COUCHDB-1342.patch


 This change updates the file module so that it can do
 asynchronous writes. Basically, it replies immediately
 to processes asking to write something to the file, with
 the position where the chunks will be written, while a
 dedicated child process keeps collecting chunks and
 writing them to the file (batching them when possible).
 After issuing a series of write requests to the file
 module, the caller can call its 'flush' function, which
 blocks the caller until all the chunks it requested to
 write are effectively written to the file.
 This maximizes use of the IO subsystem: for example, while
 the updater is traversing and modifying the btrees and
 doing CPU-bound tasks, the writes happen in parallel.
 Originally described at http://s.apache.org/TVu
 Github Commit: 
 https://github.com/fdmanana/couchdb/commit/e82a673f119b82dddf674ac2e6233cd78c123554
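(A rough sketch of the pattern described above, for readers skimming the
archive. This is illustrative only, not the actual couch_file code; the
module name and message shapes are invented:

%% Caller gets the write position back immediately; the writer process
%% performs the actual file:write/2, and flush/1 blocks until every
%% append requested before it has hit the file.
-module(async_writer_sketch).
-export([start/1, append/2, flush/1]).

start(Path) ->
    {ok, spawn_link(fun() ->
        {ok, Fd} = file:open(Path, [append, raw, binary]),
        loop(Fd, 0)
    end)}.

append(Writer, Bin) ->
    Ref = make_ref(),
    Writer ! {append, self(), Ref, Bin},
    receive {position, Ref, Pos} -> {ok, Pos} end.

flush(Writer) ->
    Ref = make_ref(),
    Writer ! {flush, self(), Ref},
    receive {flushed, Ref} -> ok end.

loop(Fd, Pos) ->
    receive
        {append, From, Ref, Bin} ->
            From ! {position, Ref, Pos},   % reply before writing
            ok = file:write(Fd, Bin),
            loop(Fd, Pos + byte_size(Bin));
        {flush, From, Ref} ->
            %% Appends received before this flush are already written,
            %% so replying means the caller's chunks are on disk.
            From ! {flushed, Ref},
            loop(Fd, Pos)
    end.

The real implementation batches queued chunks before writing; this
sketch writes one message at a time to keep it short.)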

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Issues blocking 1.2.0 release

2012-01-22 Thread Filipe David Manana
Noah, I was able to reproduce your issue by tweaking the test to create
more leaf revisions for a document:

diff --git a/test/etap/242-replication-many-leaves.t
b/test/etap/242-replication-many-leaves.t
index d8d3eb9..4eb4765 100755
--- a/test/etap/242-replication-many-leaves.t
+++ b/test/etap/242-replication-many-leaves.t
@@ -56,7 +56,7 @@ doc_ids() ->

 doc_num_conflicts(<<"doc1">>) -> 10;
 doc_num_conflicts(<<"doc2">>) -> 100;
-doc_num_conflicts(<<"doc3">>) -> 286.
+doc_num_conflicts(<<"doc3">>) -> 500.


 main(_) ->


With that change, I get exactly the same timeout as you do when the
test runs the push replication. It turns out that some _bulk_docs
requests are taking more than 30 seconds (the default replication
connection timeout), hence the replication request retry messages.
I verified this by timing the _bulk_docs handler to log how long it
takes:

diff --git a/src/couchdb/couch_httpd_db.erl b/src/couchdb/couch_httpd_db.erl
index d7ecb4a..442571d 100644
--- a/src/couchdb/couch_httpd_db.erl
+++ b/src/couchdb/couch_httpd_db.erl
@@ -297,6 +297,7 @@
db_req(#httpd{path_parts=[_,<<"_ensure_full_commit">>]}=Req, _Db) ->
 send_method_not_allowed(Req, "POST");

 db_req(#httpd{method='POST',path_parts=[_,<<"_bulk_docs">>]}=Req, Db) ->
+T0 = now(),
 couch_stats_collector:increment({httpd, bulk_requests}),
 couch_httpd:validate_ctype(Req, "application/json"),
 {JsonProps} = couch_httpd:json_body_obj(Req),
@@ -357,7 +358,9 @@
db_req(#httpd{method='POST',path_parts=[_,<<"_bulk_docs">>]}=Req, Db)
->
 {ok, Errors} = couch_db:update_docs(Db, Docs, Options,
replicated_changes),
 ErrorsJson =
 lists:map(fun update_doc_result_to_json/1, Errors),
-send_json(Req, 201, ErrorsJson)
+Rr = send_json(Req, 201, ErrorsJson),
+?LOG_ERROR("BULK DOCS took ~p ms~n",
[timer:now_diff(now(), T0) / 1000]),
+Rr
 end
 end;
 db_req(#httpd{path_parts=[_,<<"_bulk_docs">>]}=Req, _Db) ->


I was seeing _bulk_docs response times of over 50 seconds.

This convinces me there's nothing wrong with the codebase; the
timeouts just need to be increased:

diff --git a/test/etap/242-replication-many-leaves.t
b/test/etap/242-replication-many-leaves.t
index d8d3eb9..6508112 100755
--- a/test/etap/242-replication-many-leaves.t
+++ b/test/etap/242-replication-many-leaves.t
@@ -77,6 +77,7 @@ test() ->
 couch_server_sup:start_link(test_util:config_files()),
 ibrowse:start(),
 crypto:start(),
+couch_config:set("replicator", "connection_timeout", "9", false),

 Pairs = [
 {source_db_name(), target_db_name()},
@@ -287,6 +288,6 @@ replicate(Source, Target) ->
 receive
 {'DOWN', MonRef, process, Pid, Reason} ->
 etap:is(Reason, normal, "Replication finished successfully")
-after 30 ->
+after 90 ->
 etap:bail("Timeout waiting for replication to finish")
 end.

Alternatively, the test can be updated to create fewer revisions for the
document doc3. The current revision count is 286, but for the test's
purpose 205+ is enough, which should make it faster: each revision id is
roughly 34 characters, so 7000 (max url length) / length(DocRevision)
gives about 205 revisions per request.

If it's OK with you, updating the timeouts plus reducing the count from
286 to 210 is fine by me.
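(Outside the test suite, the same knob can be set in local.ini; the
90-second value below is just an example:

[replicator]
connection_timeout = 90000

Values are in milliseconds, and the default is 30000, which matches the
30-second timeouts observed above.)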



On Mon, Jan 23, 2012 at 12:00 AM, Noah Slater nsla...@tumbolia.org wrote:
 I'm just the dumb QA guy.

 If you have some diagnostics you want me to run on my machine, I am happy
 to.

 On Sun, Jan 22, 2012 at 11:31 PM, Filipe David Manana
 fdman...@apache.orgwrote:

 On Sun, Jan 22, 2012 at 7:20 PM, Noah Slater nsla...@tumbolia.org wrote:
  Works. How do we proceed?

  How long does the test run for you? On 2 different physical machines,
  it takes about 1 minute and 10 seconds for me.

  Perhaps some manual replication tests could confirm whether there's
  something wrong with the codebase or your environment, or whether
  simply increasing the timeout is harmless.

 
  On Sun, Jan 22, 2012 at 7:05 PM, Noah Slater nsla...@tumbolia.org
 wrote:
 
  OVAR 9000! (Testing now...)
 
 
  On Sun, Jan 22, 2012 at 6:56 PM, Filipe David Manana 
 fdman...@apache.orgwrote:
 
  On Sun, Jan 22, 2012 at 6:47 PM, Noah Slater nsla...@tumbolia.org
  wrote:
   No change, still fails.
 
  Noah, to try to find out if it's due to slowness of the machine or
  some other issue, do you think you can try to increase the following
  timeout in the test?
 
  diff --git a/test/etap/242-replication-many-leaves.t
  b/test/etap/242-replication-many-leaves.t
  index d8d3eb9..737cd31 100755
  --- a/test/etap/242-replication-many-leaves.t
  +++ b/test/etap/242-replication-many-leaves.t
   @@ -287,6 +287,6 @@ replicate(Source, Target) ->
       receive
       {'DOWN', MonRef, process, Pid, Reason} ->
           etap:is(Reason, normal, "Replication finished successfully")
   -    after 30 ->
   +    after 90 ->
           etap:bail("Timeout waiting for replication to finish")
       end.
 
  
   On Sun, Jan 22, 2012 at 6:08 PM, Noah Slater nsla...@tumbolia.org
  wrote:
  
  
   On Sun, Jan 22, 2012 at 6:01 PM, Filipe 

[jira] [Created] (COUCHDB-1387) couch_index_server:reset_indexes/2 does not use the correct utility function

2012-01-22 Thread Jason Smith (Created) (JIRA)
couch_index_server:reset_indexes/2 does not use the correct utility function


 Key: COUCHDB-1387
 URL: https://issues.apache.org/jira/browse/COUCHDB-1387
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.3
Reporter: Jason Smith
Priority: Trivial
 Attachments: 
0001-Use-the-correct-utility-function-to-get-the-index-di.patch

It looks like couch_index_util:index_dir(Module, DbName) is the new way to get
the path to the .<db>_design/ directory.

Passing an empty string as the module gives the desired result. So why not
use that?

1> couch_index_util:index_dir("", <<"mydb">>).
"/Users/jhs/src/iris/couchdb/tmp/lib/.mydb_design"

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (COUCHDB-1387) couch_index_server:reset_indexes/2 does not use the correct utility function

2012-01-22 Thread Jason Smith (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Smith updated COUCHDB-1387:
-

Attachment: 0001-Use-the-correct-utility-function-to-get-the-index-di.patch

The attached one-liner fixes this. The same change is at
https://github.com/jhs/couchdb/tree/COUCHDB-1387
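(The patch body isn't inlined here, but the shape of the change is
presumably along these lines; the "before" side is a hypothetical
reconstruction, not copied from the tree:

%% Before: reset_indexes/2 builds the design directory path by hand
%% (hypothetical reconstruction).
Path = couch_config:get("couchdb", "view_index_dir") ++ "/." ++
    binary_to_list(DbName) ++ "_design",
%% After: delegate to the shared helper with an empty module string.
Path = couch_index_util:index_dir("", DbName),

See the attached patch for the real diff.)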

 couch_index_server:reset_indexes/2 does not use the correct utility function
 

 Key: COUCHDB-1387
 URL: https://issues.apache.org/jira/browse/COUCHDB-1387
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.3
Reporter: Jason Smith
Priority: Trivial
 Attachments: 
 0001-Use-the-correct-utility-function-to-get-the-index-di.patch


 It looks like couch_index_util:index_dir(Module, DbName) is the new way to
 get the path to the .<db>_design/ directory.
 Passing an empty string as the module gives the desired result. So why not
 use that?
 1> couch_index_util:index_dir("", <<"mydb">>).
 "/Users/jhs/src/iris/couchdb/tmp/lib/.mydb_design"

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Custom Erlang compiler frontend

2012-01-22 Thread Paul Davis
I spent a few hours today writing a custom frontend to the Erlang
compiler to see what would happen if we built beam files in parallel
in a single Erlang VM. I've got it building CouchDB, although it's not
super well tested. Some build times:

couch_erlc: 0m19.481s
master: 1m11.445s
master -j4: 0m51.648s

Each build was run with:

$ git clean -fxd && ./bootstrap && ./configure && time make

Except for master -j4, which just adds -j4 to the make command.

It doesn't pass make distcheck yet, but I figured I'd throw it out
there in case anyone was interested. I'm not entirely convinced it's
worth the amount of code, but for anyone curious about what we'd gain
from using Rebar, this should be a pretty close approximation of the
build times.

Code is up here:

https://github.com/davisp/couchdb/tree/couch_erlc
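(For the curious, the core trick is small enough to sketch: fan the .erl
files out over one process each inside a single VM and collect the
results. This is illustrative only, not the actual couch_erlc code:

%% Compile every file in its own process; compile:file/2 is safe to run
%% concurrently, and one VM startup is amortized across all modules.
-module(pcompile_sketch).
-export([compile_all/2]).

compile_all(Files, OutDir) ->
    Parent = self(),
    Refs = [begin
        Ref = make_ref(),
        spawn_link(fun() ->
            Parent ! {Ref, F, compile:file(F, [{outdir, OutDir},
                                               report_errors])}
        end),
        Ref
    end || F <- Files],
    %% Collect one result per spawned worker, in spawn order.
    [receive {Ref, File, Result} -> {File, Result} end || Ref <- Refs].

One process per file is fine at CouchDB's module count; a larger tree
would want a bounded worker pool.)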


Re: Issues blocking 1.2.0 release

2012-01-22 Thread Noah Slater
Yep, that seems reasonable. Let me know when I can test again. :)

On Mon, Jan 23, 2012 at 1:34 AM, Filipe David Manana fdman...@apache.orgwrote:

 Noah, I was able to reproduce your issue by tweaking the test to create
 more leaf revisions for a document:

 diff --git a/test/etap/242-replication-many-leaves.t
 b/test/etap/242-replication-many-leaves.t
 index d8d3eb9..4eb4765 100755
 --- a/test/etap/242-replication-many-leaves.t
 +++ b/test/etap/242-replication-many-leaves.t
 @@ -56,7 +56,7 @@ doc_ids() ->

  doc_num_conflicts(<<"doc1">>) -> 10;
  doc_num_conflicts(<<"doc2">>) -> 100;
 -doc_num_conflicts(<<"doc3">>) -> 286.
 +doc_num_conflicts(<<"doc3">>) -> 500.


  main(_) ->


 With that change, I get exactly the same timeout as you do when the
 test runs the push replication. It turns out that some _bulk_docs
 requests are taking more than 30 seconds (the default replication
 connection timeout), hence the replication request retry messages.
 I verified this by timing the _bulk_docs handler to log how long it
 takes:

 diff --git a/src/couchdb/couch_httpd_db.erl
 b/src/couchdb/couch_httpd_db.erl
 index d7ecb4a..442571d 100644
 --- a/src/couchdb/couch_httpd_db.erl
 +++ b/src/couchdb/couch_httpd_db.erl
 @@ -297,6 +297,7 @@
 db_req(#httpd{path_parts=[_,<<"_ensure_full_commit">>]}=Req, _Db) ->
  send_method_not_allowed(Req, "POST");

  db_req(#httpd{method='POST',path_parts=[_,<<"_bulk_docs">>]}=Req, Db) ->
 +T0 = now(),
  couch_stats_collector:increment({httpd, bulk_requests}),
  couch_httpd:validate_ctype(Req, "application/json"),
  {JsonProps} = couch_httpd:json_body_obj(Req),
 @@ -357,7 +358,9 @@
 db_req(#httpd{method='POST',path_parts=[_,<<"_bulk_docs">>]}=Req, Db)
 ->
  {ok, Errors} = couch_db:update_docs(Db, Docs, Options,
 replicated_changes),
  ErrorsJson =
  lists:map(fun update_doc_result_to_json/1, Errors),
 -send_json(Req, 201, ErrorsJson)
 +Rr = send_json(Req, 201, ErrorsJson),
 +?LOG_ERROR("BULK DOCS took ~p ms~n",
 [timer:now_diff(now(), T0) / 1000]),
 +Rr
  end
  end;
  db_req(#httpd{path_parts=[_,<<"_bulk_docs">>]}=Req, _Db) ->


 I was seeing _bulk_docs response times of over 50 seconds.

 This convinces me there's nothing wrong with the codebase; the
 timeouts just need to be increased:

 diff --git a/test/etap/242-replication-many-leaves.t
 b/test/etap/242-replication-many-leaves.t
 index d8d3eb9..6508112 100755
 --- a/test/etap/242-replication-many-leaves.t
 +++ b/test/etap/242-replication-many-leaves.t
 @@ -77,6 +77,7 @@ test() ->
  couch_server_sup:start_link(test_util:config_files()),
  ibrowse:start(),
  crypto:start(),
 +couch_config:set("replicator", "connection_timeout", "9", false),

  Pairs = [
  {source_db_name(), target_db_name()},
 @@ -287,6 +288,6 @@ replicate(Source, Target) ->
   receive
  {'DOWN', MonRef, process, Pid, Reason} ->
  etap:is(Reason, normal, "Replication finished successfully")
 -after 30 ->
 +after 90 ->
  etap:bail("Timeout waiting for replication to finish")
  end.

 Alternatively, the test can be updated to create fewer revisions for the
 document doc3. The current revision count is 286, but for the test's
 purpose 205+ is enough, which should make it faster: each revision id is
 roughly 34 characters, so 7000 (max url length) / length(DocRevision)
 gives about 205 revisions per request.

 If it's OK with you, updating the timeouts plus reducing the count from
 286 to 210 is fine by me.



 On Mon, Jan 23, 2012 at 12:00 AM, Noah Slater nsla...@tumbolia.org
 wrote:
  I'm just the dumb QA guy.
 
  If you have some diagnostics you want me to run on my machine, I am happy
  to.
 
  On Sun, Jan 22, 2012 at 11:31 PM, Filipe David Manana
  fdman...@apache.orgwrote:
 
  On Sun, Jan 22, 2012 at 7:20 PM, Noah Slater nsla...@tumbolia.org
 wrote:
   Works. How do we proceed?
 
   How long does the test run for you? On 2 different physical machines,
   it takes about 1 minute and 10 seconds for me.

   Perhaps some manual replication tests could confirm whether there's
   something wrong with the codebase or your environment, or whether
   simply increasing the timeout is harmless.
 
  
   On Sun, Jan 22, 2012 at 7:05 PM, Noah Slater nsla...@tumbolia.org
  wrote:
  
   OVAR 9000! (Testing now...)
  
  
   On Sun, Jan 22, 2012 at 6:56 PM, Filipe David Manana 
  fdman...@apache.orgwrote:
  
   On Sun, Jan 22, 2012 at 6:47 PM, Noah Slater nsla...@tumbolia.org
   wrote:
No change, still fails.
  
   Noah, to try to find out if it's due to slowness of the machine or
   some other issue, do you think you can try to increase the following
   timeout in the test?
  
   diff --git a/test/etap/242-replication-many-leaves.t
   b/test/etap/242-replication-many-leaves.t
   index d8d3eb9..737cd31 100755
   --- a/test/etap/242-replication-many-leaves.t
   +++ b/test/etap/242-replication-many-leaves.t
    @@ -287,6 +287,6 @@ replicate(Source, Target) ->
   receive
   {'DOWN', MonRef, process, Pid, Reason} -
   etap:is(Reason, normal, Replication finished 

Re: [Windows] proposed binary build changes

2012-01-22 Thread Jeroen Janssen
Hi,

It seems to be OK, since I get:
  All 2 tests passed.

I also tried compaction again, but it didn't compress the 800Mb .couch
file, so I guess something else might be needed as well?

Best regards,

Jeroen

On Sun, Jan 22, 2012 at 11:53 PM, Dave Cottlehuber d...@muse.net.nz wrote:
 On 22 January 2012 15:49, Dave Cottlehuber d...@muse.net.nz wrote:
 On 22 January 2012 13:28, Jeroen Janssen jeroen.jans...@gmail.com wrote:
 Hi,

 Just a quick check since I am not able to get (snappy) compression
 working on windows.

 Hi Jeroen,

 Thanks for reporting this. AFAICT the snappy_nif.dll is not being copied
 during make dist or make install.

 IIRC you ran into this last year on master, which resulted in the
 COUCHDB-1197 fixes.
 So I'm not sure what is missing on 1.2.x, or if it's more likely an
 issue with my packaging steps.

 I'll keep you posted.

 BTW compression is on by default so no need to tweak ini files.

 A+
 Dave

 Hi Jeroen,

 While I look for why this is not working in the build, can you copy
 snappy_nif.dll [1] into $COUCH/lib/snappy-1.0.3/priv/ and then test
 with snappy_tests.erl, courtesy of Filipe [2]?

 3> pwd().
 C:/couch/1.2.0a-8d83b39-git_otp_R15B_SDK7.1/bin
 ok
 4> c("/couch/snappy_tests").
 {ok,snappy_tests}
 5> snappy_tests:test().
  All 2 tests passed.
 ok

 Your higher-level compression tests should be fine.

 [1]: https://www.dropbox.com/s/jeifcxpbtpo78ak/snapshots/20120116?v=l
 [2] 
 https://github.com/fdmanana/snappy-erlang-nif/blob/master/test/snappy_tests.erl

 A+
 Dave