[jira] [Updated] (COUCHDB-1426) error while building with 2 spidermonkey installed

2012-03-12 Thread Benoit Chesneau (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-1426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoit Chesneau updated COUCHDB-1426:
-

Attachment: 0001-close-COUCHDB-1426.patch

Iteration on Paul's last patch. Some typos were preventing the CFLAGS from
being correct.

> error while building with 2 spidermonkey installed
> --
>
> Key: COUCHDB-1426
> URL: https://issues.apache.org/jira/browse/COUCHDB-1426
> Project: CouchDB
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.1.1, 1.2, 1.3
>Reporter: Benoit Chesneau
>Assignee: Benoit Chesneau
>Priority: Critical
> Attachments: 0001-close-COUCHDB-1426.patch, 
> 0001-fix-build-with-custom-path-close-COUCHDB-1426.patch, 
> 0001-fix-build-with-custom-path-close-COUCHDB-1426.patch, 
> 0001-fix-build-with-custom-path-close-COUCHDB-1426.patch, 
> 0001-fix-build-with-custom-path-close-COUCHDB-1426.patch, 
> 0001-fix-build-with-custom-path-close-COUCHDB-1426.patch, COUCHDB-1426.patch
>
>
> Context:
> To bench the differences between different versions of couchdb I had to test 
> against spidermonkey 1.7 and 1.8.5 . 1.8.5 is installed globally in 
> /usr/local  while the 1.7 version is installed on a temporary path. 
> Problem:
> Using the --with-js-include & --with-js-lib configure options isn't enough to 
> use the 1.7 version; it still wants to use SpiderMonkey 1.8.5. Removing 
> js-config from the path doesn't change anything. I had to uninstall 
> SpiderMonkey 1.8.5 to get these settings working.
> Error result:
> $ ./configure 
> --with-erlang=/Users/benoitc/local/otp-r14b04/lib/erlang/usr/include 
> --with-js-include=/Users/benoitc/local/js-1.7.0/include 
> --with-js-lib=/Users/benoitc/local/js-1.7.0/lib64
> checking for a BSD-compatible install... /usr/bin/install -c
> checking whether build environment is sane... yes
> checking for a thread-safe mkdir -p... build-aux/install-sh -c -d
> checking for gawk... no
> checking for mawk... no
> checking for nawk... no
> checking for awk... awk
> checking whether make sets $(MAKE)... yes
> checking for gcc... gcc
> checking for C compiler default output file name... a.out
> checking whether the C compiler works... yes
> checking whether we are cross compiling... no
> checking for suffix of executables... 
> checking for suffix of object files... o
> checking whether we are using the GNU C compiler... yes
> checking whether gcc accepts -g... yes
> checking for gcc option to accept ISO C89... none needed
> checking for style of include used by make... GNU
> checking dependency style of gcc... gcc3
> checking build system type... i386-apple-darwin11.3.0
> checking host system type... i386-apple-darwin11.3.0
> checking for a sed that does not truncate output... /usr/bin/sed
> checking for grep that handles long lines and -e... /usr/bin/grep
> checking for egrep... /usr/bin/grep -E
> checking for fgrep... /usr/bin/grep -F
> checking for ld used by gcc... 
> /usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld
> checking if the linker 
> (/usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld) is GNU ld... no
> checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm
> checking the name lister (/usr/bin/nm) interface... BSD nm
> checking whether ln -s works... yes
> checking the maximum length of command line arguments... 196608
> checking whether the shell understands some XSI constructs... yes
> checking whether the shell understands "+="... yes
> checking for /usr/llvm-gcc-4.2/libexec/gcc/i686-apple-darwin11/4.2.1/ld 
> option to reload object files... -r
> checking how to recognize dependent libraries... pass_all
> checking for ar... ar
> checking for strip... strip
> checking for ranlib... ranlib
> checking command to parse /usr/bin/nm output from gcc object... ok
> checking for dsymutil... dsymutil
> checking for nmedit... nmedit
> checking for lipo... lipo
> checking for otool... otool
> checking for otool64... no
> checking for -single_module linker flag... yes
> checking for -exported_symbols_list linker flag... yes
> checking how to run the C preprocessor... gcc -E
> checking for ANSI C header files... yes
> checking for sys/types.h... yes
> checking for sys/stat.h... yes
> checking for stdlib.h... yes
> checking for string.h... yes
> checking for memory.h... yes
> checking for strings.h... yes
> checking for inttypes.h... yes
> checking for stdint.h... yes
> checking for unistd.h... yes
> checking for dlfcn.h... yes
> checking for objdir... .libs
> checking if gcc supports -fno-rtti -fno-exceptions... no
> checking for gcc option to produce PIC... -fno-common -DPIC
> checking if gcc PIC flag -fno-common -DPIC works... yes
> checking if gcc static flag -static works... no
> checking if gcc supports -c -o file.o... yes
> checking if gcc su
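
A hedged sketch of forcing the build onto the 1.7 tree when a 1.8.5 is
installed globally: the paths come from the report above, but the explicit
CFLAGS/LDFLAGS override and the couchjs binary location are assumptions, not
the attached patch itself.

```shell
# Paths taken from the report above; adjust for your machine.
JS=/Users/benoitc/local/js-1.7.0

# Pass CFLAGS/LDFLAGS explicitly so a js-config on $PATH (pointing at the
# globally installed 1.8.5) cannot override the requested include/lib dirs.
configure_with_js17() {
  ./configure \
    --with-erlang=/Users/benoitc/local/otp-r14b04/lib/erlang/usr/include \
    --with-js-include="$JS/include" \
    --with-js-lib="$JS/lib64" \
    CFLAGS="-I$JS/include" \
    LDFLAGS="-L$JS/lib64"
}
# run with: configure_with_js17
# Afterwards, on OS X, check which SpiderMonkey the built couchjs actually
# linked (binary location is a guess for a 1.2-era source tree):
#   otool -L src/couchdb/priv/couchjs | grep -i js
```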

[jira] [Commented] (COUCHDB-1436) Sometimes a newly created document does not appear in the database although operation for its creating returns "ok"=true

2012-03-12 Thread Dave Cottlehuber (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227698#comment-13227698
 ] 

Dave Cottlehuber commented on COUCHDB-1436:
---

Hi Oleg,

Thanks for reporting this. If doc1 is identical, then you're correct that 
CouchDB, post-deletion/compaction, will not "un-delete" the revision.

Can you confirm; if so I'll close this one as a duplicate of COUCHDB-1415?

> Sometimes a newly created document does not appear in the database although 
> operation for its creating returns "ok"=true
> 
>
> Key: COUCHDB-1436
> URL: https://issues.apache.org/jira/browse/COUCHDB-1436
> Project: CouchDB
>  Issue Type: Bug
>  Components: Database Core
>Affects Versions: 1.1
>Reporter: Oleg Rostanin
>
> Sometimes after creating a document via HTTP request the newly created document 
> does not appear in the db (both in the Web GUI and when requested through the API), 
> although the response of the creation request returned ok=true.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Update Conflict for PUT/DELETE in _replicator

2012-03-12 Thread Stefan Kögl
On Mon, Mar 12, 2012 at 2:20 PM, Robert Newson  wrote:
> I'd welcome a chance to access this database. couchdb admin level
> access is sufficient for now, email me directly at rnew...@apache.org.

For general information: The problem spontaneously disappeared when I
tried to reproduce it with the admin I created for rnewson... I'll
report back in case it happens again.


-- Stefan


[jira] [Commented] (COUCHDB-1436) Sometimes a newly created document does not appear in the database although operation for its creating returns "ok"=true

2012-03-12 Thread Marcello Nuccio (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227575#comment-13227575
 ] 

Marcello Nuccio commented on COUCHDB-1436:
--

Isn't it the same as COUCHDB-1415 ?





[jira] [Commented] (COUCHDB-1436) Sometimes a newly created document does not appear in the database although operation for its creating returns "ok"=true

2012-03-12 Thread Adam Kocoloski (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227555#comment-13227555
 ] 

Adam Kocoloski commented on COUCHDB-1436:
-

What does the HTTP response look like when you submit a GET request for the 
document after performing the steps that you outlined?





Re: Update Conflict for PUT/DELETE in _replicator

2012-03-12 Thread Robert Newson
Stefan,

I'd welcome a chance to access this database. couchdb admin level
access is sufficient for now, email me directly at rnew...@apache.org.

B.

On 12 March 2012 13:09, Stefan Kögl  wrote:
> On Mon, Mar 12, 2012 at 2:01 PM, Jason Smith  wrote:
>> On Mon, Mar 12, 2012 at 12:57 PM, Stefan Kögl  wrote:
>>> In the meantime I tried copying the _replicator database to another
>>> instance, where I could delete the entry without problems. However it
>>> still doesn't work on the initial instance. If one of the committers
>>> is interested, I could organize either remote access via HTTP, or
>>> shell access to the machine it is running on.
>>
>> Hm, if you copied the _replicator.couch file from the 1.2.x prerelease
>> to version 1.1 then it will not support the newer file format.
>>
>> You could replicate to it, or since replication docs have no
>> attachments, just query _all_docs, massage that into a _bulk_docs, and
>> post it to the other couch.
>
> That's not the problem. I copied the database to another 1.2.x
> CouchDB, which could read and update it correctly.
> The problem is the original 1.2.x instance (both are on the current
> 1.2.x branch, btw) where I created the entry but can not update /
> delete it anymore.
>
> -- Stefan


Re: Update Conflict for PUT/DELETE in _replicator

2012-03-12 Thread Stefan Kögl
On Mon, Mar 12, 2012 at 2:01 PM, Jason Smith  wrote:
> On Mon, Mar 12, 2012 at 12:57 PM, Stefan Kögl  wrote:
>> In the meantime I tried copying the _replicator database to another
>> instance, where I could delete the entry without problems. However it
>> still doesn't work on the initial instance. If one of the committers
>> is interested, I could organize either remote access via HTTP, or
>> shell access to the machine it is running on.
>
> Hm, if you copied the _replicator.couch file from the 1.2.x prerelease
> to version 1.1 then it will not support the newer file format.
>
> You could replicate to it, or since replication docs have no
> attachments, just query _all_docs, massage that into a _bulk_docs, and
> post it to the other couch.

That's not the problem. I copied the database to another 1.2.x
CouchDB, which could read and update it correctly.
The problem is the original 1.2.x instance (both are on the current
1.2.x branch, btw) where I created the entry but can not update /
delete it anymore.

-- Stefan


Re: Update Conflict for PUT/DELETE in _replicator

2012-03-12 Thread Jason Smith
On Mon, Mar 12, 2012 at 12:57 PM, Stefan Kögl  wrote:
> In the meantime I tried copying the _replicator database to another
> instance, where I could delete the entry without problems. However it
> still doesn't work on the initial instance. If one of the committers
> is interested, I could organize either remote access via HTTP, or
> shell access to the machine it is running on.

Hm, if you copied the _replicator.couch file from the 1.2.x prerelease
to version 1.1 then it will not support the newer file format.

You could replicate to it, or since replication docs have no
attachments, just query _all_docs, massage that into a _bulk_docs, and
post it to the other couch.
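
The _all_docs-to-_bulk_docs copy described above can be sketched roughly as
follows; SRC/DST are placeholder URLs, and jq is assumed to be available.

```shell
SRC="http://127.0.0.1:5984/_replicator"
DST="http://127.0.0.1:5986/_replicator"

# Turn an _all_docs?include_docs=true response into a _bulk_docs body,
# skipping design docs and stripping _rev so the target treats each entry
# as a brand-new document.
to_bulk_docs() {
  jq '{docs: [.rows[].doc
              | select(._id | startswith("_design/") | not)
              | del(._rev)]}'
}

# Usage, against live servers:
#   curl -s "$SRC/_all_docs?include_docs=true" | to_bulk_docs \
#     | curl -s -X POST "$DST/_bulk_docs" \
#            -H "Content-Type: application/json" -d @-
```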

-- 
Iris Couch


Re: Update Conflict for PUT/DELETE in _replicator

2012-03-12 Thread Stefan Kögl
In the meantime I tried copying the _replicator database to another
instance, where I could delete the entry without problems. However it
still doesn't work on the initial instance. If one of the committers
is interested, I could organize either remote access via HTTP, or
shell access to the machine it is running on.

-- Stefan



On Fri, Mar 2, 2012 at 4:30 PM, Stefan Kögl  wrote:
> On Fri, Mar 2, 2012 at 4:06 PM, Jan Lehnardt  wrote:
>> I just created a replication doc under 1.1.1 and then copied the
>> _replicator.couch file to a 1.2.x. On update I got the expected result
>> Robert also got ("Only the replicator can edit replication documents
>> that are in the triggered state."). A curl -X DELETE on the doc with
>> ?rev=4-abcd... (no quotes) also worked.
>
> The document was created with 1.2.x, from around the time of the second RC.
>
> I also tried with quotes and got
>
> $ curl -sv -X DELETE
> "http://stefan:*@127.0.0.1:5984/_replicator/mygpo?rev=\"131-57b4da8d3163468cb0bbf4fd30c87832\""
> * About to connect() to 127.0.0.1 port 5984 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 5984 (#0)
> * Server auth using Basic with user 'stefan'
>> DELETE /_replicator/mygpo?rev="131-57b4da8d3163468cb0bbf4fd30c87832" HTTP/1.1
>> Authorization: Basic **
>> User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k 
>> zlib/1.2.3.3 libidn/1.15
>> Host: 127.0.0.1:5984
>> Accept: */*
>>
> < HTTP/1.1 500 Internal Server Error
> < Server: CouchDB/1.2.0 (Erlang OTP/R14B04)
> < Date: Fri, 02 Mar 2012 15:18:31 GMT
> < Content-Type: text/plain; charset=utf-8
> < Content-Length: 44
> < Cache-Control: must-revalidate
> <
> {"error":"unknown_error","reason":"badarg"}
> * Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
>
> After that I also tried compacting the _replicator database, but also
> that didn't change anything.
>
>
> -- Stefan


Re: Crash of CouchDB 1.2.x

2012-03-12 Thread Jason Smith
I seem to remember that, say, ext2 had more or less constant-time unlinking.

On Mon, Mar 12, 2012 at 10:32 AM, Robert Newson  wrote:
> I can confirm that XFS is aggressive when deleting large files (other
> i/o requests are slow or blocked while it does it). It has been
> necessary to iteratively truncate a file instead of a simple 'rm' in
> production to avoid that problem. Increasing the size of extent
> preallocation ought to help considerably but I've not yet deployed
> that change. I *can* confirm that you can't 'ionice' the rm call,
> though.
>
> B.
>
> On 12 March 2012 05:00, Randall Leeds  wrote:
>> On Mar 11, 2012 7:40 PM, "Jason Smith"  wrote:
>>>
>>> On Mon, Mar 12, 2012 at 8:44 AM, Randall Leeds 
>> wrote:
>>> > I'm not sure what else you could provide after the fact. If your couch
>>> > came back online automatically, and did so quickly, I would expect to
>>> > see very long response times while the disk was busy freeing the old,
>>> > un-compacted file. We have had some fixes in the last couple releases
>>> > to address similar issues, but maybe there's something lurking still.
>>> > I've got no other ideas/leads at this time.
>>>
>>> Another long shot, but you could try a filesystem that doesn't
>>> synchronously reclaim the space, like (IIRC) XFS, btrfs, or I think
>>> ext2.
>>
>> I think you're referring to extents, which, IIRC, allow large, contiguous
>> sections of a file to be allocated and freed with less bookkeeping and,
>> therefore, fewer writes. This behavior is not any more or less synchronous.
>>
>> In my production experience, xfs does not show much benefit from this
>> because any machine that contains more than one growing database
>> still ends up with file fragmentation that limits the gains from
>> extents.
>>
>> I suspect, but have not tried to verify, that very large RAID stripe sizes
>> that force preallocation of larger blocks might deliver some gains.
>>
>> I have an open ticket for a manual delete option which was designed to
>> allow deletion of trashed files to occur during low volume hours or using
>> tools like ionice.  Unfortunately, I never got a chance to experiment with
>> that set up in production, though I have seen ionice help significantly to
>> keep request latency down when doing large deletes (just not in this
>> particular use case).



-- 
Iris Couch


Re: Crash of CouchDB 1.2.x

2012-03-12 Thread Robert Newson
I can confirm that XFS is aggressive when deleting large files (other
i/o requests are slow or blocked while it does it). It has been
necessary to iteratively truncate a file instead of a simple 'rm' in
production to avoid that problem. Increasing the size of extent
preallocation ought to help considerably but I've not yet deployed
that change. I *can* confirm that you can't 'ionice' the rm call,
though.
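
The iterative truncation mentioned above can be sketched like this; the step
size, sleep interval, and GNU coreutils usage (stat -c, truncate) are
assumptions to adapt, not the exact production script.

```shell
#!/bin/sh
# Iteratively shrink a large file before unlinking it, so the filesystem
# frees extents in small steps instead of one long blocking operation.
safe_rm() {
  f=$1
  step=${2:-$((1024 * 1024 * 1024))}   # bytes freed per step; default 1 GiB
  size=$(stat -c %s "$f") || return 1  # GNU stat; use `stat -f %z` on BSD
  while [ "$size" -gt 0 ]; do
    if [ "$size" -gt "$step" ]; then
      size=$((size - step))
    else
      size=0
    fi
    truncate -s "$size" "$f"           # shrink in place
    sleep 1                            # let other I/O through between steps
  done
  rm -f "$f"                           # unlink once the file is empty
}
# run with: safe_rm /var/lib/couchdb/big.couch
```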

B.

On 12 March 2012 05:00, Randall Leeds  wrote:
> On Mar 11, 2012 7:40 PM, "Jason Smith"  wrote:
>>
>> On Mon, Mar 12, 2012 at 8:44 AM, Randall Leeds 
> wrote:
>> > I'm not sure what else you could provide after the fact. If your couch
>> > came back online automatically, and did so quickly, I would expect to
>> > see very long response times while the disk was busy freeing the old,
>> > un-compacted file. We have had some fixes in the last couple releases
>> > to address similar issues, but maybe there's something lurking still.
>> > I've got no other ideas/leads at this time.
>>
>> Another long shot, but you could try a filesystem that doesn't
>> synchronously reclaim the space, like (IIRC) XFS, btrfs, or I think
>> ext2.
>
> I think you're referring to extents, which, IIRC, allow large, contiguous
> sections of a file to be allocated and freed with less bookkeeping and,
> therefore, fewer writes. This behavior is not any more or less synchronous.
>
> In my production experience, xfs does not show much benefit from this
> because any machine that contains more than one growing database
> still ends up with file fragmentation that limits the gains from
> extents.
>
> I suspect, but have not tried to verify, that very large RAID stripe sizes
> that force preallocation of larger blocks might deliver some gains.
>
> I have an open ticket for a manual delete option which was designed to
> allow deletion of trashed files to occur during low volume hours or using
> tools like ionice.  Unfortunately, I never got a chance to experiment with
> that set up in production, though I have seen ionice help significantly to
> keep request latency down when doing large deletes (just not in this
> particular use case).


[jira] [Commented] (COUCHDB-1436) Sometimes a newly created document does not appear in the database although operation for its creating returns "ok"=true

2012-03-12 Thread Oleg Rostanin (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227376#comment-13227376
 ] 

Oleg Rostanin commented on COUCHDB-1436:


I'm using the C# "Relax" library for querying CouchDB. Here is a trace.

[Mon, 12 Mar 2012 08:44:24 GMT] [debug] [<0.21431.4>] 'PUT' 
/com%2Fdeere%2Frostaninnb%2Fconfig/igreen_machineconnector {1,

1} from "127.0.0.1"
Headers: [{'Authorization',"Basic YWRtaW46aUdyZWVuTUM="},
  {'Content-Length',"392"},
  {'Content-Type',"application/json; charset=utf-8"},
  {"Expect","100-continue"},
  {'Host',"localhost:5984"}]
[Mon, 12 Mar 2012 08:44:24 GMT] [debug] [<0.21431.4>] OAuth Params: []
[Mon, 12 Mar 2012 08:44:24 GMT] [info] [<0.21431.4>] 127.0.0.1 - - 'PUT' 
/com%2Fdeere%2Frostaninnb%2Fconfig/igreen_machineconnector 201

I was testing the following use case:
- I have a local document doc1 in the db1
- Currently no replication is running
- I delete the local document doc1
- I trigger compaction
- I create a new doc1
- I get "ok=true" in my Relax-Client as an answer.

Is this helpful? Maybe I'm doing something wrong?
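
The steps above could be reproduced from the command line roughly as follows;
HOST and the db/doc names are placeholders, and the sketch assumes a local
admin-party CouchDB plus jq for extracting the _rev.

```shell
HOST="http://127.0.0.1:5984"
DB="repro1436"

repro() {
  curl -s -X PUT "$HOST/$DB" > /dev/null                 # fresh database
  curl -s -X PUT "$HOST/$DB/doc1" -d '{"v":1}'           # create doc1
  rev=$(curl -s "$HOST/$DB/doc1" | jq -r ._rev)
  curl -s -X DELETE "$HOST/$DB/doc1?rev=$rev"            # delete doc1
  curl -s -X POST "$HOST/$DB/_compact" \
       -H "Content-Type: application/json"               # trigger compaction
  curl -s -X PUT "$HOST/$DB/doc1" -d '{"v":2}'           # recreate doc1
  curl -s "$HOST/$DB/doc1"                               # the GET Adam asked about
}
# run with: repro
```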

