[jira] [Commented] (COUCHDB-1149) Documentation of list functions doesn't explain how to use the 0.9 style

2011-05-11 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13031646#comment-13031646
 ] 

Sebastian Cohnen commented on COUCHDB-1149:
---

I'm not totally sure, but I think the list function API simply changed and 
there is no way to have a 0.9-style list function in later versions of CouchDB.
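For reference, a post-0.9 list function lives under the design document's 
lists key and pulls rows explicitly with getRow()/send(). A minimal sketch of 
the 0.10+ style (the function name and markup are illustrative; the stand-in 
globals at the bottom only exist so the sketch can be exercised outside 
CouchDB):

```javascript
// Sketch of a 0.10+-style list function, as it would appear under the
// "lists" key of a design document. getRow() and send() are normally
// provided by the CouchDB view server at runtime.
function listAsHtml(head, req) {
  send("<ul>");
  var row;
  while ((row = getRow()) !== null) {
    send("<li>" + row.key + "</li>");
  }
  send("</ul>");
}

// Stand-ins for the view-server globals, for local experimentation only.
var output = [];
function send(chunk) { output.push(chunk); }
var rows = [{ key: "a" }, { key: "b" }];
function getRow() { return rows.length ? rows.shift() : null; }
```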

 Documentation of list functions doesn't explain how to use the 0.9 style
 

 Key: COUCHDB-1149
 URL: https://issues.apache.org/jira/browse/COUCHDB-1149
 Project: CouchDB
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 0.9
Reporter: James Howe
Priority: Minor
  Labels: list_function

 http://wiki.apache.org/couchdb/Formatting_with_Show_and_List#Listing_Views_with_CouchDB_0.9
  describes a style of list functions, but not where they should go in a 
 design document. Putting them under the lists key results in them being 
 invoked as the 0.10 versions.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (COUCHDB-1149) Documentation of list functions doesn't explain how to use the 0.9 style

2011-05-11 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13031672#comment-13031672
 ] 

Sebastian Cohnen commented on COUCHDB-1149:
---

@Robert: based on "Putting them under the lists key results in them being 
invoked as the 0.10 versions." I thought the question was how to use the old 
API in newer versions of CouchDB...

 Documentation of list functions doesn't explain how to use the 0.9 style
 

 Key: COUCHDB-1149
 URL: https://issues.apache.org/jira/browse/COUCHDB-1149
 Project: CouchDB
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 0.9
Reporter: James Howe
Priority: Minor
  Labels: list_function




[jira] Commented: (COUCHDB-1071) Integer UUIDS (1, 2, 3, ...)

2011-02-22 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12997679#comment-12997679
 ] 

Sebastian Cohnen commented on COUCHDB-1071:
---

The whole point of a universally unique identifier (UUID) is that it is 
universally unique, even across servers. Sequential IDs will break if you have 
more than one server, and concurrent writers will cause problems as well...

If you need sequential IDs, you can create them yourself, but I highly doubt 
that this kind of sequential integer ID makes sense for CouchDB.
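The collision is easy to see: two nodes that each count 1, 2, 3, ... 
independently will hand out overlapping IDs for different documents. A 
hypothetical sketch:

```javascript
// Two independent nodes each generating "sequential" IDs collide as soon
// as both have written at least one document. Purely illustrative.
function makeSequentialGenerator() {
  var next = 1;
  return function () { return String(next++); };
}

var nodeA = makeSequentialGenerator();
var nodeB = makeSequentialGenerator();

var idsA = [nodeA(), nodeA(), nodeA()]; // "1", "2", "3"
var idsB = [nodeB(), nodeB()];          // "1", "2" -- same IDs, different docs

// IDs handed out by both nodes, i.e. guaranteed collisions on replication.
var collisions = idsA.filter(function (id) { return idsB.indexOf(id) !== -1; });
```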

 Integer UUIDS (1, 2, 3, ...)
 

 Key: COUCHDB-1071
 URL: https://issues.apache.org/jira/browse/COUCHDB-1071
 Project: CouchDB
  Issue Type: Improvement
  Components: Database Core
Affects Versions: 1.0.2
 Environment: All
Reporter: adisk
  Labels: algorithm, uuid

 Please create integer UUIDs.
 What is needed:
 - integer keys
 - autogenerated by CouchDB
 - sequential
 Like 1, 2, 3, 4, 5, ...





[jira] Commented: (COUCHDB-1054) Better couch_log performance

2011-02-03 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-1054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12990007#comment-12990007
 ] 

Sebastian Cohnen commented on COUCHDB-1054:
---

Although I don't know exactly what Filipe improved here, this is not about the 
memory consumption problems with, e.g., debug-level logging, right?

 Better couch_log performance
 

 Key: COUCHDB-1054
 URL: https://issues.apache.org/jira/browse/COUCHDB-1054
 Project: CouchDB
  Issue Type: Improvement
Reporter: Filipe Manana
Assignee: Filipe Manana
 Attachments: COUCHDB-1054-2.patch, COUCHDB-1054-3.patch, 
 COUCHDB-1054.patch


 Building the messages to write to the console and the log file can be done 
 outside the couch_log gen_event. This significantly increases the parallelism 
 when many processes log messages. The following relaximation test graph shows 
 a throughput increase using the attached patch against current trunk:
 http://graphs.mikeal.couchone.com/#/graph/0379dbdaef29b1c0fbf03421540243f7





[jira] Commented: (COUCHDB-993) Replication is crashing when changes feed was consumed

2010-12-29 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975817#action_12975817
 ] 

Sebastian Cohnen commented on COUCHDB-993:
--

I cannot reproduce this error locally anymore. However, I still see some odd 
peaks in resource usage (memory/CPU). For now it looks okay, but as I told 
Paul on IRC a couple of days ago, I'm going to perform some more extensive 
testing and profiling to figure out where and - maybe with your help - why 
these peaks occur. I'm not sure if these usage patterns are expected or not, 
but I hope there is some more room for improvement here.

 Replication is crashing when changes feed was consumed
 --

 Key: COUCHDB-993
 URL: https://issues.apache.org/jira/browse/COUCHDB-993
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.0.1, 1.0.2
Reporter: Sebastian Cohnen
 Fix For: 1.0.2, 1.1

 Attachments: couchdb-993.patch


 Yesterday I hit a bug where pull replication dies, resulting in 
 a {exit,{json_encode,{bad_term,<0.133.0>}}} error (CouchDB is trying to 
 encode a PID into JSON).
 Adam and Paul had a look at this issue yesterday and they found the 
 underlying issue: There was a missing clause catching the exit message when 
 the changes feed was consumed and ibrowse closes the HTTP connection for that 
 feed.
 Adam wrote a quick patch yesterday, which I'll append here too (applies 
 cleanly to 1.0.x at time of writing).
 (Sorry for any inaccuracy, I only understood the issue partially)




[jira] Commented: (COUCHDB-968) Duplicated IDs in _all_docs

2010-12-28 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975597#action_12975597
 ] 

Sebastian Cohnen commented on COUCHDB-968:
--

AFAIK there is no summary. All the relevant information is here in these 
comments (and should be, IMO). There were some additional discussions on IRC, 
but AFAIK Adam's patch (and Paul's merge) hasn't been committed yet. From what 
I understand, the only part still missing to fully fix this issue (besides the 
commit to the affected branches) is figuring out how to handle views that got 
corrupted by this bug.

 Duplicated IDs in _all_docs
 ---

 Key: COUCHDB-968
 URL: https://issues.apache.org/jira/browse/COUCHDB-968
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 0.10.1, 0.10.2, 0.11.1, 0.11.2, 1.0, 1.0.1, 1.0.2
 Environment: any
Reporter: Sebastian Cohnen
Assignee: Adam Kocoloski
Priority: Blocker
 Fix For: 0.11.3, 1.0.2, 1.1


 We have a database, which is causing serious trouble with compaction and 
 replication (huge memory and cpu usage, often causing couchdb to crash b/c 
 all system memory is exhausted). Yesterday we discovered that db/_all_docs is 
 reporting duplicated IDs (see [1]). Until a few minutes ago we thought that 
 there are only few duplicates but today I took a closer look and I found 10 
 IDs which sum up to a total of 922 duplicates. Some of them have only 1 
 duplicate, others have hundreds.
 Some facts about the database in question:
 * ~13k documents, with 3-5k revs each
 * all duplicated documents are in conflict (with 1 up to 14 conflicts)
 * compaction is run on a daily basis
 * several thousand updates per hour
 * multi-master setup with pull replication from each other
 * delayed_commits=false on all nodes
 * used couchdb versions 1.0.0 and 1.0.x (*)
 Unfortunately the database's contents are confidential and I'm not allowed to 
 publish it.
 [1]: Part of http://localhost:5984/DBNAME/_all_docs
 ...
 {id:9997,key:9997,value:{rev:6096-603c68c1fa90ac3f56cf53771337ac9f}},
 {id:,key:,value:{rev:6097-3c873ccf6875ff3c4e2c6fa264c6a180}},
 {id:,key:,value:{rev:6097-3c873ccf6875ff3c4e2c6fa264c6a180}},
 ...
 [*]
 There were two (old) servers (1.0.0) in production (already having the 
 replication and compaction issues). Then two servers (1.0.x) were added and 
 replication was set up to bring them in sync with the old production servers 
 since the two new servers were meant to replace the old ones (to update 
 node.js application code among other things).




[jira] Commented: (COUCHDB-968) Duplicated IDs in _all_docs

2010-12-22 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12974372#action_12974372
 ] 

Sebastian Cohnen commented on COUCHDB-968:
--

One more thing I forgot: during my tests I wrote a small script to search 
for dupes in _all_docs: https://gist.github.com/752003
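The linked gist isn't reproduced here, but the core of such a dupe check is 
just counting IDs in the _all_docs rows. A rough sketch with made-up sample 
data (the response shape is assumed from the excerpt in this issue):

```javascript
// Count how often each id appears in an _all_docs response and report any
// that appear more than once. `sample` mimics the JSON returned by
// GET /db/_all_docs; the ids and revs are invented for illustration.
function findDuplicateIds(body) {
  var counts = {};
  body.rows.forEach(function (row) {
    counts[row.id] = (counts[row.id] || 0) + 1;
  });
  return Object.keys(counts).filter(function (id) { return counts[id] > 1; });
}

var sample = {
  rows: [
    { id: "9997", key: "9997", value: { rev: "6096-abc" } },
    { id: "9998", key: "9998", value: { rev: "6097-def" } },
    { id: "9998", key: "9998", value: { rev: "6097-def" } }
  ]
};
// findDuplicateIds(sample) -> ["9998"]
```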

 Duplicated IDs in _all_docs
 ---

 Key: COUCHDB-968
 URL: https://issues.apache.org/jira/browse/COUCHDB-968
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 0.10.1, 0.10.2, 0.11.1, 0.11.2, 1.0, 1.0.1, 1.0.2
 Environment: any
Reporter: Sebastian Cohnen
Assignee: Adam Kocoloski
Priority: Blocker
 Fix For: 0.11.3, 1.0.2, 1.1






[jira] Commented: (COUCHDB-993) Replication is crashing when changes feed was consumed

2010-12-21 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12973736#action_12973736
 ] 

Sebastian Cohnen commented on COUCHDB-993:
--

Paul,

as discussed on IRC: yes, the replicator isn't dying anymore. But apparently 
I've hit another bug, causing CouchDB to crash with an OOM error. This is 
likely not related to the reported issue and may just be a special case of my 
setup.

 Replication is crashing when changes feed was consumed
 --

 Key: COUCHDB-993
 URL: https://issues.apache.org/jira/browse/COUCHDB-993
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.0.1, 1.0.2
Reporter: Sebastian Cohnen
 Attachments: couchdb-993.patch






[jira] Updated: (COUCHDB-984) Animated spinner icon has glitch

2010-12-13 Thread Sebastian Cohnen (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Cohnen updated COUCHDB-984:
-

Attachment: 16x16-Spinner.gif

 Animated spinner icon has glitch
 

 Key: COUCHDB-984
 URL: https://issues.apache.org/jira/browse/COUCHDB-984
 Project: CouchDB
  Issue Type: Bug
  Components: Futon
 Environment: any
Reporter: Nathan Vander Wilt
Priority: Minor
 Fix For: 1.0.2

 Attachments: 16x16-Spinner.gif

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 Futon's progress spinner icon found in /share/couchdb/www/image/spinner.gif 
 (used when uploading files and perhaps elsewhere) suffers from the glitch 
 described at http://www.panic.com/blog/2010/10/spinner-rage/, where the fifth 
 frame of the animation flashes more darkly than the others.
 The Panic post on this issue provides a fixed version of the spinner, but it 
 is a Photoshop file:
 http://panic.com/blog/wp-content/files/16x16%20Spinner.psd.zip
 This simply needs to be re-exported by someone with a copy of Adobe's 
 software. (I have a LazyTwitter out on this, otherwise next week I can pester 
 the designers at work for a favor.)




[jira] Commented: (COUCHDB-984) Animated spinner icon has glitch

2010-12-13 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12970858#action_12970858
 ] 

Sebastian Cohnen commented on COUCHDB-984:
--

Done :)

 Animated spinner icon has glitch
 

 Key: COUCHDB-984
 URL: https://issues.apache.org/jira/browse/COUCHDB-984
 Project: CouchDB
  Issue Type: Bug
  Components: Futon
 Environment: any
Reporter: Nathan Vander Wilt
Priority: Minor
 Fix For: 1.0.2

 Attachments: 16x16-Spinner.gif

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h





[jira] Commented: (COUCHDB-968) Duplicated IDs in _all_docs

2010-11-30 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965272#action_12965272
 ] 

Sebastian Cohnen commented on COUCHDB-968:
--

Adam, we don't have hundreds of conflicts, though other documents are in 
conflict too. None has more than 14 conflicts. We tried to apply your patch 
from COUCHDB-888, but it didn't change anything.

 Duplicated IDs in _all_docs
 ---

 Key: COUCHDB-968
 URL: https://issues.apache.org/jira/browse/COUCHDB-968
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 0.10.1, 0.10.2, 0.11.1, 0.11.2, 1.0, 1.0.1, 1.0.2
 Environment: Ubuntu 10.04.
Reporter: Sebastian Cohnen
Priority: Blocker





[jira] Updated: (COUCHDB-968) Duplicated IDs in _all_docs

2010-11-27 Thread Sebastian Cohnen (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Cohnen updated COUCHDB-968:
-

Description: 
We have a database, which is causing serious trouble with compaction and 
replication (huge memory and cpu usage, often causing couchdb to crash b/c all 
system memory is exhausted). Yesterday we discovered that db/_all_docs is 
reporting duplicated IDs (see [1]). Until a few minutes ago we thought that 
there are only few duplicates but today I took a closer look and I found 10 IDs 
which sum up to a total of 922 duplicates. Some of them have only 1 duplicate, 
others have hundreds.

Some facts about the database in question:
* ~13k documents, with 3-5k revs each
* all duplicated documents are in conflict (with 1 up to 14 conflicts)
* compaction is run on a daily basis
* several thousand updates per hour
* multi-master setup with pull replication from each other
* delayed_commits=false on all nodes
* used couchdb versions 1.0.0 and 1.0.x (*)

Unfortunately the database's contents are confidential and I'm not allowed to 
publish it.

[1]: Part of http://localhost:5984/DBNAME/_all_docs
...
{id:9997,key:9997,value:{rev:6096-603c68c1fa90ac3f56cf53771337ac9f}},
{id:,key:,value:{rev:6097-3c873ccf6875ff3c4e2c6fa264c6a180}},
{id:,key:,value:{rev:6097-3c873ccf6875ff3c4e2c6fa264c6a180}},
...

[*]
There were two (old) servers (1.0.0) in production (already having the 
replication and compaction issues). Then two servers (1.0.x) were added and 
replication was set up to bring them in sync with the old production servers 
since the two new servers were meant to replace the old ones (to update node.js 
application code among other things).

  was:
We have a database, which is causing serious trouble with compaction and 
replication (huge memory and cpu usage, often causing couchdb to crash b/c all 
system memory is exhausted). Yesterday we discovered that db/_all_docs is 
reporting duplicated IDs (see [1]). Until a few minutes ago we thought that 
there are only few duplicates but today I took a closer look and I found 10 IDs 
which sum up to a total of 922 duplicates. Some of them have only 1 duplicate, 
others have hundreds.

Some facts about the database in question:
* ~13k documents, with 3-5k revs each
* all duplicated documents are in conflict (with 1 up to 14 conflicts)
* compaction is run on a daily basis
* several thousand updates per hour
* multi-master setup with pull replication from each other

Unfortunately the database's contents are confidential and I'm not allowed to 
publish it.

[1]: Part of http://localhost:5984/DBNAME/_all_docs
...
{id:9997,key:9997,value:{rev:6096-603c68c1fa90ac3f56cf53771337ac9f}},
{id:,key:,value:{rev:6097-3c873ccf6875ff3c4e2c6fa264c6a180}},
{id:,key:,value:{rev:6097-3c873ccf6875ff3c4e2c6fa264c6a180}},
...


 Duplicated IDs in _all_docs
 ---

 Key: COUCHDB-968
 URL: https://issues.apache.org/jira/browse/COUCHDB-968
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.0, 1.0.1, 1.0.2
 Environment: Ubuntu 10.04.
Reporter: Sebastian Cohnen



[jira] Updated: (COUCHDB-968) Duplicated IDs in _all_docs

2010-11-26 Thread Sebastian Cohnen (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Cohnen updated COUCHDB-968:
-

Affects Version/s: 1.0

 Duplicated IDs in _all_docs
 ---

 Key: COUCHDB-968
 URL: https://issues.apache.org/jira/browse/COUCHDB-968
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.0, 1.0.1, 1.0.2
 Environment: Ubuntu 10.04.
Reporter: Sebastian Cohnen





[jira] Created: (COUCHDB-968) Duplicated IDs in _all_docs

2010-11-26 Thread Sebastian Cohnen (JIRA)
Duplicated IDs in _all_docs
---

 Key: COUCHDB-968
 URL: https://issues.apache.org/jira/browse/COUCHDB-968
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
Affects Versions: 1.0.1, 1.0.2
 Environment: Ubuntu 10.04.
Reporter: Sebastian Cohnen


We have a database, which is causing serious trouble with compaction and 
replication (huge memory and cpu usage, often causing couchdb to crash b/c all 
system memory is exhausted). Yesterday we discovered that db/_all_docs is 
reporting duplicated IDs (see [1]). Until a few minutes ago we thought that 
there are only few duplicates but today I took a closer look and I found 10 IDs 
which sum up to a total of 922 duplicates. Some of them have only 1 duplicate, 
others have hundreds.

Some facts about the database in question:
* ~13k documents, with 3-5k revs each
* compaction is run on a daily basis
* several thousand updates per hour
* multi-master setup with pull replication from each other

Unfortunately the database's contents are confidential and I'm not allowed to 
publish it.

[1]: Part of http://localhost:5984/DBNAME/_all_docs
...
{id:9997,key:9997,value:{rev:6096-603c68c1fa90ac3f56cf53771337ac9f}},
{id:,key:,value:{rev:6097-3c873ccf6875ff3c4e2c6fa264c6a180}},
{id:,key:,value:{rev:6097-3c873ccf6875ff3c4e2c6fa264c6a180}},
...




[jira] Updated: (COUCHDB-289) _all_docs should support both GET and POST

2010-11-10 Thread Sebastian Cohnen (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Cohnen updated COUCHDB-289:
-

Attachment: multiple_key_support_for_all_docs_via_get.patch

I implemented the proposed change. I still need to add tests though.

PS: These are my first steps with Erlang, so bear with me :)
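On the client side, the GET variant just carries the same JSON keys array in 
the query string that would otherwise go in the POST body. A sketch of 
building such a request URL (the helper name is made up, and whether the 
server accepts the parameter depends on the attached patch):

```javascript
// Build a GET URL carrying the same keys that would otherwise be POSTed
// as {"keys": [...]}. The URL construction itself is generic; server-side
// support for the keys query parameter is what the patch adds.
function allDocsGetUrl(base, db, keys) {
  var encoded = encodeURIComponent(JSON.stringify(keys));
  return base + "/" + db + "/_all_docs?include_docs=true&keys=" + encoded;
}

var url = allDocsGetUrl("http://localhost:5984", "my_db", ["a", "b"]);
// -> http://localhost:5984/my_db/_all_docs?include_docs=true&keys=%5B%22a%22%2C%22b%22%5D
```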

 _all_docs should support both GET and POST
 --

 Key: COUCHDB-289
 URL: https://issues.apache.org/jira/browse/COUCHDB-289
 Project: CouchDB
  Issue Type: Improvement
  Components: HTTP Interface
Affects Versions: 0.10
Reporter: Matt Aimonetti
 Attachments: multiple_key_support_for_all_docs_via_get.patch


 As of 0.9, if you want to query multiple documents at once and load them, you 
 have to do: 
 'POST' /my_db/_all_docs?include_docs=true and pass the document ids.
 The problem with that approach is that the requests can't be cached. Being 
 able to make a GET request (with the obvious limitations) would make a lot of 
 sense.




[jira] Commented: (COUCHDB-934) inside a design doc, a broken view makes all others views fail

2010-11-04 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12928358#action_12928358
 ] 

Sebastian Cohnen commented on COUCHDB-934:
--

CouchDB's view system processes all views in a design document as a group, so 
the whole view group fails if a single view fails. That the view is rebuilt is 
not due to the fix but due to the fact that the view signature has changed. If 
you want to avoid this behavior, you can put your views in separate design 
documents.

I have to admit that debugging a broken view can be a bit hard. There is 
definitely room for improvement :)
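Concretely, splitting the two views into one design document each keeps the 
broken map function from taking the working one down. A sketch (document names 
illustrative):

```javascript
// Instead of one design doc holding v1 and v2, give each view its own
// design doc so they index -- and fail -- independently.
var docs = [
  {
    _id: "_design/v1",
    language: "javascript",
    views: { v1: { map: "function(doc) { emit(doc._id, null); }" } }
  },
  {
    _id: "_design/v2",
    language: "javascript",
    views: { v2: { map: "thefunction() {}" } } // still broken, but isolated
  }
];
// Only requests to _design/v2 views now fail; _design/v1/_view/v1 keeps working.
```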

 inside a design doc, a broken view makes all others views fail
 --

 Key: COUCHDB-934
 URL: https://issues.apache.org/jira/browse/COUCHDB-934
 Project: CouchDB
  Issue Type: Improvement
  Components: Database Core
Affects Versions: 1.0.1
 Environment: All
Reporter: Mickael Bailly
Priority: Minor

 If a design document contains a broken view, all the other views in it stop 
 working.
  Steps to reproduce:
  1/ create a new database
  2/ create a design doc:
  {
    "_id": "_design/doc1",
    "views": {
      "v1": { "map": "function() {}" },
      "v2": { "map": "thefunction() {}" }
    },
    "language": "javascript"
  }
  3/ create a doc:
  { "_id": "doc1" }
  4/ call the v1 view
  4/ Call the v1 view
 It's annoying because :
 - we're unable to know what view fails to run
 - a fix on the broken view will result in the rebuilding of all other views 
 of the design doc.




[jira] Commented: (COUCHDB-902) Attachments that have recovered from conflict do not accept attachments.

2010-09-30 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12916430#action_12916430
 ] 

Sebastian Cohnen commented on COUCHDB-902:
--

Could this be related to COUCHDB-885?

 Attachments that have recovered from conflict do not accept attachments.
 

 Key: COUCHDB-902
 URL: https://issues.apache.org/jira/browse/COUCHDB-902
 Project: CouchDB
  Issue Type: Bug
  Components: Database Core
 Environment: trunk
Reporter: Paul Joseph Davis
Priority: Critical
 Attachments: couchdb-902-test-case.py


 Apparently if a document has been in a conflict, it will reject requests to 
 add an attachment with a conflict error.
 I've tracked this down to couch_db_updater.erl line 501, but I'm not too 
 familiar with this part of the code so I figured I'd fill out a ticket in 
 case anyone else can go through this more quickly than me.
 Sure would be nice if I could attach a file when I create an issue...




[jira] Commented: (COUCHDB-891) Allow ?keys=[a,b] for GET to _view and _list

2010-09-21 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12912867#action_12912867
 ] 

Sebastian Cohnen commented on COUCHDB-891:
--

The problem with GET is that the usable length of the request URI is limited 
by (browser and server) implementations - RFC 2616 [1] does not specify a 
limit on URI length. I think it's very bad behavior if GET with keys works for 
X keys but no longer for X+1. I'm also not sure what we would gain by this 
change...


[1] RFC 2616, 3.2.1 General Syntax:
 The HTTP protocol does not place any a priori limit on the length of
   a URI. Servers MUST be able to handle the URI of any resource they
   serve, and SHOULD be able to handle URIs of unbounded length if they
   provide GET-based forms that could generate such URIs. A server
   SHOULD return 414 (Request-URI Too Long) status if a URI is longer
   than the server can handle (see section 10.4.15).

  Note: Servers ought to be cautious about depending on URI lengths
  above 255 bytes, because some older client or proxy
  implementations might not properly support these lengths.
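The URI-length point can be made concrete with a short sketch (the base URL and key names below are hypothetical, and the exact byte counts depend on the keys): a GET-style keys parameter grows with the number of keys until it crosses whatever limit a given server or client implementation imposes, which is exactly the "works for X keys but not X+1" problem.

```python
import json
import urllib.parse

def keys_url(base, keys):
    """Build a hypothetical GET URL carrying the view keys as a query parameter."""
    return base + "?keys=" + urllib.parse.quote(json.dumps(keys))

BASE = "http://localhost:5984/db/_design/doc/_view/example"  # hypothetical view

short_url = keys_url(BASE, ["key-%04d" % i for i in range(10)])
long_url = keys_url(BASE, ["key-%04d" % i for i in range(1000)])

# 10 keys fit comfortably; 1000 keys already exceed 8 KB, a common
# server-side URL limit. A POST body has no such implementation limit.
print(len(short_url), len(long_url))
```
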

 Allow ?keys=[a,b] for GET to _view and _list
 

 Key: COUCHDB-891
 URL: https://issues.apache.org/jira/browse/COUCHDB-891
 Project: CouchDB
  Issue Type: New Feature
  Components: HTTP Interface
Affects Versions: 1.0.1
 Environment: -
Reporter: Michael Fellinger
Priority: Minor
 Fix For: 1.0.2


 The idea was already described back in 2008 when the POST 
 {keys:[key1,key2]} API was introduced.
 http://mail-archives.apache.org/mod_mbox/couchdb-dev/200811.mbox/%3c4910d88a.8000...@kore-nordmann.de%3e
 I'm looking at the source right now, but can't figure out how to implement 
 this at the moment, and I'd love this to be part of CouchDB proper.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COUCHDB-885) Delete document with attachment fails after replication.

2010-09-12 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12908518#action_12908518
 ] 

Sebastian Cohnen commented on COUCHDB-885:
--

I cannot reproduce.

But I'm not sure if I understand your steps correctly, so I've written a bash 
script (see http://www.friendpaste.com/6SrHCU1lseUURuwTJEpCpk) to execute your 
steps. Could you have a look and verify that I understood your steps correctly?

 Delete document with attachment fails after replication.
 

 Key: COUCHDB-885
 URL: https://issues.apache.org/jira/browse/COUCHDB-885
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.0.1
 Environment: Mac OSX, Windows XP, Windows 7
Reporter: Nikolai Teofilov

 Steps to reproduce the bug:
 1.  Make database test on a remote couchdb server that resides on a 
 different machine! 
 2.  Create new document:  http://remote-server:5984/test/doc
 3.  Create database test on the local couchdb server.
 4.  Trigger pull replication  http://remote-server:5984/test -> 
 http://localhost:5984/test
 5.  Attach a file to the replicated document on the local couchdb server.
 6.  Trigger push replication http://localhost:5984/test  -> 
 http://remote-server:5984/test
 7.  Delete the replicated document that now contains the attachment on the remote 
 database.
  
   This operation will delete the last revision of the document (after the 
 replication), but the previous revision of the document (before the 
 replication) still exists in the database.
 This defect appears only for replications between databases on two different 
 couchdb servers, and only for documents that were updated with a new 
 attachment.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COUCHDB-885) Delete document with attachment fails after replication.

2010-09-12 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12908528#action_12908528
 ] 

Sebastian Cohnen commented on COUCHDB-885:
--

okay, got an error in my script (I've updated the paste). Now I can confirm the 
behavior.

 Delete document with attachment fails after replication.
 

 Key: COUCHDB-885
 URL: https://issues.apache.org/jira/browse/COUCHDB-885
 Project: CouchDB
  Issue Type: Bug
  Components: Replication
Affects Versions: 1.0.1
 Environment: Mac OSX, Windows XP, Windows 7
Reporter: Nikolai Teofilov

 Steps to reproduce the bug:
 1.  Make database test on a remote couchdb server that resides on a 
 different machine! 
 2.  Create new document:  http://remote-server:5984/test/doc
 3.  Create database test on the local couchdb server.
 4.  Trigger pull replication  http://remote-server:5984/test -> 
 http://localhost:5984/test
 5.  Attach a file to the replicated document on the local couchdb server.
 6.  Trigger push replication http://localhost:5984/test  -> 
 http://remote-server:5984/test
 7.  Delete the replicated document that now contains the attachment on the remote 
 database.
  
   This operation will delete the last revision of the document (after the 
 replication), but the previous revision of the document (before the 
 replication) still exists in the database.
 This defect appears only for replications between databases on two different 
 couchdb servers, and only for documents that were updated with a new 
 attachment.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COUCHDB-883) Wrong document returned due to incorrect URL decoding

2010-09-10 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12907922#action_12907922
 ] 

Sebastian Cohnen commented on COUCHDB-883:
--

RFC 2396: G.2. Modifications from both RFC 1738 and RFC 1808

The plus "+", dollar "$", and comma "," characters have been added to those in 
the "reserved" set, since they are treated as reserved within the query 
component.

Therefore you need to URI-encode the plus (+) character according to the RFC.
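As a quick illustration of the encoding rule (using Python's urllib rather than CouchDB itself): a literal + in a document id must be percent-encoded, because form-style decoding maps an unencoded + to a space.

```python
from urllib.parse import quote, unquote_plus

# Percent-encode a document id containing "+" before putting it in a URL:
assert quote("a+b") == "a%2Bb"
assert quote("a b") == "a%20b"

# Form-style decoding (what makes the bug bite) turns an unencoded "+"
# into a space, conflating the two document ids:
assert unquote_plus("a+b") == "a b"
```
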

 Wrong document returned due to incorrect URL decoding
 -

 Key: COUCHDB-883
 URL: https://issues.apache.org/jira/browse/COUCHDB-883
 Project: CouchDB
  Issue Type: Bug
  Components: HTTP Interface
Affects Versions: 1.0.1
 Environment: Kubuntu 10.4, Firefox 3.6.8
Reporter: Taras Puchko

 I have two documents in my database: "a b" and "a+b". The first can be 
 retrieved via /mydb/a%20b and the second via /mydb/a%2Bb.
 When I enter /mydb/a b in the browser, it automatically encodes it, so the 
 correct document is returned. But when I enter /mydb/a+b the URL is sent 
 intact since + is a valid character in a path segment according to [1]. The 
 problem is that GET /mydb/a+b makes CouchDB return the document with id "a 
 b" and not the intended one, which is against the URI spec.
 For an informal description of URL encoding one may refer to [2].
 [1]: http://www.ietf.org/rfc/rfc2396.txt
 [2]: 
 http://www.lunatech-research.com/archives/2009/02/03/what-every-web-developer-must-know-about-url-encoding

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COUCHDB-837) Adding stale=partial

2010-07-27 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12892815#action_12892815
 ] 

Sebastian Cohnen commented on COUCHDB-837:
--

My first thought was something like stale=ok&suppress_update=true (I like 
readable parameter names).

But what about keeping stale=ok in its current form for backward compatibility 
and introducing a new parameter? stale=ok is somewhat understandable (and known), 
but combining it with this new behavior feels kind of odd to me. This would 
free the mindset, and you wouldn't need to construct a new parameter in addition 
to stale=ok or a new value for the stale param. And no, I don't have a good 
idea for a name for this case :)

 Adding stale=partial
 

 Key: COUCHDB-837
 URL: https://issues.apache.org/jira/browse/COUCHDB-837
 Project: CouchDB
  Issue Type: Improvement
 Environment: all released and unreleased versions
Reporter: Filipe Manana
Assignee: Filipe Manana
 Attachments: stale_partial.patch


 Inspired by Matthias' latest post, at 
 http://www.paperplanes.de/2010/7/26/10_annoying_things_about_couchdb.html, 
 section Views are updated on read access, I added a new value to the 
 stale option named partial (possibly we need to find a better name).
 It behaves exactly like stale=ok but after replying to the client, it 
 triggers a view update in the background.
 Patch attached.
 If no one disagrees this isn't a good feature, or suggest a better parameter 
 value name, I'll commit.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COUCHDB-831) badarity

2010-07-22 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12891151#action_12891151
 ] 

Sebastian Cohnen commented on COUCHDB-831:
--

You need to query this view with  curl 
http://127.0.0.1:5984/argh/_design/foo/_views/foo (since your view's name is 
foo, not bar). Although I'd like to suggest giving a slightly more readable 
(and understandable) error message here. What about View 
_design/foo/_views/bar not found? Returning a 404 would be more 
appropriate, too.

 badarity
 

 Key: COUCHDB-831
 URL: https://issues.apache.org/jira/browse/COUCHDB-831
 Project: CouchDB
  Issue Type: Bug
Affects Versions: 1.0
 Environment: mac os x
Reporter: Harry Vangberg

 I have an empty database with nothing but this design document:
 {
_id: _design/foo,
_rev: 1-19b6ac05cd5e878bbe8193c3fbce57bb,
language: javascript,
views: {
foo: {
map: function(doc) {emit(1,2);}
}
}
 }
 Which fails miserably. 
 $ curl http://127.0.0.1:5984/argh/_design/foo/_views/bar
 [Thu, 22 Jul 2010 13:27:53 GMT] [error] [0.1015.0] Uncaught error in HTTP 
 request: {error,
  {badarity,
   {#Funcouch_httpd_db.5.100501499,
[{httpd,
  {mochiweb_request,#Port0.2266,'GET',
   /argh/_design/foo/_views/bar,
   {1,1},
   {3,
{user-agent,
 {'User-Agent',
  curl/7.19.7 
 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3},
 {host,
  {'Host',127.0.0.1:5984},
  {accept,{'Accept',*/*},nil,nil},
  nil},
 nil}}},
  127.0.0.1,'GET',
  [argh,_design,foo,
   _views,bar],
  {dict,5,16,16,8,80,48,
   {[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[]},
   {{[[_design|
   #Funcouch_httpd.8.61263750]],
 [],
 [[_view_cleanup|
   #Funcouch_httpd.8.61263750]],
 [],[],[],[],[],
 [[_compact|
   #Funcouch_httpd.8.61263750]],
 [],[],
 [[_temp_view|
   #Funcouch_httpd.8.61263750]],
 [[_changes|
   #Funcouch_httpd.8.61263750]],
 [],[],[]}}},
  {user_ctx,null,
   [_admin],
   {couch_httpd_auth, 
 default_authentication_handler}},
  undefined,
  {dict,6,16,16,8,80,48,
   {[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[]},
   {{[],
 [[_show|
   #Funcouch_httpd.10.132977763]],
 [[_info|
   #Funcouch_httpd.10.132977763],
  [_list|
   #Funcouch_httpd.10.132977763]],
 [[_update|
   #Funcouch_httpd.10.132977763]],
 [],[],[],[],[],
 [[_rewrite|
   #Funcouch_httpd.10.132977763]],
 [],[],[],
 [[_view|
   #Funcouch_httpd.10.132977763]],
 [],[]}}},
  undefined,#Funcouch_httpd.6.96187723,
  {dict,13,16,16,8,80,48,
   

[jira] Commented: (COUCHDB-831) badarity

2010-07-22 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12891155#action_12891155
 ] 

Sebastian Cohnen commented on COUCHDB-831:
--

Ignore my comment, pls :)

 badarity
 

 Key: COUCHDB-831
 URL: https://issues.apache.org/jira/browse/COUCHDB-831
 Project: CouchDB
  Issue Type: Bug
Affects Versions: 1.0
 Environment: mac os x
Reporter: Harry Vangberg

 I have an empty database with nothing but this design document:
 {
_id: _design/foo,
_rev: 1-19b6ac05cd5e878bbe8193c3fbce57bb,
language: javascript,
views: {
foo: {
map: function(doc) {emit(1,2);}
}
}
 }
 Which fails miserably. 
 $ curl http://127.0.0.1:5984/argh/_design/foo/_views/bar
 [Thu, 22 Jul 2010 13:27:53 GMT] [error] [0.1015.0] Uncaught error in HTTP 
 request: {error,
  {badarity,
   {#Funcouch_httpd_db.5.100501499,
[{httpd,
  {mochiweb_request,#Port0.2266,'GET',
   /argh/_design/foo/_views/bar,
   {1,1},
   {3,
{user-agent,
 {'User-Agent',
  curl/7.19.7 
 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3},
 {host,
  {'Host',127.0.0.1:5984},
  {accept,{'Accept',*/*},nil,nil},
  nil},
 nil}}},
  127.0.0.1,'GET',
  [argh,_design,foo,
   _views,bar],
  {dict,5,16,16,8,80,48,
   {[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[]},
   {{[[_design|
   #Funcouch_httpd.8.61263750]],
 [],
 [[_view_cleanup|
   #Funcouch_httpd.8.61263750]],
 [],[],[],[],[],
 [[_compact|
   #Funcouch_httpd.8.61263750]],
 [],[],
 [[_temp_view|
   #Funcouch_httpd.8.61263750]],
 [[_changes|
   #Funcouch_httpd.8.61263750]],
 [],[],[]}}},
  {user_ctx,null,
   [_admin],
   {couch_httpd_auth, 
 default_authentication_handler}},
  undefined,
  {dict,6,16,16,8,80,48,
   {[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[]},
   {{[],
 [[_show|
   #Funcouch_httpd.10.132977763]],
 [[_info|
   #Funcouch_httpd.10.132977763],
  [_list|
   #Funcouch_httpd.10.132977763]],
 [[_update|
   #Funcouch_httpd.10.132977763]],
 [],[],[],[],[],
 [[_rewrite|
   #Funcouch_httpd.10.132977763]],
 [],[],[],
 [[_view|
   #Funcouch_httpd.10.132977763]],
 [],[]}}},
  undefined,#Funcouch_httpd.6.96187723,
  {dict,13,16,16,8,80,48,
   {[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[]},
   {{[[_restart|
   #Funcouch_httpd.6.96187723],
  [_replicate|
   

[jira] Commented: (COUCHDB-817) Natively support coffeescript

2010-07-06 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12885435#action_12885435
 ] 

Sebastian Cohnen commented on COUCHDB-817:
--

Maybe you can extend couchapp to do some preprocessing and compile your 
CoffeeScript to JS before it gets pushed to CouchDB. I don't see native 
CoffeeScript support coming in the near future :)

 Natively support coffeescript
 -

 Key: COUCHDB-817
 URL: https://issues.apache.org/jira/browse/COUCHDB-817
 Project: CouchDB
  Issue Type: New Feature
Reporter: Matt Parker

 i'd love to be able to put coffeescript map and reduce function/files 
 directly into my ddoc, instead of having to compile them first.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COUCHDB-812) implement randomization in views resultset

2010-06-28 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12883209#action_12883209
 ] 

Sebastian Cohnen commented on COUCHDB-812:
--

It strongly depends on your particular use case.

But if you really need to fetch large sets of purely random documents, this is 
always going to be a huge performance penalty, since you cannot utilize indexes. 
If you want to avoid network overhead, you can use external 
processes [1] to do that for you. You could query your external process via 
CouchDB, let the process build the set, and have it sent back to you in one chunk.


[1] http://wiki.apache.org/couchdb/ExternalProcesses

 implement randomization in views resultset
 --

 Key: COUCHDB-812
 URL: https://issues.apache.org/jira/browse/COUCHDB-812
 Project: CouchDB
  Issue Type: Wish
  Components: Database Core
Affects Versions: 0.11
 Environment: CouchDB
Reporter: Mickael Bailly
Priority: Minor

 This is a proposal for a new feature in CouchDB : allow a randomization of 
 rows in a view response. We can for example add a randomize query parameter...
 This request would probably not return the same results for the same request.
 As an example :
 GET /db/_design/doc/_view/example :
 {
   ..
   rows: [
 {key: 1 ...},
 {key: 2 ...},
 {key: 3 ...}
   ]
 }
 GET /db/_design/doc/_view/example?randomize=true :
 {
   ..
   rows: [
 {key: 2 ...},
 {key: 3 ...},
 {key: 1 ...}
   ]
 }
 GET /db/_design/doc/_view/example?randomize=true :
 {
   ..
   rows: [
 {key: 1 ...},
 {key: 3 ...},
 {key: 2 ...}
   ]
 }
 This is a feature hard to implement client-side (but by reading all doc ids 
 and use client-side random function). It's implemented by the RDBMS from 
 ages, probably for the very same reasons : if we should read all the rows 
 client-side to random-select some of them, performances are awful.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COUCHDB-801) Add 'ForceReReduce=true' option to views to ensure that reduce function is called

2010-06-16 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12879318#action_12879318
 ] 

Sebastian Cohnen commented on COUCHDB-801:
--

I was talking about SpiderMonkey's JavaScript shell (see [1]). CLI is just 
short for command-line interface btw :) I'm pretty sure the JavaScript 
shell is available for Windows as well. I think introducing something like 
forcerereduce=true is a very poor substitute for unit-testing map/reduce funs, 
and that is the only really useful use case I can think of at the moment.

[1] 
https://developer.mozilla.org/En/SpiderMonkey/Introduction_to_the_JavaScript_shell
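The unit-testing idea can be sketched outside CouchDB entirely. CouchDB reduce functions are JavaScript, but the same structure translates to a hypothetical Python analogue, which lets you exercise the rereduce=true path directly instead of forcing it server-side:

```python
def sum_reduce(keys, values, rereduce=False):
    # Hypothetical analogue of a CouchDB reduce function: on the first
    # pass 'values' are emitted values; on rereduce they are the outputs
    # of earlier reduce calls and 'keys' is None.
    return sum(values)

# First-phase reductions over two hypothetical btree chunks:
partials = [sum_reduce(None, chunk) for chunk in ([1, 2], [3, 4])]

# Rereduce over the partial results -- the path that is hard to trigger
# from a small database, exercised here directly:
total = sum_reduce(None, partials, rereduce=True)
assert partials == [3, 7] and total == 10
```
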

 Add 'ForceReReduce=true' option to views to ensure that reduce function is 
 called
 -

 Key: COUCHDB-801
 URL: https://issues.apache.org/jira/browse/COUCHDB-801
 Project: CouchDB
  Issue Type: Improvement
 Environment: Any
Reporter: Bryan White
Priority: Minor

 AFAIK there is no way to force a Map Reduce with rereduce=true to happen, 
 other than creating an arbitrary large number of documents. So any bugs in 
 handling reduce calls with rereduce=true might not be discovered at 
 development time.
 I propose something like a 'ForceReReduce=true' option which will call Reduce 
 with rereduce=true (either once overall, or maybe once for each document) in 
 order to shake any lurking bugs out of the tree. 
 If there *is* such a feature, I apologise; I've looked but can't find it.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COUCHDB-791) Changes not written if server shutdown during delayed_commits period

2010-06-14 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12878589#action_12878589
 ] 

Sebastian Cohnen commented on COUCHDB-791:
--

@chris: when couchdb -d calls /_ensure_full_commit before it kills the pid, how 
do you guarantee, in high-traffic use cases, that there are no more inserts/updates 
while or after _ensure_full_commit runs and BEFORE the couchdb process is killed? 
I think if there is such a mechanism, it should work reliably in all cases. 
Otherwise the current approach of killing the process and hoping for the best is 
already sufficient. (For scheduled maintenance there are ways to gracefully 
shut down couch, like blocking further incoming requests and waiting until all 
updates are finished, but you have to hack it yourself at the moment.)

 Changes not written if server shutdown during delayed_commits period
 

 Key: COUCHDB-791
 URL: https://issues.apache.org/jira/browse/COUCHDB-791
 Project: CouchDB
  Issue Type: Bug
Affects Versions: 0.11.1
 Environment: Linux (Ubuntu 10.04)
Reporter: Matt Goodall

 If the couchdb server is shutdown (couchdb -d, Ctrl+C at the console, etc) 
 during the delayed commits period then buffered updates are lost.
 Simple script to demonstrate the problem is:
 db=http://localhost:5984/scratch
 curl $db -X DELETE
 curl $db -X PUT
 curl $db -X POST -d '{}'
 /path/to/couchdb/bin/couchdb -d
 When couchdb is started again the database is empty.
 Affects 0.11.x and trunk branches.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COUCHDB-754) Improve couch_file write performance

2010-05-05 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12864492#action_12864492
 ] 

Sebastian Cohnen commented on COUCHDB-754:
--

Sorry for being OT here, but what does this graph show on its axes? Time on x 
in sec? Response time in ms on the y-axis? mikeal's readme wasn't clear on this 
either and I'm just curious :)

 Improve couch_file write performance
 

 Key: COUCHDB-754
 URL: https://issues.apache.org/jira/browse/COUCHDB-754
 Project: CouchDB
  Issue Type: Improvement
 Environment: some code might be platform-specific
Reporter: Adam Kocoloski
 Fix For: 1.1

 Attachments: cheaper-appending-v2.patch, cheaper-appending.patch


 I've got a number of possible enhancements to couch_file floating around in 
 my head, wanted to write them down.
 * Use fdatasync instead of fsync.  Filipe posted a patch to the OTP file 
 driver [1] that adds a new file:datasync/1 function.  I suspect that we won't 
 see much of a performance gain from this switch because we append to the file 
 and thus need to update the file metadata anyway.  On the other hand, I'm 
 fairly certain fdatasync is always safe for our needs, so if it is ever more 
 efficient we should use it.  Obviously, we'll need to fall back to 
 file:sync/1 on platforms where the datasync function is not available.
 * Use file:pwrite/2 to batch together multiple outstanding write requests.  
 This is essentially Paul's zip_server [2].  In order to take full advantage 
 of it we need to patch couch_btree to update nodes in parallel.  Currently 
 there should only be 1 outstanding write request in a couch_file at a time, 
 so it wouldn't help at all.
 * Open the file in append mode and stop seeking to eof in user space.  We 
 never modify files (aside from truncating, which is rare enough to be handled 
 separately), so perhaps it would help with performance if we let the kernel 
 deal with the seek.  We'd still need a way to get the file size for the 
 make_blocks function.  I'm wondering if file:read_file_info(Fd) is more 
 efficient than file:position(Fd, eof) for this purpose.
 A caveat - I'm not sure if append-only files are compatible with the previous 
 enhancement.  There is no file:write/2, and I have no idea how file:pwrite 
 behaves on a file which is opened append-only.  Is the Pos ignored, or is it 
 an error?  Will have to test.
 * Use O_DSYNC instead of fsync/fdatasync.  This one is inspired by antirez' 
 recent blog post [3] and some historical discussions on pgsql-performance.  
 Basically, it seems that opening a file with O_DSYNC (or O_SYNC on Linux, 
 which is currently the same thing) and doing all synchronous writes is 
 reasonably fast.  Antirez' tests showed 250 µs delays for (tiny) synchronous 
 writes, compared to 40 ms delays for fsync and fdatasync on his ext4 system.
 At the very least, this looks to be a compelling choice for file access when 
 the server is running with delayed_commits = true.  We'd need to patch the 
 OTP file driver again, and also investigate the cross-platform support.  In 
 particular, I don't think it works on NFS.
 [1]: http://github.com/fdmanana/otp/tree/fdatasync
 [2]: http://github.com/davisp/zip_server
 [3]: http://antirez.com/post/fsync-different-thread-useless.html
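The fsync/fdatasync distinction discussed above can be illustrated in a few lines of Python (an editor's sketch under stated assumptions, not the OTP patch itself): fdatasync flushes the file data and whatever metadata is needed to read it back, but may skip updates such as mtime that fsync would also flush, and we fall back to fsync where it is unavailable, mirroring the file:sync/1 fallback.

```python
import os
import tempfile

# Prefer fdatasync (cheaper durability point); fall back to fsync on
# platforms that do not provide it.
datasync = getattr(os, "fdatasync", os.fsync)

# Sketch of an append-only write followed by a durability point:
fd, path = tempfile.mkstemp()
os.write(fd, b"append-only record\n")
datasync(fd)      # data is on stable storage after this returns
os.close(fd)
```
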

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COUCHDB-749) CouchDB does not persist large values of Numbers correctly.

2010-04-24 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12860553#action_12860553
 ] 

Sebastian Cohnen commented on COUCHDB-749:
--

I would vote for (a). If CouchDB (or any part of it, like SpiderMonkey) cannot 
handle integers above a certain size, they should be rejected. If developers 
need to store bigger values, they are still free to encode them differently 
(e.g. as strings).

 CouchDB does not persist large values of Numbers correctly.
 ---

 Key: COUCHDB-749
 URL: https://issues.apache.org/jira/browse/COUCHDB-749
 Project: CouchDB
  Issue Type: Bug
Affects Versions: 0.11
 Environment: All
Reporter: Jarrod Roberson

 All the following operations exhibit the same bug, large numbers don't get 
 persisted correctly. They get something added to them for some reason.
 9223372036854775807 == java.lang.Long.MAX_VALUE
 1: go into Futon, create a new document and create a new field and enter the 
 number 9223372036854775807, click the green check mark, the number changes to 
 9223372036854776000 even before you save it.
 2. curl -X PUT http://localhost:5984/test/longTest -d '{"value": 
 9223372036854775807}', the number gets persisted as 9223372036854776000
 trying to persist System.currentTimeMillis() from Java causes the same 
 thing to happen occasionally.
 This seems to be a pretty serious bug if I can't trust that my data is not 
 being corrupted when submitted to the database.
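The value drift reported above is consistent with IEEE 754 double rounding (the number type JavaScript engines such as SpiderMonkey use), rather than corruption in the storage layer; a quick Python check reproduces it:

```python
# Long.MAX_VALUE from the report, 2**63 - 1, exceeds the 53-bit mantissa
# of an IEEE 754 double, so it rounds to the nearest representable double
# (2**63). Older JS engines display that double as 9223372036854776000.
n = 9223372036854775807
rounded = int(float(n))
assert rounded == 9223372036854775808   # one ulp of drift at this scale
assert n > 2**53                        # past the exact-integer range of a double
```
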

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COUCHDB-583) adding ?compression=(gzip|deflate) optional parameter to the attachment download API

2009-11-29 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12783418#action_12783418
 ] 

Sebastian Cohnen commented on COUCHDB-583:
--

One question:

Why not use Content-Encoding headers? Or is there a particular reason to expose 
this via the GET request? See 
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.3

 adding ?compression=(gzip|deflate) optional parameter to the attachment 
 download API
 

 Key: COUCHDB-583
 URL: https://issues.apache.org/jira/browse/COUCHDB-583
 Project: CouchDB
  Issue Type: New Feature
  Components: HTTP Interface
 Environment: CouchDB trunk revision 885240
Reporter: Filipe Manana
 Attachments: jira-couchdb-583-1st-try-trunk.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 The following new feature is added in the patch following this ticket 
 creation.
 A new optional http query parameter compression is added to the attachments 
 API.
 This parameter can have one of the values:  gzip or deflate.
 When asking for an attachment (GET http request), if the query parameter 
 compression is found, CouchDB will send the attachment compressed to the 
 client (and sets the header Content-Encoding with gzip or deflate).
 Further, it adds a new config option treshold_for_chunking_comp_responses 
 (httpd section) that specifies an attachment length threshold. If an 
 attachment has a length >= this threshold, the http response will be 
 chunked (besides compressed).
 Note that using non chunked compressed  body responses requires storing all 
 the compressed blocks in memory and then sending each one to the client. This 
 is a necessary evil, as we only know the length of the compressed body 
 after compressing all the body, and we need to set the Content-Length 
 header for non chunked responses. By sending chunked responses, we can send 
 each compressed block immediately, without accumulating all of them in memory.
 Examples:
 $ curl http://localhost:5984/testdb/testdoc1/readme.txt?compression=gzip
 $ curl http://localhost:5984/testdb/testdoc1/readme.txt?compression=deflate
 $ curl http://localhost:5984/testdb/testdoc1/readme.txt   # attachment will 
 not be compressed
 $ curl http://localhost:5984/testdb/testdoc1/readme.txt?compression=rar   # 
 will give a 500 error code
 Etap test case included.
 Feedback would be very welcome.
 cheers
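The buffering trade-off described above is easy to see with Python's standard compression modules (a sketch for illustration; the patch itself is Erlang): the compressed length, needed for a Content-Length header, is only known after the entire body has been compressed.

```python
import gzip
import zlib

payload = b"some attachment bytes " * 64

# The two proposed compression= parameter values:
gz = gzip.compress(payload)          # gzip container
deflated = zlib.compress(payload)    # zlib/deflate stream

# A non-chunked response must buffer the whole compressed body first,
# because Content-Length == len(gz) is unknown until compression ends;
# chunked responses can stream each compressed block immediately.
print(len(payload), len(gz), len(deflated))
assert gzip.decompress(gz) == payload
assert zlib.decompress(deflated) == payload
```
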

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (COUCHDB-482) batch_save-Test is failing on webkit-based browsers

2009-08-23 Thread Sebastian Cohnen (JIRA)
batch_save-Test is failing on webkit-based browsers
---

 Key: COUCHDB-482
 URL: https://issues.apache.org/jira/browse/COUCHDB-482
 Project: CouchDB
  Issue Type: Bug
  Components: Test Suite
Affects Versions: 0.10
 Environment: OS X 10.5.7, using 
CouchDBX-0.10.0a-R13B01-pl3-UNSTABLE.app
Reporter: Sebastian Cohnen
Priority: Minor
 Attachments: 
0001-Fixed-batch_save-test.-Added-a-browser-based-excepti.patch

The batch_save test is failing because WebKit always sets the HTTP statusText 
to OK, at least on OS X. See 
http://trac.webkit.org/browser/trunk/WebCore/platform/network/mac/ResourceResponseMac.mm#L82

I'm going to attach a patch to this issue. The patch adds a simple 
user-agent-based exception. Not pretty, but it works until the WebKit guys 
fix their problem.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (COUCHDB-482) batch_save-Test is failing on webkit-based browsers

2009-08-23 Thread Sebastian Cohnen (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Cohnen updated COUCHDB-482:
-

Attachment: 0001-Fixed-batch_save-test.-Added-a-browser-based-excepti.patch

 batch_save-Test is failing on webkit-based browsers
 ---

 Key: COUCHDB-482
 URL: https://issues.apache.org/jira/browse/COUCHDB-482
 Project: CouchDB
  Issue Type: Bug
  Components: Test Suite
Affects Versions: 0.10
 Environment: OS X 10.5.7, using 
 CouchDBX-0.10.0a-R13B01-pl3-UNSTABLE.app
Reporter: Sebastian Cohnen
Priority: Minor
 Attachments: 
 0001-Fixed-batch_save-test.-Added-a-browser-based-excepti.patch


  The batch_save test is failing because WebKit always sets the HTTP statusText 
  to OK, at least on OS X. See 
  http://trac.webkit.org/browser/trunk/WebCore/platform/network/mac/ResourceResponseMac.mm#L82
  I'm going to attach a patch to this issue. The patch adds a simple 
  user-agent-based exception. Not pretty, but it works until the WebKit guys 
  fix their problem.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (COUCHDB-483) changes-Test does not run on webkit-based browsers

2009-08-23 Thread Sebastian Cohnen (JIRA)
changes-Test does not run on webkit-based browsers
--

 Key: COUCHDB-483
 URL: https://issues.apache.org/jira/browse/COUCHDB-483
 Project: CouchDB
  Issue Type: Bug
  Components: Test Suite
Affects Versions: 0.10
 Environment: OS X 10.5.7, using 
CouchDBX-0.10.0a-R13B01-pl3-UNSTABLE.app
Reporter: Sebastian Cohnen


The changes-test fails because of a strange, or maybe broken, behaviour of 
webkit-based browsers (tested on Safari Version 4.0.3 (5531.9) and WebKit 
nightly build #47686). The asynchronous request only completes correctly 
if no other JavaScript is running, so it is not possible at the moment to 
wait for new content from the _changes?feed=continuous request.

The only solution to this issue I see is to skip this test on WebKit/Safari.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COUCHDB-483) changes-Test does not run on webkit-based browsers

2009-08-23 Thread Sebastian Cohnen (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12746663#action_12746663
 ] 

Sebastian Cohnen commented on COUCHDB-483:
--

I'll attach a patch disabling the affected parts of the changes-test shortly.

 changes-Test does not run on webkit-based browsers
 --

 Key: COUCHDB-483
 URL: https://issues.apache.org/jira/browse/COUCHDB-483
 Project: CouchDB
  Issue Type: Bug
  Components: Test Suite
Affects Versions: 0.10
 Environment: OS X 10.5.7, using 
 CouchDBX-0.10.0a-R13B01-pl3-UNSTABLE.app
Reporter: Sebastian Cohnen

 The changes-test fails because of strange, possibly broken, behaviour in 
 WebKit-based browsers (tested on Safari Version 4.0.3 (5531.9) and WebKit 
 nightly build #47686). The asynchronous request only completes correctly 
 if no other JavaScript is running, so it is currently not possible to 
 wait for new content from the _changes?feed=continuous request.
 The only solution to this issue I see is to skip this test on WebKit/Safari.




[jira] Updated: (COUCHDB-483) changes-Test does not run on webkit-based browsers

2009-08-23 Thread Sebastian Cohnen (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Cohnen updated COUCHDB-483:
-

Attachment: 0001-Fixed-changes-test.-webkit-currently-does-not-comple.patch

This patch simply disables the affected parts of the test so that it runs 
properly on WebKit.

 changes-Test does not run on webkit-based browsers
 --

 Key: COUCHDB-483
 URL: https://issues.apache.org/jira/browse/COUCHDB-483
 Project: CouchDB
  Issue Type: Bug
  Components: Test Suite
Affects Versions: 0.10
 Environment: OS X 10.5.7, using 
 CouchDBX-0.10.0a-R13B01-pl3-UNSTABLE.app
Reporter: Sebastian Cohnen
 Attachments: 
 0001-Fixed-changes-test.-webkit-currently-does-not-comple.patch


 The changes-test fails because of strange, possibly broken, behaviour in 
 WebKit-based browsers (tested on Safari Version 4.0.3 (5531.9) and WebKit 
 nightly build #47686). The asynchronous request only completes correctly 
 if no other JavaScript is running, so it is currently not possible to 
 wait for new content from the _changes?feed=continuous request.
 The only solution to this issue I see is to skip this test on WebKit/Safari.
