Re: multiview on github

2010-08-22 Thread Norman Barker
That should be 'couchdb should not be in version control'; sorry, I'm not
used to git.

On Sun, Aug 22, 2010 at 9:22 PM, Norman Barker  wrote:
> Bob,
>
> I am testing on 1+ documents; I appreciate that we need to
> establish when a multi-process approach, as opposed to a tbd one
> (suggestions welcome), is required. The startkey / endkey is an issue,
> though; is there a better way to test inclusion?
>
> The speed of the multiview is directly linked to the size of the
> smallest view result, though, so the total number of documents isn't a factor.
>
> I am still thinking about fti. I am testing with CLucene, but the
> external handler problem is the same: how to make it stream in order.
>
> I will fix the local_dev.ini problem tomorrow, couchdb should be in
> version control.
>
> Any hints on how to test inclusion are appreciated; they would greatly
> speed up collation.
>
> thanks,
>
> Norman
>

Re: multiview on github

2010-08-22 Thread Norman Barker
Bob,

I am testing on 1+ documents; I appreciate that we need to
establish when a multi-process approach, as opposed to a tbd one
(suggestions welcome), is required. The startkey / endkey is an issue,
though; is there a better way to test inclusion?

The speed of the multiview is directly linked to the size of the
smallest view result, though, so the total number of documents isn't a factor.
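The claim above can be illustrated with a small sketch (purely illustrative Python, not the actual Erlang implementation; the name multiview_intersect is invented): iterate the smallest view result and probe the others for membership, so the number of rows visited scales with the smallest result set rather than the whole database.

```python
# Hypothetical sketch of a multiview intersection: keep only rows of the
# smallest view result whose doc id appears in every other view result.

def multiview_intersect(view_results):
    """view_results: list of lists of (key, doc_id) rows, one per view."""
    ordered = sorted(view_results, key=len)
    smallest, rest = ordered[0], ordered[1:]
    # Build an id set per remaining view for O(1) membership tests.
    rest_ids = [set(doc_id for _key, doc_id in rows) for rows in rest]
    return [row for row in smallest
            if all(row[1] in ids for ids in rest_ids)]

view_a = [(1, "doc1"), (2, "doc2"), (3, "doc3")]
view_b = [(10, "doc2"), (11, "doc3"), (12, "doc4"), (13, "doc5")]
print(multiview_intersect([view_a, view_b]))  # [(2, 'doc2'), (3, 'doc3')]
```

Only the loop over the smallest result does per-row work; the larger results are touched once to build the id sets.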

I am still thinking about fti. I am testing with CLucene, but the
external handler problem is the same: how to make it stream in order.

I will fix the local_dev.ini problem tomorrow, couchdb should be in
version control.

Any hints on how to test inclusion are appreciated; they would greatly
speed up collation.

thanks,

Norman




Re: multiview on github

2010-08-22 Thread Robert Newson
I'm concerned about the performance of this on non-trivial databases,
given the iteration of all items between startkey and endkey. I don't
have time to test it this week but I'd be interested to hear the time
it took to do a multiview on two views of, say, a million rows each
(especially as compared to the two normal view calls).

I was also intrigued to see the code handles fti too, a problem I have
spent some time thinking about without finding a satisfactorily
performant solution. I note that, as written, it doesn't appear to
work because the fti call (I'm assuming couchdb-lucene) will only
return the top N matching hits, so at best you can filter those
against another view (perhaps that's useful?). The trick to merging a
view and an fti result together would be to get the results from both
in the same order and step through the rows, filtering as you go.
Sorting in Lucene has a large memory hit so I gave up on that
solution.
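The merge Robert describes can be sketched as follows (an illustrative Python sketch, not CouchDB or couchdb-lucene code; merge_filter is an invented name): if both the view and the fti result can be streamed in the same order, a single linear pass intersects them without buffering either side.

```python
# Hypothetical merge-step filter over two sorted streams of doc ids:
# advance whichever side is behind, emit on a match.

def merge_filter(stream_a, stream_b):
    """Yield ids present in both sorted iterators of doc ids."""
    it_a, it_b = iter(stream_a), iter(stream_b)
    a, b = next(it_a, None), next(it_b, None)
    while a is not None and b is not None:
        if a == b:
            yield a
            a, b = next(it_a, None), next(it_b, None)
        elif a < b:
            a = next(it_a, None)  # view stream is behind
        else:
            b = next(it_b, None)  # fti stream is behind

view_ids = ["doc1", "doc3", "doc5", "doc7"]
fti_ids = ["doc2", "doc3", "doc7"]
print(list(merge_filter(view_ids, fti_ids)))  # ['doc3', 'doc7']
```

The catch, as noted above, is getting Lucene to emit its hits in that shared order cheaply in the first place.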

Finally, your patch appears to add two generated files (local_dev.ini
and etc/init.d/couchdb) to the branch which should be fixed (add your
settings to default.init.tpl.in instead).

I should end by saying that if the problems above can be solved then
this would be a very useful addition to CouchDB and one that is
frequently requested. It might also be a model for multi-machine
views.

B.



Re: multiview on github

2010-08-22 Thread Norman Barker
I would like to take this multiview code and have it added to trunk if
possible, what are the next steps?

thanks,

Norman

On Wed, Aug 18, 2010 at 11:44 AM, Norman Barker  wrote:
> I have made
>
> http://github.com/normanb/couchdb
>
> which is a fork of the latest couchdb trunk with the multiview code
> and tests added.
>
> If geocouch is available then it can still be used.
>
> There are a couple of questions about the multiview on the user/dev
> list, so I will be adding some more test cases today.
>
> thanks,
>
> Norman
>
> On Tue, Aug 17, 2010 at 9:23 PM, Norman Barker  
> wrote:
>> This is possible; I forked geocouch since I use it, but I have already
>> separated the geocouch dependencies from trunk.
>>
>> I can do this tomorrow; I'd certainly be interested in any feedback.
>>
>> thanks,
>>
>> Norman
>>
>>
>>
>> On Tue, Aug 17, 2010 at 7:49 PM, Volker Mische  
>> wrote:
>>> On 08/18/2010 03:26 AM, J Chris Anderson wrote:

 On Aug 16, 2010, at 4:38 PM, Norman Barker wrote:

> Hi,
>
> I have made the changes as recommended, adding a test case
> multiview.js and also adding the userCtx to open the db.
>
> I have also forked geocouch and this is available here
>

 this patch seems important (especially as people are already asking for
 help using it on user@)

 to get it committed, it either must remove the dependency on GeoCouch, or
 become part of CouchDB when (and if) GeoCouch becomes part of CouchDB.

 Is it possible / useful to make a version that doesn't use GeoCouch? And
 then to make the GeoCouch capabilities part of GeoCouch for now?

 Chris

>>>
>>> Hi Norman,
>>>
>>> if the patch is ready for trunk, I'd be happy to move the GeoCouch bits to
>>> GeoCouch itself (as GeoCouch isn't ready for trunk yet).
>>>
>>> Lately I haven't been that responsive when it comes to GeoCouch, but that
>>> will change (in about a month) after holidays and FOSS4G.
>>>
>>> Cheers,
>>>  Volker
>>>
>>
>


Re: splitting the code in different apps or rewrite httpd layer

2010-08-22 Thread Klaus Trainer
> It seems that no one except us is interested in that ;)

I'm indeed very interested in that. However, as I won't be able to
contribute much to the refactoring, I didn't feel obliged to say
anything. Nonetheless, here are my two cents.

Recently, I've spent a few hours diving into the source code of riak and
riak_core. In doing so, I got the impression that with regard to
modularization and organizing the codebase around its abstractions, the
Riak guys are one step ahead in their codebase's evolution.

Note that I've only looked at some parts of Riak's and CouchDB's
codebases, and at this point my knowledge of both is still quite
limited.

Cheers,
Klaus





Re: splitting the code in different apps or rewrite httpd layer

2010-08-22 Thread Mikeal Rogers
One idea that was floated at least once was to replace all the code we
currently have on top of mochiweb directly with webmachine.

This would make extensions and improvements follow the well-defined
patterns already provided by webmachine.

-Mikeal

Sent from my iPhone

On Aug 20, 2010, at 2:09 AM, Benoit Chesneau  wrote:

> Hi all,
> 
> I have been working a lot on the httpd code these days, and the more I
> work on it, the more I think we should refactor it to make it easier to
> hack on and extend. There is a lot of code in one module
> (couch_httpd_db), and recent issues like vhosts and location rewriting
> would be easier to solve if we had a more organized http layer, in my opinion.
> 
> Currently we do (in 1.0.1 or trunk):
> 
> request -> couch_httpd loop -> request_handler -> check vhost and
> eventually rewrite url -> request_int -> request_db -> request
> doc|request _design | request attachment | request global handler |
> request misc handler
> 
> with an extra level: request_design -> rewrite handler |
> show | lists | update | view ... and request_int, which catches all
> errors and is responsible for sending an error response if anything
> happened and wasn't caught in the other layers.
> 
> It could be simpler. We could make it more resource oriented than it is,
> for example: 1 module, 1 resource. Refactoring the httpd code would also
> allow us to reuse more code than we currently do, maybe by wrapping the API.
> 
> How :
> 
> - Some time ago we started porting it to webmachine with davisp,
> but we didn't finish. Maybe now is a good time? Or do we want to follow
> another way?
> 
> - If we go ahead with this refactoring, it could also be a good time to
> split couchdb into different apps: couchdb-core and couchdb, for example
> (maybe couchdb-appengine?), so we could develop each level independently
> and make the code history cleaner.
> 
> 
> Thoughts ?
> 
> 
> - benoit
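The "1 module, 1 resource" layering Benoit sketches above might look roughly like this (a hypothetical Python sketch; none of these names exist in CouchDB): each resource gets its own handler, and an outer layer only routes and owns error responses, much as request_int does today.

```python
# Hypothetical resource-oriented dispatch: handlers per resource kind,
# with the outer layer responsible for routing and error responses.

def db_resource(req):
    return ("db", req)    # e.g. would serve GET /{db}

def doc_resource(req):
    return ("doc", req)   # e.g. would serve GET /{db}/{docid}

ROUTES = {
    ("GET", "db"): db_resource,
    ("GET", "doc"): doc_resource,
}

def handle(method, kind, req):
    try:
        handler = ROUTES[(method, kind)]
        return handler(req)
    except KeyError:
        # The outer layer owns errors, like request_int does today.
        return ("error", "not_found")

print(handle("GET", "doc", {"id": "x"}))  # ('doc', {'id': 'x'})
print(handle("DELETE", "doc", {}))        # ('error', 'not_found')
```

Webmachine structures this the same way, one resource module per URI pattern, which is why it keeps coming up in this thread.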


Re: splitting the code in different apps or rewrite httpd layer

2010-08-22 Thread Filipe David Manana
On Sun, Aug 22, 2010 at 7:37 PM, Benoit Chesneau wrote:

> It seems that no one except us is interested in that ;) Anyway, I'm
> thinking that in the case of indexers it would be very useful to have a
> generic way to add handlers, allowing anyone to plug
> their own stuff into the system.
>

I haven't replied before; however, I'm interested in this as well.
I think right now it makes sense to split out only the couch_httpd_db.erl
functions.

Having modules with REST-like names (couch_db_resource,
couch_doc_resource, etc.) would sound good. I'm open to other suggestions.

>
> Also, refactoring would allow us to add comments to the code, which
> would help with code review.
>

+1

>
> - benoit
>



-- 
Filipe David Manana,
fdman...@gmail.com, fdman...@apache.org

"Reasonable men adapt themselves to the world.
 Unreasonable men adapt the world to themselves.
 That's why all progress depends on unreasonable men."


Re: splitting the code in different apps or rewrite httpd layer

2010-08-22 Thread Benoit Chesneau
On Fri, Aug 20, 2010 at 1:32 PM, Volker Mische  wrote:
> +1 for a refactor.
>
> GeoCouch duplicates a lot of code. I tried to keep the names as similar
> (though meaningful) to the original ones as possible, to see where the
> duplicated code is.
>
> I would love to see that everyone who wants a new kind of indexer just
> needs to provide the data structure, and that all the design document
> handling, updater (group) handling, list functions, etc. is done automatically.
>
> Cheers,
>  Volker
>
It seems that no one except us is interested in that ;) Anyway, I'm
thinking that in the case of indexers it would be very useful to have a
generic way to add handlers, allowing anyone to plug
their own stuff into the system.

Also, refactoring would allow us to add comments to the code, which
would help with code review.

- benoit
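The generic indexer handler Benoit and Volker describe could be sketched as an interface like this (an illustrative Python sketch; the names Indexer and KeyValueIndexer are invented): a plugin supplies only its data structure and update/query functions, while design document handling, the updater, list functions, etc. stay shared.

```python
# Hypothetical pluggable-indexer interface: the core calls update() for
# each changed document and query() to serve reads; the plugin owns only
# its own data structure.

class Indexer:
    """Minimal interface a pluggable indexer would implement."""
    def update(self, doc):
        raise NotImplementedError
    def query(self, **params):
        raise NotImplementedError

class KeyValueIndexer(Indexer):
    def __init__(self):
        self.index = {}
    def update(self, doc):
        self.index[doc["_id"]] = doc.get("value")
    def query(self, **params):
        return self.index.get(params["key"])

idx = KeyValueIndexer()
idx.update({"_id": "doc1", "value": 42})
print(idx.query(key="doc1"))  # 42
```

A view indexer, a spatial indexer (GeoCouch), or an fti indexer would each be one such implementation behind the same shared machinery.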


[jira] Updated: (COUCHDB-864) multipart/related PUT's always close the connection.

2010-08-22 Thread Filipe Manana (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Filipe Manana updated COUCHDB-864:
--

Attachment: mp_doc_put_http_pipeline.patch

So, here is a tested patch that ensures the PUT multipart/related doc API
doesn't read more data than it needs (to avoid consuming data from a
subsequent request in an HTTP pipeline) and allows HTTP requests with
chunked transfer-encoding as well.
This HTTP pipeline issue is important for the replicator, since it uses this 
API very frequently.

Etap test included.

If no one has anything against this or the patch itself, I'll commit it in a 
few days.

> multipart/related PUT's always close the connection.
> 
>
> Key: COUCHDB-864
> URL: https://issues.apache.org/jira/browse/COUCHDB-864
> Project: CouchDB
>  Issue Type: Bug
>  Components: Database Core
>Reporter: Robert Newson
> Attachments: chunked.erl, mp_doc_put_http_pipeline.patch, 
> mp_pipeline.patch
>
>
> I noticed that mochiweb always closes the connection when doing a 
> multipart/related PUT (to insert the JSON document and accompanying 
> attachments in one call). Ultimately it's because we call recv(0) and not 
> recv_body, thus consuming more data than we actually process. Mochiweb 
> notices that there is unread data on the socket and closes the connection.
> This impacts replication with attachments, as I believe they go through this 
> code path (and, thus, are forever reconnecting).
> The code below demonstrates a fix for this issue but isn't good enough for 
> trunk. Adam provided the important process dictionary fix.
> ---
>  src/couchdb/couch_doc.erl  |1 +
>  src/couchdb/couch_httpd_db.erl |   13 +
>  2 files changed, 10 insertions(+), 4 deletions(-)
> diff --git a/src/couchdb/couch_doc.erl b/src/couchdb/couch_doc.erl
> index 5009f8f..f8c874b 100644
> --- a/src/couchdb/couch_doc.erl
> +++ b/src/couchdb/couch_doc.erl
> @@ -455,6 +455,7 @@ doc_from_multi_part_stream(ContentType, DataFun) ->
>  Parser ! {get_doc_bytes, self()},
>  receive 
>  {doc_bytes, DocBytes} ->
> +erlang:put(mochiweb_request_recv, true),
>  Doc = from_json_obj(?JSON_DECODE(DocBytes)),
>  % go through the attachments looking for 'follows' in the data,
>  % replace with function that reads the data from MIME stream.
> diff --git a/src/couchdb/couch_httpd_db.erl b/src/couchdb/couch_httpd_db.erl
> index b0fbe8d..eff7d67 100644
> --- a/src/couchdb/couch_httpd_db.erl
> +++ b/src/couchdb/couch_httpd_db.erl
> @@ -651,12 +651,13 @@ db_doc_req(#httpd{method='PUT'}=Req, Db, DocId) ->
>  } = parse_doc_query(Req),
>  couch_doc:validate_docid(DocId),
>  
> +Len = couch_httpd:header_value(Req,"Content-Length"),
>  Loc = absolute_uri(Req, "/" ++ ?b2l(Db#db.name) ++ "/" ++ ?b2l(DocId)),
>  RespHeaders = [{"Location", Loc}],
>  case couch_util:to_list(couch_httpd:header_value(Req, "Content-Type")) of
>  ("multipart/related;" ++ _) = ContentType ->
>  {ok, Doc0} = couch_doc:doc_from_multi_part_stream(ContentType,
> -fun() -> receive_request_data(Req) end),
> +fun() -> receive_request_data(Req, Len) end),
>  Doc = couch_doc_from_req(Req, DocId, Doc0),
>  update_doc(Req, Db, DocId, Doc, RespHeaders, UpdateType);
>  _Else ->
> @@ -775,9 +776,13 @@ send_docs_multipart(Req, Results, Options) ->
>  couch_httpd:send_chunk(Resp, <<"--">>),
>  couch_httpd:last_chunk(Resp).
>  
> -receive_request_data(Req) ->
> -{couch_httpd:recv(Req, 0), fun() -> receive_request_data(Req) end}.
> -
> +receive_request_data(Req, undefined) ->
> +receive_request_data(Req, 0);
> +receive_request_data(Req, Len) when is_list(Len)->
> +Remaining = list_to_integer(Len),
> +Bin = couch_httpd:recv(Req, Remaining),
> +{Bin, fun() -> receive_request_data(Req, Remaining - iolist_size(Bin)) end}.
> +
>  update_doc_result_to_json({{Id, Rev}, Error}) ->
>  {_Code, Err, Msg} = couch_httpd:error_info(Error),
>  {[{id, Id}, {rev, couch_doc:rev_to_str(Rev)},
> -- 
> 1.7.2.2
> Umbra

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Postgresql to couchdb NetBeans Plugins

2010-08-22 Thread till
Hey,

and I was wondering what it means when you say, "export tables". ;-)
For example, do you export the data contained in those tables and
build some views to find the data again, or what's the plan?

Just curious.

Till
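For illustration, "exporting tables" could be as simple as mapping each relational row to a document tagged with its source table, so a view can find the data again (a hypothetical Python sketch; row_to_doc is an invented name, not part of the plugin):

```python
# Hypothetical row-to-document mapping for a SQL-to-CouchDB export.

def row_to_doc(table, columns, row):
    doc = dict(zip(columns, row))
    doc["type"] = table  # lets a map function select docs by source table
    return doc

doc = row_to_doc("users", ["id", "name"], [1, "ada"])
print(doc)  # {'id': 1, 'name': 'ada', 'type': 'users'}

# A matching CouchDB map function (JavaScript, shown as a string):
MAP_FN = "function(doc){ if (doc.type == 'users') emit(doc.id, doc); }"
```

Whether the plugin builds such views automatically is exactly what's being asked here.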

On Sun, Aug 22, 2010 at 7:20 PM, aristides villarreal  wrote:
> At the moment it is a basic plugin (alpha version) that exports tables
> from postgresql.
> I want to expand the functionality to export mysql tables, to include
> views, and to support interacting with CouchDB in the plugin.
> On Sun, Aug 22, 2010 at 11:52 AM, till  wrote:
>>
>> Would you share in English what this does (exactly)? From the blog
>> post - you analyze the structure and export them to CouchDB. Do you
>> also create views or something?
>>
>> My Spanish is a bit rusty. :-)
>>
>> Till
>>
>> On Sun, Aug 22, 2010 at 5:24 PM, aristides villarreal 
>> wrote:
>> > I developed a basic plugin for NetBeans to export tables from a
>> > PostgreSQL database to CouchDB.
>> > More information is in my blog:
>> >
>> > http://avbravo.blogspot.com/2010/08/p2cnb-postgresql-to-couchdb-netbeans.html
>> > --
>> >
>> > -
>> > Member of NetBeans Dream Team
>> > http://www.netbeans.org/community/contribute/dreamteam.html
>> > http://dreamteam.netbeans.org/
>> > http://wiki.netbeans.org/wiki/view/SpanishTranslation
>> > NetBeans Community Docs Evangelist
>> >
>> > http://nb-community-docs.blogspot.com/2008/09/welcome-community-docs-evangelists.html
>> > http://avbravo.blogspot.com
>> >
>> >
>
>
>
> --
> -
> Member of NetBeans Dream Team
> http://www.netbeans.org/community/contribute/dreamteam.html
> http://dreamteam.netbeans.org/
> http://wiki.netbeans.org/wiki/view/SpanishTranslation
> NetBeans Community Docs Evangelist
> http://nb-community-docs.blogspot.com/2008/09/welcome-community-docs-evangelists.html
> http://avbravo.blogspot.com
>
>


[jira] Updated: (COUCHDB-866) cookie_auth test fail

2010-08-22 Thread christian kirkegaard (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

christian kirkegaard updated COUCHDB-866:
-

Description: 
The cookie_auth test fails every time with the same error. I have cookies
enabled, but nothing seems to fix this. I've even tried different browsers,
with much the same report.

firefox,
{"message":"ddoc is 
null","fileName":"http://127.0.0.1:5984/_utils/script/test/cookie_auth.js","lineNumber":41,"stack":";()@http://127.0.0.1:5984/_utils/script/test/cookie_auth.js:41\u000arun_on_modified_server([object
 Array],(function () {try {var usersDb = new CouchDB(\"test_suite_users\", 
{'X-Couch-Full-Commit': \"false\"});usersDb.deleteDb();usersDb.createDb();var 
ddoc = usersDb.open(\"_design/_auth\");T(ddoc.validate_doc_update);var password 
= \"3.141592653589\";var jasonUserDoc = CouchDB.prepareUserDoc({name: \"Jason 
Davies\", roles: [\"dev\"]}, password);T(usersDb.save(jasonUserDoc).ok);var 
checkDoc = usersDb.open(jasonUserDoc._id);T(checkDoc.name == \"Jason 
Davies\");var jchrisUserDoc = CouchDB.prepareUserDoc({name: 
\"jch...@apache.org\"}, \"funnybone\");T(usersDb.save(jchrisUserDoc).ok);var 
duplicateJchrisDoc = CouchDB.prepareUserDoc({name: \"jch...@apache.org\"}, 
\"eh, Boo-Boo?\");try {usersDb.save(duplicateJchrisDoc);T(false && \"Can't 
create duplicate user names. Should have thrown an error.\");} catch (e) 
{T(e.error == \"conflict\");T(usersDb.last_req.status == 409);}var 
underscoreUserDoc = CouchDB.prepareUserDoc({name: \"_why\"}, 
\"copperfield\");try {usersDb.save(underscoreUserDoc);T(false &&  \"Can't 
create underscore user names. Should have thrown an error.\");} catch (e) 
{T(e.error == \"forbidden\");T(usersDb.last_req.status == 403);}var badIdDoc = 
CouchDB.prepareUserDoc({name: \"foo\"}, \"bar\");badIdDoc._id = 
\"org.apache.couchdb:w00x\";try {usersDb.save(badIdDoc);T(false && \"Can't 
create malformed docids. Should have thrown an error.\");} catch (e) {T(e.error 
== \"forbidden\");T(usersDb.last_req.status == 403);}T(CouchDB.login(\"Jason 
Davies\", password).ok);T(CouchDB.session().userCtx.name == \"Jason 
Davies\");jasonUserDoc.foo = 
2;T(usersDb.save(jasonUserDoc).ok);T(CouchDB.session().userCtx.roles.indexOf(\"_admin\")
 == -1);try {usersDb.deleteDoc(jchrisUserDoc);T(false && \"Can't delete other 
users docs. Should have thrown an error.\");} catch (e) {T(e.error == 
\"forbidden\");T(usersDb.last_req.status == 403);}T(!CouchDB.login(\"Jason 
Davies\", \"2.71828\").ok);T(!CouchDB.login(\"Robert Allen Zimmerman\", 
\"d00d\").ok);T(CouchDB.session().userCtx.name != \"Jason Davies\");xhr = 
CouchDB.request(\"POST\", \"/_session?next=/\", {headers: {'Content-Type': 
\"application/x-www-form-urlencoded\"}, body: \"name=Jason%20Davies&password=\" 
+ encodeURIComponent(password)});if (xhr.status == 200) 
{T(/Welcome/.test(xhr.responseText));} else {T(xhr.status == 
302);T(xhr.getResponseHeader(\"Location\"));}T(CouchDB.login(\"jch...@apache.org\",
 \"funnybone\").ok);T(CouchDB.session().userCtx.name == 
\"jch...@apache.org\");T(CouchDB.session().userCtx.roles.length == 
0);jasonUserDoc.foo = 3;try {usersDb.save(jasonUserDoc);T(false && \"Can't 
update someone else's user doc. Should have thrown an error.\");} catch (e) 
{T(e.error == \"forbidden\");T(usersDb.last_req.status == 
403);}jchrisUserDoc.roles = [\"foo\"];try {usersDb.save(jchrisUserDoc);T(false 
&& \"Can't set roles unless you are admin. Should have thrown an error.\");} 
catch (e) {T(e.error == \"forbidden\");T(usersDb.last_req.status == 
403);}T(CouchDB.logout().ok);T(CouchDB.session().userCtx.roles[0] == 
\"_admin\");jchrisUserDoc.foo = 
[\"foo\"];T(usersDb.save(jchrisUserDoc).ok);jchrisUserDoc.roles = 
[\"_bar\"];try {usersDb.save(jchrisUserDoc);T(false && \"Can't add system roles 
to user's db. Should have thrown an error.\");} catch (e) {T(e.error == 
\"forbidden\");T(usersDb.last_req.status == 
403);}T(CouchDB.login(\"jch...@apache.org\", 
\"funnybone\").ok);T(CouchDB.session().userCtx.name == 
\"jch...@apache.org\");T(CouchDB.session().userCtx.roles.indexOf(\"_admin\") == 
-1);T(CouchDB.session().userCtx.roles.indexOf(\"foo\") != 
-1);T(CouchDB.logout().ok);T(CouchDB.session().userCtx.roles[0] == 
\"_admin\");T(CouchDB.session().userCtx.name == 
null);run_on_modified_server([{section: \"admins\", key: \"jch...@apache.org\", 
value: \"funnybone\"}], function () {T(CouchDB.login(\"jch...@apache.org\", 
\"funnybone\").ok);T(CouchDB.session().userCtx.name == 
\"jch...@apache.org\");T(CouchDB.session().userCtx.roles.indexOf(\"_admin\") != 
-1);T(CouchDB.session().userCtx.roles.indexOf(\"foo\") != -1);jchrisUserDoc = 
usersDb.open(jchrisUserDoc._id);delete jchrisUserDoc.salt;delete 
jchrisUserDoc.password_sha;T(usersDb.save(jchrisUserDoc).ok);T(CouchDB.logout().ok);T(CouchDB.login(\"jch...@apache.org\",
 \"funnybone\").ok);var s = CouchDB.session();T(s.userCtx.name == 
\"jch...@apache.org\");T(s.userCtx.

[jira] Created: (COUCHDB-866) cookie_auth test fail

2010-08-22 Thread christian kirkegaard (JIRA)
cookie_auth test fail
-

 Key: COUCHDB-866
 URL: https://issues.apache.org/jira/browse/COUCHDB-866
 Project: CouchDB
  Issue Type: Bug
  Components: Test Suite
Affects Versions: 1.0.1, 1.0
 Environment: Firefox 3.6.6 without any extensions
Mac OSX 10.6.4
Reporter: christian kirkegaard
Priority: Minor


The cookie_auth test fails every time with the same error. I have cookies
enabled, but nothing seems to fix this. I've even tried different browsers,
with the same report.

{"message":"ddoc is 
null","fileName":"http://127.0.0.1:5984/_utils/script/test/cookie_auth.js","lineNumber":41,"stack":";()@http://127.0.0.1:5984/_utils/script/test/cookie_auth.js:41\u000arun_on_modified_server([object
 Array],(function () {try {var usersDb = new CouchDB(\"test_suite_users\", 
{'X-Couch-Full-Commit': \"false\"});usersDb.deleteDb();usersDb.createDb();var 
ddoc = usersDb.open(\"_design/_auth\");T(ddoc.validate_doc_update);var password 
= \"3.141592653589\";var jasonUserDoc = CouchDB.prepareUserDoc({name: \"Jason 
Davies\", roles: [\"dev\"]}, password);T(usersDb.save(jasonUserDoc).ok);var 
checkDoc = usersDb.open(jasonUserDoc._id);T(checkDoc.name == \"Jason 
Davies\");var jchrisUserDoc = CouchDB.prepareUserDoc({name: 
\"jch...@apache.org\"}, \"funnybone\");T(usersDb.save(jchrisUserDoc).ok);var 
duplicateJchrisDoc = CouchDB.prepareUserDoc({name: \"jch...@apache.org\"}, 
\"eh, Boo-Boo?\");try {usersDb.save(duplicateJchrisDoc);T(false && \"Can't 
create duplicate user names. Should have thrown an error.\");} catch (e) 
{T(e.error == \"conflict\");T(usersDb.last_req.status == 409);}var 
underscoreUserDoc = CouchDB.prepareUserDoc({name: \"_why\"}, 
\"copperfield\");try {usersDb.save(underscoreUserDoc);T(false &&  \"Can't 
create underscore user names. Should have thrown an error.\");} catch (e) 
{T(e.error == \"forbidden\");T(usersDb.last_req.status == 403);}var badIdDoc = 
CouchDB.prepareUserDoc({name: \"foo\"}, \"bar\");badIdDoc._id = 
\"org.apache.couchdb:w00x\";try {usersDb.save(badIdDoc);T(false && \"Can't 
create malformed docids. Should have thrown an error.\");} catch (e) {T(e.error 
== \"forbidden\");T(usersDb.last_req.status == 403);}T(CouchDB.login(\"Jason 
Davies\", password).ok);T(CouchDB.session().userCtx.name == \"Jason 
Davies\");jasonUserDoc.foo = 
2;T(usersDb.save(jasonUserDoc).ok);T(CouchDB.session().userCtx.roles.indexOf(\"_admin\")
 == -1);try {usersDb.deleteDoc(jchrisUserDoc);T(false && \"Can't delete other 
users docs. Should have thrown an error.\");} catch (e) {T(e.error == 
\"forbidden\");T(usersDb.last_req.status == 403);}T(!CouchDB.login(\"Jason 
Davies\", \"2.71828\").ok);T(!CouchDB.login(\"Robert Allen Zimmerman\", 
\"d00d\").ok);T(CouchDB.session().userCtx.name != \"Jason Davies\");xhr = 
CouchDB.request(\"POST\", \"/_session?next=/\", {headers: {'Content-Type': 
\"application/x-www-form-urlencoded\"}, body: \"name=Jason%20Davies&password=\" 
+ encodeURIComponent(password)});if (xhr.status == 200) 
{T(/Welcome/.test(xhr.responseText));} else {T(xhr.status == 
302);T(xhr.getResponseHeader(\"Location\"));}T(CouchDB.login(\"jch...@apache.org\",
 \"funnybone\").ok);T(CouchDB.session().userCtx.name == 
\"jch...@apache.org\");T(CouchDB.session().userCtx.roles.length == 
0);jasonUserDoc.foo = 3;try {usersDb.save(jasonUserDoc);T(false && \"Can't 
update someone else's user doc. Should have thrown an error.\");} catch (e) 
{T(e.error == \"forbidden\");T(usersDb.last_req.status == 
403);}jchrisUserDoc.roles = [\"foo\"];try {usersDb.save(jchrisUserDoc);T(false 
&& \"Can't set roles unless you are admin. Should have thrown an error.\");} 
catch (e) {T(e.error == \"forbidden\");T(usersDb.last_req.status == 
403);}T(CouchDB.logout().ok);T(CouchDB.session().userCtx.roles[0] == 
\"_admin\");jchrisUserDoc.foo = 
[\"foo\"];T(usersDb.save(jchrisUserDoc).ok);jchrisUserDoc.roles = 
[\"_bar\"];try {usersDb.save(jchrisUserDoc);T(false && \"Can't add system roles 
to user's db. Should have thrown an error.\");} catch (e) {T(e.error == 
\"forbidden\");T(usersDb.last_req.status == 
403);}T(CouchDB.login(\"jch...@apache.org\", 
\"funnybone\").ok);T(CouchDB.session().userCtx.name == 
\"jch...@apache.org\");T(CouchDB.session().userCtx.roles.indexOf(\"_admin\") == 
-1);T(CouchDB.session().userCtx.roles.indexOf(\"foo\") != 
-1);T(CouchDB.logout().ok);T(CouchDB.session().userCtx.roles[0] == 
\"_admin\");T(CouchDB.session().userCtx.name == 
null);run_on_modified_server([{section: \"admins\", key: \"jch...@apache.org\", 
value: \"funnybone\"}], function () {T(CouchDB.login(\"jch...@apache.org\", 
\"funnybone\").ok);T(CouchDB.session().userCtx.name == 
\"jch...@apache.org\");T(CouchDB.session().userCtx.roles.indexOf(\"_admin\") != 
-1);T(CouchDB.session().userCtx.roles.indexOf(\"foo\") != -1);jchrisUserDoc = 
usersDb.open(jchrisUserDoc._id);delete jchrisUserDoc.salt;delete 
jchrisUserDoc.password_sha;T(usersDb.save(jchrisUse

Re: Postgresql to couchdb NetBeans Plugins

2010-08-22 Thread aristides villarreal
At the moment it is a basic plugin (alpha version) that exports tables from
PostgreSQL.
I want to expand the functionality to export MySQL tables, include views,
and add support for interacting with CouchDB in the plugin.
On Sun, Aug 22, 2010 at 11:52 AM, till  wrote:

> Would you share in English what this does (exactly)? From the blog
> post - you analyze the structure and export them to CouchDB. Do you
> also create views or something?
>
> My Spanish is a bit rusty. :-)
>
> Till
>
> On Sun, Aug 22, 2010 at 5:24 PM, aristides villarreal 
> wrote:
> > I developed a basic plugin for netbeans to export tables from a
> PostgreSQL
> > database CouchDB
> > More information in my blog
> >
> http://avbravo.blogspot.com/2010/08/p2cnb-postgresql-to-couchdb-netbeans.html
> > --
> > -
> > Member of NetBeans Dream Team
> > http://www.netbeans.org/community/contribute/dreamteam.html
> > http://dreamteam.netbeans.org/
> > http://wiki.netbeans.org/wiki/view/SpanishTranslation
> > NetBeans Community Docs Evangelist
> >
> http://nb-community-docs.blogspot.com/2008/09/welcome-community-docs-evangelists.html
> > http://avbravo.blogspot.com
> >
> >
>



-- 
*-*
Member of NetBeans Dream Team
http://www.netbeans.org/community/contribute/dreamteam.html
http://dreamteam.netbeans.org/
http://wiki.netbeans.org/wiki/view/SpanishTranslation
NetBeans Community Docs Evangelist
http://nb-community-docs.blogspot.com/2008/09/welcome-community-docs-evangelists.html
http://avbravo.blogspot.com


Re: COUCHDB-864 and 1.0.x

2010-08-22 Thread Klaus Trainer
+1
As soon as there's an adequate test.
I'm quite curious about that test ;).

-Klaus


On Sun, 2010-08-22 at 14:34 +0100, Robert Newson wrote:
> All,
> 
> I committed a one-line fix (suggested by Adam) to trunk which now
> allows multipart/related PUT's to succeed without mochiweb closing the
> connection. At base, mochiweb uses a process dictionary variable to
> keep track of whether it ought to close the connection or not. Because
> we use the request from a different process, part of that handling is
> skipped, causing mochiweb to close a connection when it shouldn't. The
> fix (or, less charitably, *hack*) fixes that issue.
> 
> I've struggled to write an etap test that demonstrates the original
> problem (the bug was discovered when doing performance testing from a
> node.js-based test program) and I feel I owe one. I would like this
> patch to land on 1.0.x where I think a confirming test is mandatory.
> 
> I'll continue trying to write the test and will put it to trunk once I
> have it. In the meantime, I would like to hear from others about
> whether this should go to 1.0.x? It allows efficient and atomic
> insertion of documents with attachments, which is an important feature
> for me, though to date it is only used by the replicator.
> 
> B.




Re: Postgresql to couchdb NetBeans Plugins

2010-08-22 Thread till
Would you share in English what this does (exactly)? From the blog
post - you analyze the structure and export them to CouchDB. Do you
also create views or something?

My Spanish is a bit rusty. :-)

Till

On Sun, Aug 22, 2010 at 5:24 PM, aristides villarreal  wrote:
> I developed a basic plugin for netbeans to export tables from a PostgreSQL
> database CouchDB
> More information in my blog
> http://avbravo.blogspot.com/2010/08/p2cnb-postgresql-to-couchdb-netbeans.html
> --
> -
> Member of NetBeans Dream Team
> http://www.netbeans.org/community/contribute/dreamteam.html
> http://dreamteam.netbeans.org/
> http://wiki.netbeans.org/wiki/view/SpanishTranslation
> NetBeans Community Docs Evangelist
> http://nb-community-docs.blogspot.com/2008/09/welcome-community-docs-evangelists.html
> http://avbravo.blogspot.com
>
>


Postgresql to couchdb NetBeans Plugins

2010-08-22 Thread aristides villarreal
I developed a basic plugin for NetBeans to export tables from a PostgreSQL
database to CouchDB.

More information is on my blog:
http://avbravo.blogspot.com/2010/08/p2cnb-postgresql-to-couchdb-netbeans.html
-- 
*-*
Member of NetBeans Dream Team
http://www.netbeans.org/community/contribute/dreamteam.html
http://dreamteam.netbeans.org/
http://wiki.netbeans.org/wiki/view/SpanishTranslation
NetBeans Community Docs Evangelist
http://nb-community-docs.blogspot.com/2008/09/welcome-community-docs-evangelists.html
http://avbravo.blogspot.com


[jira] Updated: (COUCHDB-864) multipart/related PUT's always close the connection.

2010-08-22 Thread Filipe Manana (JIRA)

 [ 
https://issues.apache.org/jira/browse/COUCHDB-864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Filipe Manana updated COUCHDB-864:
--

Attachment: mp_pipeline.patch

As I was discussing with Robert on IRC, the following patch is likely to fix 
the HTTP pipelining issue for both chunked and identity transfer-encoding, 
where the problem is reading and discarding data that belongs to subsequent 
requests in the pipeline.

Hope Robert can test this with his node.js code and provide feedback.

Thanks Robert and Adam.

> multipart/related PUT's always close the connection.
> 
>
> Key: COUCHDB-864
> URL: https://issues.apache.org/jira/browse/COUCHDB-864
> Project: CouchDB
>  Issue Type: Bug
>  Components: Database Core
>Reporter: Robert Newson
> Attachments: chunked.erl, mp_pipeline.patch
>
>
> I noticed that mochiweb always closes the connection when doing a 
> multipart/related PUT (to insert the JSON document and accompanying 
> attachments in one call). Ultimately it's because we call recv(0) and not 
> recv_body, thus consuming more data than we actually process. Mochiweb 
> notices that there is unread data on the socket and closes the connection.
> This impacts replication with attachments, as I believe they go through this 
> code path (and, thus, are forever reconnecting).
> The code below demonstrates a fix for this issue but isn't good enough for 
> trunk. Adam provided the important process dictionary fix.
> ---
>  src/couchdb/couch_doc.erl  |1 +
>  src/couchdb/couch_httpd_db.erl |   13 +
>  2 files changed, 10 insertions(+), 4 deletions(-)
> diff --git a/src/couchdb/couch_doc.erl b/src/couchdb/couch_doc.erl
> index 5009f8f..f8c874b 100644
> --- a/src/couchdb/couch_doc.erl
> +++ b/src/couchdb/couch_doc.erl
> @@ -455,6 +455,7 @@ doc_from_multi_part_stream(ContentType, DataFun) ->
>  Parser ! {get_doc_bytes, self()},
>  receive 
>  {doc_bytes, DocBytes} ->
> +erlang:put(mochiweb_request_recv, true),
>  Doc = from_json_obj(?JSON_DECODE(DocBytes)),
>  % go through the attachments looking for 'follows' in the data,
>  % replace with function that reads the data from MIME stream.
> diff --git a/src/couchdb/couch_httpd_db.erl b/src/couchdb/couch_httpd_db.erl
> index b0fbe8d..eff7d67 100644
> --- a/src/couchdb/couch_httpd_db.erl
> +++ b/src/couchdb/couch_httpd_db.erl
> @@ -651,12 +651,13 @@ db_doc_req(#httpd{method='PUT'}=Req, Db, DocId) ->
>  } = parse_doc_query(Req),
>  couch_doc:validate_docid(DocId),
>  
> +Len = couch_httpd:header_value(Req,"Content-Length"),
>  Loc = absolute_uri(Req, "/" ++ ?b2l(Db#db.name) ++ "/" ++ ?b2l(DocId)),
>  RespHeaders = [{"Location", Loc}],
>  case couch_util:to_list(couch_httpd:header_value(Req, "Content-Type")) of
>  ("multipart/related;" ++ _) = ContentType ->
>  {ok, Doc0} = couch_doc:doc_from_multi_part_stream(ContentType,
> -fun() -> receive_request_data(Req) end),
> +fun() -> receive_request_data(Req, Len) end),
>  Doc = couch_doc_from_req(Req, DocId, Doc0),
>  update_doc(Req, Db, DocId, Doc, RespHeaders, UpdateType);
>  _Else ->
> @@ -775,9 +776,13 @@ send_docs_multipart(Req, Results, Options) ->
>  couch_httpd:send_chunk(Resp, <<"--">>),
>  couch_httpd:last_chunk(Resp).
>  
> -receive_request_data(Req) ->
> -{couch_httpd:recv(Req, 0), fun() -> receive_request_data(Req) end}.
> -
> +receive_request_data(Req, undefined) ->
> +receive_request_data(Req, 0);
> +receive_request_data(Req, Len) when is_list(Len)->
> +Remaining = list_to_integer(Len),
> +Bin = couch_httpd:recv(Req, Remaining),
> +{Bin, fun() -> receive_request_data(Req, Remaining - iolist_size(Bin)) 
> end}.
> +
>  update_doc_result_to_json({{Id, Rev}, Error}) ->
>  {_Code, Err, Msg} = couch_httpd:error_info(Error),
>  {[{id, Id}, {rev, couch_doc:rev_to_str(Rev)},
> -- 
> 1.7.2.2
> Umbra

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



COUCHDB-864 and 1.0.x

2010-08-22 Thread Robert Newson
All,

I committed a one-line fix (suggested by Adam) to trunk which now
allows multipart/related PUT's to succeed without mochiweb closing the
connection. At base, mochiweb uses a process dictionary variable to
keep track of whether it ought to close the connection or not. Because
we use the request from a different process, part of that handling is
skipped, causing mochiweb to close a connection when it shouldn't. The
fix (or, less charitably, *hack*) fixes that issue.

I've struggled to write an etap test that demonstrates the original
problem (the bug was discovered when doing performance testing from a
node.js-based test program) and I feel I owe one. I would like this
patch to land on 1.0.x where I think a confirming test is mandatory.

I'll continue trying to write the test and will put it to trunk once I
have it. In the meantime, I would like to hear from others about
whether this should go to 1.0.x? It allows efficient and atomic
insertion of documents with attachments, which is an important feature
for me, though to date it is only used by the replicator.

B.
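The close-or-keep-alive decision described above can be modeled in a few lines. This is a toy Python sketch of the general HTTP rule mochiweb is enforcing, not mochiweb's actual API (the `Request` and `should_close` names are illustrative):

```python
class Request:
    """Simulated request whose body is drained incrementally."""
    def __init__(self, content_length):
        self.content_length = content_length
        self.bytes_read = 0

    def recv(self, n):
        # Read up to n bytes of the body (simulated).
        n = min(n, self.content_length - self.bytes_read)
        self.bytes_read += n
        return b"x" * n

def should_close(req):
    # Leftover unread body bytes would be misread as the start of the next
    # request on the connection, so the only safe option is to close.
    return req.bytes_read < req.content_length

req = Request(content_length=10)
req.recv(4)
assert should_close(req)      # body not fully drained -> must close
req.recv(100)                 # drain the rest
assert not should_close(req)  # safe to keep the connection open
```

The bug was that the body was consumed in a different process, so mochiweb's own bookkeeping (its process-dictionary flag) never saw the reads and always chose to close.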


[jira] Commented: (COUCHDB-864) multipart/related PUT's always close the connection.

2010-08-22 Thread Filipe Manana (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12901151#action_12901151
 ] 

Filipe Manana commented on COUCHDB-864:
---

That test I did was without the line that could modify should_close/0 behaviour 
(the put into the process dictionary).

What I meant to demonstrate is that that line doesn't actually change the 
behaviour: should_close/0 returns false in both cases (with and without the 
patch).


> multipart/related PUT's always close the connection.
> 
>
> Key: COUCHDB-864
> URL: https://issues.apache.org/jira/browse/COUCHDB-864
> Project: CouchDB
>  Issue Type: Bug
>  Components: Database Core
>Reporter: Robert Newson
> Attachments: chunked.erl
>
>
> I noticed that mochiweb always closes the connection when doing a 
> multipart/related PUT (to insert the JSON document and accompanying 
> attachments in one call). Ultimately it's because we call recv(0) and not 
> recv_body, thus consuming more data than we actually process. Mochiweb 
> notices that there is unread data on the socket and closes the connection.
> This impacts replication with attachments, as I believe they go through this 
> code path (and, thus, are forever reconnecting).
> The code below demonstrates a fix for this issue but isn't good enough for 
> trunk. Adam provided the important process dictionary fix.
> ---
>  src/couchdb/couch_doc.erl  |1 +
>  src/couchdb/couch_httpd_db.erl |   13 +
>  2 files changed, 10 insertions(+), 4 deletions(-)
> diff --git a/src/couchdb/couch_doc.erl b/src/couchdb/couch_doc.erl
> index 5009f8f..f8c874b 100644
> --- a/src/couchdb/couch_doc.erl
> +++ b/src/couchdb/couch_doc.erl
> @@ -455,6 +455,7 @@ doc_from_multi_part_stream(ContentType, DataFun) ->
>  Parser ! {get_doc_bytes, self()},
>  receive 
>  {doc_bytes, DocBytes} ->
> +erlang:put(mochiweb_request_recv, true),
>  Doc = from_json_obj(?JSON_DECODE(DocBytes)),
>  % go through the attachments looking for 'follows' in the data,
>  % replace with function that reads the data from MIME stream.
> diff --git a/src/couchdb/couch_httpd_db.erl b/src/couchdb/couch_httpd_db.erl
> index b0fbe8d..eff7d67 100644
> --- a/src/couchdb/couch_httpd_db.erl
> +++ b/src/couchdb/couch_httpd_db.erl
> @@ -651,12 +651,13 @@ db_doc_req(#httpd{method='PUT'}=Req, Db, DocId) ->
>  } = parse_doc_query(Req),
>  couch_doc:validate_docid(DocId),
>  
> +Len = couch_httpd:header_value(Req,"Content-Length"),
>  Loc = absolute_uri(Req, "/" ++ ?b2l(Db#db.name) ++ "/" ++ ?b2l(DocId)),
>  RespHeaders = [{"Location", Loc}],
>  case couch_util:to_list(couch_httpd:header_value(Req, "Content-Type")) of
>  ("multipart/related;" ++ _) = ContentType ->
>  {ok, Doc0} = couch_doc:doc_from_multi_part_stream(ContentType,
> -fun() -> receive_request_data(Req) end),
> +fun() -> receive_request_data(Req, Len) end),
>  Doc = couch_doc_from_req(Req, DocId, Doc0),
>  update_doc(Req, Db, DocId, Doc, RespHeaders, UpdateType);
>  _Else ->
> @@ -775,9 +776,13 @@ send_docs_multipart(Req, Results, Options) ->
>  couch_httpd:send_chunk(Resp, <<"--">>),
>  couch_httpd:last_chunk(Resp).
>  
> -receive_request_data(Req) ->
> -{couch_httpd:recv(Req, 0), fun() -> receive_request_data(Req) end}.
> -
> +receive_request_data(Req, undefined) ->
> +receive_request_data(Req, 0);
> +receive_request_data(Req, Len) when is_list(Len)->
> +Remaining = list_to_integer(Len),
> +Bin = couch_httpd:recv(Req, Remaining),
> +{Bin, fun() -> receive_request_data(Req, Remaining - iolist_size(Bin)) 
> end}.
> +
>  update_doc_result_to_json({{Id, Rev}, Error}) ->
>  {_Code, Err, Msg} = couch_httpd:error_info(Error),
>  {[{id, Id}, {rev, couch_doc:rev_to_str(Rev)},
> -- 
> 1.7.2.2
> Umbra

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COUCHDB-864) multipart/related PUT's always close the connection.

2010-08-22 Thread Robert Newson (JIRA)

[ 
https://issues.apache.org/jira/browse/COUCHDB-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12901147#action_12901147
 ] 

Robert Newson commented on COUCHDB-864:
---

I meant to clarify that, yes, this one-line change alters the return value of 
should_close(). The real question is whether you can then successfully send 
another multipart/related PUT, and I think the answer there is no for chunked 
encoding and yes for content-length (identity) encoding.

multipart/related PUT doesn't work as well as a normal PUT, and I think it 
should. The receive_request_data() method is used only for these requests and 
seemingly does not do as comprehensive a job as the more usual code path.

I'm working on an etap test to demonstrate this problem.
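For context, a sketch of what the body of such a multipart/related PUT looks like, assuming CouchDB's documented layout (a JSON document part whose attachment stubs carry "follows": true, followed by one bare part per attachment, in stub order); the helper name is mine:

```python
import json

def multipart_related_body(doc, attachments, boundary):
    # doc: the JSON document; attachments: {name: (content_type, bytes)}.
    doc = dict(doc)
    doc["_attachments"] = {
        name: {"follows": True,
               "content_type": ctype,
               "length": len(data)}
        for name, (ctype, data) in attachments.items()
    }
    # First part: the JSON document with its attachment stubs.
    parts = [b"--" + boundary + b"\r\n"
             b"Content-Type: application/json\r\n\r\n"
             + json.dumps(doc).encode() + b"\r\n"]
    # Remaining parts: raw attachment bodies, no per-part headers.
    for _name, (_ctype, data) in attachments.items():
        parts.append(b"--" + boundary + b"\r\n\r\n" + data + b"\r\n")
    parts.append(b"--" + boundary + b"--")
    return b"".join(parts)

body = multipart_related_body(
    {"_id": "doc1"},
    {"hello.txt": ("text/plain", b"hi there")},
    b"abc123")
assert body.count(b"--abc123") == 3
assert b'"follows": true' in body
```

PUT this to /db/docid with `Content-Type: multipart/related; boundary=abc123` and a correct Content-Length; with identity encoding the server can read exactly that many bytes, which is what makes pipelining a second such PUT feasible.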

> multipart/related PUT's always close the connection.
> 
>
> Key: COUCHDB-864
> URL: https://issues.apache.org/jira/browse/COUCHDB-864
> Project: CouchDB
>  Issue Type: Bug
>  Components: Database Core
>Reporter: Robert Newson
> Attachments: chunked.erl
>
>
> I noticed that mochiweb always closes the connection when doing a 
> multipart/related PUT (to insert the JSON document and accompanying 
> attachments in one call). Ultimately it's because we call recv(0) and not 
> recv_body, thus consuming more data than we actually process. Mochiweb 
> notices that there is unread data on the socket and closes the connection.
> This impacts replication with attachments, as I believe they go through this 
> code path (and, thus, are forever reconnecting).
> The code below demonstrates a fix for this issue but isn't good enough for 
> trunk. Adam provided the important process dictionary fix.
> ---
>  src/couchdb/couch_doc.erl  |1 +
>  src/couchdb/couch_httpd_db.erl |   13 +
>  2 files changed, 10 insertions(+), 4 deletions(-)
> diff --git a/src/couchdb/couch_doc.erl b/src/couchdb/couch_doc.erl
> index 5009f8f..f8c874b 100644
> --- a/src/couchdb/couch_doc.erl
> +++ b/src/couchdb/couch_doc.erl
> @@ -455,6 +455,7 @@ doc_from_multi_part_stream(ContentType, DataFun) ->
>  Parser ! {get_doc_bytes, self()},
>  receive 
>  {doc_bytes, DocBytes} ->
> +erlang:put(mochiweb_request_recv, true),
>  Doc = from_json_obj(?JSON_DECODE(DocBytes)),
>  % go through the attachments looking for 'follows' in the data,
>  % replace with function that reads the data from MIME stream.
> diff --git a/src/couchdb/couch_httpd_db.erl b/src/couchdb/couch_httpd_db.erl
> index b0fbe8d..eff7d67 100644
> --- a/src/couchdb/couch_httpd_db.erl
> +++ b/src/couchdb/couch_httpd_db.erl
> @@ -651,12 +651,13 @@ db_doc_req(#httpd{method='PUT'}=Req, Db, DocId) ->
>  } = parse_doc_query(Req),
>  couch_doc:validate_docid(DocId),
>  
> +Len = couch_httpd:header_value(Req,"Content-Length"),
>  Loc = absolute_uri(Req, "/" ++ ?b2l(Db#db.name) ++ "/" ++ ?b2l(DocId)),
>  RespHeaders = [{"Location", Loc}],
>  case couch_util:to_list(couch_httpd:header_value(Req, "Content-Type")) of
>  ("multipart/related;" ++ _) = ContentType ->
>  {ok, Doc0} = couch_doc:doc_from_multi_part_stream(ContentType,
> -fun() -> receive_request_data(Req) end),
> +fun() -> receive_request_data(Req, Len) end),
>  Doc = couch_doc_from_req(Req, DocId, Doc0),
>  update_doc(Req, Db, DocId, Doc, RespHeaders, UpdateType);
>  _Else ->
> @@ -775,9 +776,13 @@ send_docs_multipart(Req, Results, Options) ->
>  couch_httpd:send_chunk(Resp, <<"--">>),
>  couch_httpd:last_chunk(Resp).
>  
> -receive_request_data(Req) ->
> -{couch_httpd:recv(Req, 0), fun() -> receive_request_data(Req) end}.
> -
> +receive_request_data(Req, undefined) ->
> +receive_request_data(Req, 0);
> +receive_request_data(Req, Len) when is_list(Len)->
> +Remaining = list_to_integer(Len),
> +Bin = couch_httpd:recv(Req, Remaining),
> +{Bin, fun() -> receive_request_data(Req, Remaining - iolist_size(Bin)) 
> end}.
> +
>  update_doc_result_to_json({{Id, Rev}, Error}) ->
>  {_Code, Err, Msg} = couch_httpd:error_info(Error),
>  {[{id, Id}, {rev, couch_doc:rev_to_str(Rev)},
> -- 
> 1.7.2.2
> Umbra

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.