Re: [DISCUSS] Rename default branch to `main`

2020-09-16 Thread Joan Touzet




On 16/09/2020 10:57, Paul Davis wrote:

Hey all,

Here's a list of all CouchDB related repositories with a few quick 
stats and my read on their status and requirements. Can I get some 
eyeballs on this to double check before I submit a ticket to infra 
for doing our branch renaming updates?


https://gist.github.com/davisp/9de8fa167812f80356d4990e390c9351

There are a few repos where I left comments because I wasn't 100% sure of their
status. For ease of reference, those are:


couchdb-couch-collate - I couldn't easily tell if this was still
used for Windows builds


Nope

couchdb-fauxton-server - This is an empty repo, should we have it 
deleted?


Sure

couchdb-jquery-couch - Should this be archived? Has PouchDB/nano 
replaced it?


If I recall correctly this was part of Futon and 1.x releases?


couchdb-nmo - Should this be archived?


Very old code from 2015 from Robert Kowalski to help set up 
clusters/etc. I don't know anything about it, and it appears 
unmaintained. +1 to archive



couchdb-oauth - I couldn't find this used anywhere, should we archive


I remember using this extensively! 1.x asset. As we no longer officially 
support it (or CouchDB "plugins" in this form), +1 to archive



couchdb-www - Should this be archived or included in the rename?


We already have to use the asf-site branch on this, and the 'master' branch 
already says "you're on the wrong branch." Just have Infra change the 
default branch to asf-site; no need for master -> main here, IMO.




Paul

On Fri, Sep 11, 2020 at 6:28 AM Glynn Bird  
wrote:


+1

Happy to help reconfigure apache/couchdb-nano if necessary after 
the switch to main


On Thu, 10 Sep 2020 at 10:40, Andy Wenk  
wrote:



strong +1

here at sum.cumo we also change the “master” branches to main

Best

Andy -- Andy Wenk Hamburg

GPG fingerprint C32E 275F BCF3 9DF6 4E55  21BD 45D3 5653 77F9 
3D29




On 9. Sep 2020, at 20:09, Joan Touzet  
wrote:


+1. Thanks for starting this, Paul. I was actually going to try and
drive this a month or two ago, but things got busy for me.

I'd also support renaming it to 'trunk' but really don't care what we pick.

The first commercial version control system I used to use, called that
branch "main":

https://i.ibb.co/7bMDt3c/cc-ver-tree2.gif

-Joan "yes, that's motif" Touzet


On 2020-09-09 11:40 a.m., Paul Davis wrote:
Howdy Folks!

Words matter. I've just started a thread on merging all of the
FoundationDB work into mainline development and thought this would be
a good time to bring up a separate discussion on renaming our default
branch.

Personally, I've got a few projects where I used `main` for the
mainline development branch. I find it to be a fairly natural shift
because I tab-complete everything on the command line. I'd be open to
other suggestions but I'm also hoping this doesn't devolve into a
bikeshed on what we end up picking.

For mechanics, what I'm thinking is that when we finish up the last
rebase of the FoundationDB work that instead of actually pushing the
merge/rebase button we just rename the branch and then change the
default branch on GitHub and close the PR.

Thoughts?

Paul





Re: [VOTE] Release Apache CouchDB 3.1.1 (RC2)

2020-09-16 Thread Joan Touzet

On 16/09/2020 09:05, Bessenyei Balázs Donát wrote:

Why should we remove this? I don’t think it is controversial.


Does that mean 3.1.1 will be released with the query parameter
`buffer_response`?


Yes, it is in 3.1.1-RC2.
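
For anyone who wants to poke at it against the RC, a minimal sketch of a request
that opts in, with placeholder credentials, port, and database name (the only
thing taken from this thread is that `buffer_response` is passed as a query
parameter; which endpoints honour it is documented with the #3145/#3147 fix):

# placeholder node, credentials, and database; asks for a fully buffered response on this request
curl 'http://admin:password@127.0.0.1:5984/mydb/_all_docs?buffer_response=true'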


Nit:

3.1.1 will not be cut unless 3 +1 votes, minimum, arrive from committers, with 
no blockers by convention.


According to http://www.apache.org/legal/release-policy.html#release-approval
I thought it has to be 3 PMC members (which by the way I think is
already passed if Nick, Glynn and Joan (RM) are all +1).
Or is there a different process in CouchDB?


Sorry, correct, fumble on my part. Kinda busy. Voting process is all 
detailed here:


https://couchdb.apache.org/bylaws.html

Lazy majority; PMC members have binding votes.

-Joan




Thank you,
Donat


On Tue, 15 Sep 2020 at 20:32, Joan Touzet  wrote:


FYI, Linux packages in test form are available now at

https://repo-nightly.couchdb.org/3.x/

As always, subject to removal, do not use in production, etc.

Windows build will probably land tomorrow, I'm fairly busy today.

-Joan

On 15/09/2020 08:26, Will Holley wrote:

Environment:
RHEL 8
Elixir: 1.9.1
Erlang: 20.3.8.25

CPU Architectures: amd64, ppc64le, s390x

Sig: ok
Checksums: ok
Configure, make & make check: ok
Build release, add admin & start: ok
Used Fauxton to:
- configure cluster: ok
- verify install: ok
- create dbs: ok
- create docs: ok
- create replications between dbs on same cluster: ok
- links to docs: ok

+1

On Tue, 15 Sep 2020 at 05:20, Nick Vatamaniuc  wrote:


Environment:
Ubuntu 18.04.5, x86_64
$ asdf current
  elixir 1.9.4-otp-22
  erlang 22.2.3
   $ cat /etc/apt/sources.list.d/couchdb-bintray.list
  deb https://apache.bintray.com/couchdb-deb bionic main

Sig: ok
Checksums: ok
Configure, make & make check: ok
Build release, add admin & start: ok
Used Fauxton to:
- configure cluster: ok
- verify install: ok
- create dbs: ok
- create docs: ok
- create replications between dbs on same cluster: ok
- links to docs: ok

+1

Thank you for creating the release, Joan!

-Nick

On Mon, Sep 14, 2020 at 10:15 PM Joan Touzet  wrote:


Hi everyone,

There have been no votes on this release. Are people available to try it
out? Tomorrow, I will be able to complete the binary builds and submit
my own vote.

Remember, 3.1.1 will not be cut unless 3 +1 votes, minimum, arrive from
committers, with no blockers by convention. The Mac situation needs to
be investigated. It's not clear to me that there is consensus on
blocking the release on the new request header toggle, but if there is,
I could cut a 3.1.1-RC3 _without_ that change tomorrow if consensus
materialises overnight.

Note that I will be on holiday starting end of the week for 2.5 weeks.
If 3.1.1 doesn't cut from this RC, for whatever reason, the next
opportunity I will have to turn the crank will be 5 October 2020.

-Joan

On 2020-09-11 6:53 p.m., Joan Touzet wrote:

Dear community,

I would like to propose that we release Apache CouchDB 3.1.1.

Changes since the last round:

   https://github.com/apache/couchdb/compare/3.1.1-RC1...3.1.1-RC2

Candidate release notes:

   https://docs.couchdb.org/en/latest/whatsnew/3.1.html

We encourage the whole community to download and test these release
artefacts so that any critical issues can be resolved before the

release

is made. Everyone is free to vote on this release, so dig right in!
(Only PMC members have binding votes, but they depend on community
feedback to gauge if an official release is ready to be made.)

The release artefacts we are voting on are available here:

   https://dist.apache.org/repos/dist/dev/couchdb/source/3.1.1/rc.2/

There, you will find a tarball, a GPG signature, and SHA256/SHA512
checksums.

Please follow the test procedure here:




https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release



Please remember that "RC2" is an annotation. If the vote passes, these
artefacts will be released as Apache CouchDB 3.1.1.

Because of the weekend, this vote will remain open until 5PM ET

(UTC-4),

Tuesday, 15 September 2020.

Please cast your votes now.

Thanks,
Joan "once more unto the breech, dear friends" Touzet






Re: Jenkins "restart" access available to all committers.

2020-09-15 Thread Joan Touzet

It's OK Eric! You weren't bugging me. No apologies necessary.

I mostly just want to not be in your critical path; this is all supposed 
to be self-serve.


-Joan

On 15/09/2020 14:25, Eric Avdey wrote:

Hi Joan,

I didn't realize I was bugging you, sorry about that, I'll make sure to avoid 
it in the future.

Yes, the restart button wasn't available for me right after logging into Jenkins; 
it appeared something like an hour later, after a couple of logouts/logins.
I suspect there might be some permissions propagation happening in the background 
on first login (I hadn't logged in to Jenkins for a while), so if anyone else 
notices this, it might just be a question of waiting a little and trying again 
later.


Regards,
Eric



On Sep 15, 2020, at 15:00, Joan Touzet  wrote:

Hi there,

Recently, I believe Eric Avdey (eiri) said that he wasn't able to restart a 
Jenkins job.

I've checked with Infra and they state that anyone with committer access should 
be able to restart any job, any stage, or replay a job.

https://issues.apache.org/jira/browse/INFRA-20851

No committer should ever have to bug me, personally (or any PMC member) to 
restart a job.

If you still have problems after logging into Jenkins with your ASF ID, please 
reply to this thread.

Thanks,
Joan




Re: [VOTE] Release Apache CouchDB 3.1.1 (RC2)

2020-09-15 Thread Joan Touzet

FYI, Linux packages in test form are available now at

https://repo-nightly.couchdb.org/3.x/

As always, subject to removal, do not use in production, etc.

Windows build will probably land tomorrow, I'm fairly busy today.

-Joan

On 15/09/2020 08:26, Will Holley wrote:

Environment:
   RHEL 8
   Elixir: 1.9.1
   Erlang: 20.3.8.25

CPU Architectures: amd64, ppc64le, s390x

Sig: ok
Checksums: ok
Configure, make & make check: ok
Build release, add admin & start: ok
Used Fauxton to:
   - configure cluster: ok
   - verify install: ok
   - create dbs: ok
   - create docs: ok
   - create replications between dbs on same cluster: ok
   - links to docs: ok

+1

On Tue, 15 Sep 2020 at 05:20, Nick Vatamaniuc  wrote:


Environment:
   Ubuntu 18.04.5, x86_64
   $ asdf current
 elixir 1.9.4-otp-22
 erlang 22.2.3
  $ cat /etc/apt/sources.list.d/couchdb-bintray.list
 deb https://apache.bintray.com/couchdb-deb bionic main

Sig: ok
Checksums: ok
Configure, make & make check: ok
Build release, add admin & start: ok
Used Fauxton to:
   - configure cluster: ok
   - verify install: ok
   - create dbs: ok
   - create docs: ok
   - create replications between dbs on same cluster: ok
   - links to docs: ok

+1

Thank you for creating the release, Joan!

-Nick

On Mon, Sep 14, 2020 at 10:15 PM Joan Touzet  wrote:


Hi everyone,

There have been no votes on this release. Are people available to try it
out? Tomorrow, I will be able to complete the binary builds and submit
my own vote.

Remember, 3.1.1 will not be cut unless 3 +1 votes, minimum, arrive from
committers, with no blockers by convention. The Mac situation needs to
be investigated. It's not clear to me that there is consensus on
blocking the release on the new request header toggle, but if there is,
I could cut a 3.1.1-RC3 _without_ that change tomorrow if consensus
materialises overnight.

Note that I will be on holiday starting end of the week for 2.5 weeks.
If 3.1.1 doesn't cut from this RC, for whatever reason, the next
opportunity I will have to turn the crank will be 5 October 2020.

-Joan

On 2020-09-11 6:53 p.m., Joan Touzet wrote:

Dear community,

I would like to propose that we release Apache CouchDB 3.1.1.

Changes since the last round:

  https://github.com/apache/couchdb/compare/3.1.1-RC1...3.1.1-RC2

Candidate release notes:

  https://docs.couchdb.org/en/latest/whatsnew/3.1.html

We encourage the whole community to download and test these release
artefacts so that any critical issues can be resolved before the

release

is made. Everyone is free to vote on this release, so dig right in!
(Only PMC members have binding votes, but they depend on community
feedback to gauge if an official release is ready to be made.)

The release artefacts we are voting on are available here:

  https://dist.apache.org/repos/dist/dev/couchdb/source/3.1.1/rc.2/

There, you will find a tarball, a GPG signature, and SHA256/SHA512
checksums.

Please follow the test procedure here:




https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release



Please remember that "RC2" is an annotation. If the vote passes, these
artefacts will be released as Apache CouchDB 3.1.1.

Because of the weekend, this vote will remain open until 5PM ET

(UTC-4),

Tuesday, 15 September 2020.

Please cast your votes now.

Thanks,
Joan "once more unto the breech, dear friends" Touzet






Jenkins "restart" access available to all committers.

2020-09-15 Thread Joan Touzet

Hi there,

Recently, I believe Eric Avdey (eiri) said that he wasn't able to 
restart a Jenkins job.


I've checked with Infra and they state that anyone with committer access 
should be able to restart any job, any stage, or replay a job.


https://issues.apache.org/jira/browse/INFRA-20851

No committer should ever have to bug me, personally (or any PMC member) 
to restart a job.


If you still have problems after logging into Jenkins with your ASF ID, 
please reply to this thread.


Thanks,
Joan


Re: [VOTE] Release Apache CouchDB 3.1.1 (RC2)

2020-09-15 Thread Joan Touzet

That's fine by me, would rather not have a DOA release.

A fix is going in to bypass ppc64le builds for now, then hopefully 
Jenkins will have Linux binary builds up for testing. I hope to get to a 
Windows build today, too.


-Joan

On 15/09/2020 11:44, Jan Lehnardt wrote:

Hey all,

I’ll be able to look into the Mac build and binaries on Thursday. I hope nobody 
minds extending the VOTE until then.

Thanks!
Jan
—


On 15. Sep 2020, at 04:15, Joan Touzet  wrote:

Hi everyone,

There have been no votes on this release. Are people available to try it out? 
Tomorrow, I will be able to complete the binary builds and submit my own vote.

Remember, 3.1.1 will not be cut unless 3 +1 votes, minimum, arrive from 
committers, with no blockers by convention. The Mac situation needs to be 
investigated. It's not clear to me that there is consensus on blocking the 
release on the new request header toggle, but if there is, I could cut a 
3.1.1-RC3 _without_ that change tomorrow if consensus materialises overnight.

Note that I will be on holiday starting end of the week for 2.5 weeks. If 3.1.1 
doesn't cut from this RC, for whatever reason, the next opportunity I will have 
to turn the crank will be 5 October 2020.

-Joan

On 2020-09-11 6:53 p.m., Joan Touzet wrote:

Dear community,
I would like to propose that we release Apache CouchDB 3.1.1.
Changes since the last round:
 https://github.com/apache/couchdb/compare/3.1.1-RC1...3.1.1-RC2
Candidate release notes:
 https://docs.couchdb.org/en/latest/whatsnew/3.1.html
We encourage the whole community to download and test these release artefacts 
so that any critical issues can be resolved before the release is made. 
Everyone is free to vote on this release, so dig right in! (Only PMC members 
have binding votes, but they depend on community feedback to gauge if an 
official release is ready to be made.)
The release artefacts we are voting on are available here:
 https://dist.apache.org/repos/dist/dev/couchdb/source/3.1.1/rc.2/
There, you will find a tarball, a GPG signature, and SHA256/SHA512 checksums.
Please follow the test procedure here:
https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release
Please remember that "RC2" is an annotation. If the vote passes, these artefacts will 
be released as Apache CouchDB 3.1.1.
Because of the weekend, this vote will remain open until 5PM ET (UTC-4), 
Tuesday, 15 September 2020.
Please cast your votes now.
Thanks,
Joan "once more unto the breech, dear friends" Touzet




Re: [VOTE] Release Apache CouchDB 3.1.1 (RC2)

2020-09-14 Thread Joan Touzet

Hi everyone,

There have been no votes on this release. Are people available to try it 
out? Tomorrow, I will be able to complete the binary builds and submit 
my own vote.


Remember, 3.1.1 will not be cut unless 3 +1 votes, minimum, arrive from 
committers, with no blockers by convention. The Mac situation needs to 
be investigated. It's not clear to me that there is consensus on 
blocking the release on the new request header toggle, but if there is, 
I could cut a 3.1.1-RC3 _without_ that change tomorrow if consensus 
materialises overnight.


Note that I will be on holiday starting end of the week for 2.5 weeks. 
If 3.1.1 doesn't cut from this RC, for whatever reason, the next 
opportunity I will have to turn the crank will be 5 October 2020.


-Joan

On 2020-09-11 6:53 p.m., Joan Touzet wrote:

Dear community,

I would like to propose that we release Apache CouchDB 3.1.1.

Changes since the last round:

     https://github.com/apache/couchdb/compare/3.1.1-RC1...3.1.1-RC2

Candidate release notes:

     https://docs.couchdb.org/en/latest/whatsnew/3.1.html

We encourage the whole community to download and test these release 
artefacts so that any critical issues can be resolved before the release 
is made. Everyone is free to vote on this release, so dig right in! 
(Only PMC members have binding votes, but they depend on community 
feedback to gauge if an official release is ready to be made.)


The release artefacts we are voting on are available here:

     https://dist.apache.org/repos/dist/dev/couchdb/source/3.1.1/rc.2/

There, you will find a tarball, a GPG signature, and SHA256/SHA512 
checksums.


Please follow the test procedure here:


https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release 



Please remember that "RC2" is an annotation. If the vote passes, these 
artefacts will be released as Apache CouchDB 3.1.1.


Because of the weekend, this vote will remain open until 5PM ET (UTC-4), 
Tuesday, 15 September 2020.


Please cast your votes now.

Thanks,
Joan "once more unto the breech, dear friends" Touzet


Re: Jenkins issues, looking for committer volunteer(s)

2020-09-14 Thread Joan Touzet
Paul informs me that IBM have discontinued all Power platform hosting at 
the level that suits us. He is following up with Adam and others to find 
a solution, but...


This directly endangers our ability to release packages and Docker 
containers on ppc64le, as this platform will not be in the regression 
suite. We've had issues on alternate platforms (such as ARM and ppc64le) 
when not performing active testing.


This is especially troubling since IBM are the primary clients for this 
platform, or rather, their customers are.


I realize this may seem harsh, but I propose to remove ppc64le from the 
packages and the couchdb top-level Docker file by end of 2020, should 
replacement machines not be made available.


Please discuss.

-Joan

On 2020-09-12 5:01 p.m., Joan Touzet wrote:

Hi Devs,

FYI per Jenkins:
 > All nodes of label ‘ppc64le’ are offline

This is one of the reasons causing our Jenkins failures on master.
(The other is our usual heisenbugs in the test suite.)

I really would like it if someone on the PMC (other than me and Paul)
would agree to help keep Jenkins running. It's my weekend and I really
don't have time to stay on top of these things. If you're a committer
we can get you access to the machines fairly readily, and Paul can help
talk you through what's necessary to keep the workers alive.

-Joan "more help is always welcome" Touzet


Re: [ANNOUNCE] Glynn Bird joins the PMC

2020-09-14 Thread Joan Touzet

Congratulations Glynn - and welcome!

-Joan

On 2020-09-14 12:22 p.m., Michelle P wrote:


Dear community,

I am delighted to announce that Glynn Bird joins the Apache CouchDB Project 
Management Committee today.

Glynn has made outstanding, sustained contributions to the project. This 
appointment is an official acknowledgement of their position within the 
community, and our trust in their ability to provide oversight for the project.

Everybody, please join me in congratulating Glynn!

On behalf of the CouchDB PMC,
Michelle



Re: [VOTE] Release Apache CouchDB 3.1.1 (RC2)

2020-09-14 Thread Joan Touzet
Can anyone else repeat this failure or look at it? macOS only or others? 
Don't see this on Linux.


I'm out for the rest of the day; if no one responds, I can look tomorrow.

Glynn you should be able to edit that wiki page, please do if you have 
time :) thanks!


-Joan

On 14/09/2020 06:11, Glynn Bird wrote:

macOS 10.15.6
- checksums ok
- build ok (had to use erlang@22 instead of the default erlang@23 that brew
gave me)
- 3 tests failed repeatedly

chttpd_view_test:75: should_succeed_on_view_with_queries_keys...*failed*
chttpd_view_test:91:
should_succeed_on_view_with_queries_limit_skip...*failed*
chttpd_view_test:108:
should_succeed_on_view_with_multiple_queries...*failed*

I re-ran the tests in isolation and the same three failed again with:

make check apps=chttpd suites=chttpd_view_test

Note that the instructions on how to run individual tests are out of date in
https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release

On Fri, 11 Sep 2020 at 23:53, Joan Touzet  wrote:


Dear community,

I would like to propose that we release Apache CouchDB 3.1.1.

Changes since the last round:

  https://github.com/apache/couchdb/compare/3.1.1-RC1...3.1.1-RC2

Candidate release notes:

  https://docs.couchdb.org/en/latest/whatsnew/3.1.html

We encourage the whole community to download and test these release
artefacts so that any critical issues can be resolved before the release
is made. Everyone is free to vote on this release, so dig right in!
(Only PMC members have binding votes, but they depend on community
feedback to gauge if an official release is ready to be made.)

The release artefacts we are voting on are available here:

  https://dist.apache.org/repos/dist/dev/couchdb/source/3.1.1/rc.2/

There, you will find a tarball, a GPG signature, and SHA256/SHA512
checksums.

Please follow the test procedure here:



https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release

Please remember that "RC2" is an annotation. If the vote passes, these
artefacts will be released as Apache CouchDB 3.1.1.

Because of the weekend, this vote will remain open until 5PM ET (UTC-4),
Tuesday, 15 September 2020.

Please cast your votes now.

Thanks,
Joan "once more unto the breech, dear friends" Touzet





Re: Jenkins issues, looking for committer volunteer(s)

2020-09-14 Thread Joan Touzet

Start here:

https://github.com/apache/couchdb-infra-cm/

Everyone who has access today:

https://github.com/apache/couchdb-infra-cm/blob/main/roles/common/tasks/main.yml#L1-L9

Jenkins setup:

https://github.com/apache/couchdb/blob/master/build-aux/README.Jenkins

(slightly out of date but better than nothing)

-Joan

On 13/09/2020 03:57, Alessio 'Blaster' Biancalana wrote:

Hi Joan,
Do we have some documentation ready to read? I would feel safer not
volunteering for this alone, but I'm glad to help with this kind of situation
:-)

Alessio

On Sat, Sep 12, 2020 at 11:01 PM Joan Touzet  wrote:


Hi Devs,

FYI per Jenkins:
  > All nodes of label ‘ppc64le’ are offline

This is one of the reasons causing our Jenkins failures on master.
(The other is our usual heisenbugs in the test suite.)

I really would like it if someone on the PMC (other than me and Paul)
would agree to help keep Jenkins running. It's my weekend and I really
don't have time to stay on top of these things. If you're a committer
we can get you access to the machines fairly readily, and Paul can help
talk you through what's necessary to keep the workers alive.

-Joan "more help is always welcome" Touzet





Re: Controlling the images used for the builds/releases

2020-09-14 Thread Joan Touzet

On 14/09/2020 11:54, Jarek Potiuk wrote:

Oh yeah. I'm starting to realize now how herculean it is :). No worries, I am
afraid that when you are back, the discussion will be just warming up :).

Speaking of the "double standard" - the main reason really comes from
licensing. When you compile in something that is GPL, your code starts to
be bound by the licence. But when you just bundle it together in a software
package, you are not.

So it is pretty much unavoidable to apply different rules to those
situations. No matter what, we have to make this distinction IMHO. But
let's see what others say on that. I'd love to hear your thoughts on that
before you head out.


Taking CouchDB as an example: shipping *just* the compiled .beam files is 
possible but helps no one, because they require a functional Erlang runtime 
alongside them. In other words, they are not a runnable asset on their own.


I believe you can compile Erlang against 100% non-GPL components, but this 
is not common. How many people don't use GNU libc (glibc) on Linux?


Thus the double standard: allowing access to "binary packages" only for 
those languages where the compiled asset is, on its own, sufficient to 
run the program. This is not even true for e.g. Node.JS or Python, any 
time there are (potentially GNU) libc bindings.



J


On Mon, Sep 14, 2020 at 5:47 PM Joan Touzet  wrote:


Hi Jarek,

I'm about to head out for 3 weeks, so I'm going to miss most of this
discussion. I've done my best to leave comments in your document, but
just picking out one topic in this thread:

On 14/09/2020 02:40, Jarek Potiuk wrote:

Yeah - I see the point and to be honest, that was exactly my original
intention when I wrote the proposal. I modified it slightly to reflect that
- I think now, after preparing the proposal, that the "gist" of it is really
to introduce two kinds of convenience packages - one is the "compiled"
package (which should be far more restricted in what it contains due to
limitations of licences such as GPL) and the other is simply "packaged"
software - where we put independent software or binaries in a single
"convenience" package but it does not have as far-reaching
legal/licence consequences as compiled packages.

The criteria I proposed introduce an interesting concept - the recursive
definition of "official" packages - that was the most "difficult" part
to come up with. But I believe as long as the criteria we come up with can
be recursively applied to any binaries or reference to those binaries up to
the end of the recursive chain of dependencies, and as long as we provide
instructions on how to build those binaries by the "power" users, I believe
it should be perfectly fine to include such binaries in "packaged" software
without explicitly releasing all the sources for them.

So I tried to put it in a way that makes it clear that the original
limitations remain in place for the "compiled" package (effectively I am
not changing any wording in the policy regarding those) but I (hope) make
it clear that other limitations and criteria apply to "packaged" software
using those modern tools like Docker/Helm but also any form of installable
packages (like Windows installers). I've also specifically listed the
"windows installers" as an example package.


I don't like the double standard of "compiled" vs. "packaged" software.
It's hard to understand when to apply which, and creates an un-level
playing field. Not every ASF project can create both, and you're using a
different ruler for each. I realize it was your intent to avoid clouding
the water, and to apply stricter rules to one vs. the other, but I feel
this is just continuing the double-standard I previously mentioned,
albeit in a different form.

Good luck with the effort, and thanks for taking on this herculean task.

-Joan



J.


On Mon, Sep 14, 2020 at 2:57 AM Allen Wittenauer
 wrote:





On Sep 13, 2020, at 2:55 PM, Joan Touzet  wrote:

I think that any release of ASF software must have corresponding sources
that can be used to generate those from. Even if there are some binary
files, those too should be generated from some kind of sources or
"officially released" binaries that come from some sources. I'd love to get
some more concrete examples of where it is not possible.


Sure, this is totally possible. I'm just saying that the amount of
source is extreme in the case where you're talking about a desktop app that
runs in Java or Electron (Chrome as a desktop app), as two examples.


... and mostly impossible when talking about Windows containers.











Re: Controlling the images used for the builds/releases

2020-09-14 Thread Joan Touzet

Hi Jarek,

I'm about to head out for 3 weeks, so I'm going to miss most of this 
discussion. I've done my best to leave comments in your document, but 
just picking out one topic in this thread:


On 14/09/2020 02:40, Jarek Potiuk wrote:

Yeah - I see the point and to be honest, that was exactly my original
intention when I wrote the proposal. I modified it slightly to reflect that
- I think now after preparing the proposal that the "gist" of it is really
to introduce two kinds of convenience packages - one is the "compiled"
package (which should be far more restricted in what it contains due to
limitations of licences such as GPL) and the other is simply "packaged"
software - where we put independent software or binaries in a single
"convenience" package but it does not have as far-reaching
legal/licence consequences as compiled packages.

The criteria I proposed introduce an interesting concept - the recursive
definition of "official" packages - that was the most "difficult" part
to come up with. But I believe as long as the criteria we come up with can
be recursively applied to any binaries or reference to those binaries up to
the end of the recursive chain of dependencies and as long as we provide
instructions on how to build those binaries by the "power" users, I believe
it should be perfectly fine to include such binaries in "packaged" software
without explicitly releasing all the sources for them.

So I tried to put it in a way that makes it clear that the original
limitations remain in place for the "compiled" package (effectively I am
not changing any wording in the policy regarding those) but I (hope) make
it clear that other limitations and criteria apply to "packaged" software
using those modern tools like Docker/Helm but also any form of installable
packages (like Windows installers). I've also specifically listed the
"windows installers" as an example package.


I don't like the double standard of "compiled" vs. "packaged" software. 
It's hard to understand when to apply which, and creates an un-level 
playing field. Not every ASF project can create both, and you're using a 
different ruler for each. I realize it was your intent to avoid clouding 
the water, and to apply stricter rules to one vs. the other, but I feel 
this is just continuing the double-standard I previously mentioned, 
albeit in a different form.


Good luck with the effort, and thanks for taking on this herculean task.

-Joan



J.


On Mon, Sep 14, 2020 at 2:57 AM Allen Wittenauer
 wrote:





On Sep 13, 2020, at 2:55 PM, Joan Touzet  wrote:

I think that any release of ASF software must have corresponding sources
that can be used to generate those from. Even if there are some binary
files, those too should be generated from some kind of sources or
"officially released" binaries that come from some sources. I'd love to get
some more concrete examples of where it is not possible.


Sure, this is totally possible. I'm just saying that the amount of
source is extreme in the case where you're talking about a desktop app that
runs in Java or Electron (Chrome as a desktop app), as two examples.


... and mostly impossible when talking about Windows containers.






Re: Controlling the images used for the builds/releases

2020-09-13 Thread Joan Touzet

On 2020-09-13 5:19 p.m., Jarek Potiuk wrote:

Can you please make an inline comment in the document? the Cwiki allows
inline comments, just select a paragraph and comment it there.  This is the
easiest way to keep it focused in the document. I  am not sure if
understand the Open-Office specific things, i'd love to understand that
though (I used Open-Office for years) :)


Done, and I expanded on this point.


I think that any release of ASF software must have corresponding sources
that can be used to generate those from. Even if there are some binary
files, those too should be generated from some kind of sources or
"officially released" binaries that come from some sources. I'd love to get
some more concrete examples of where it is not possible.


Sure, this is totally possible. I'm just saying that the amount of 
source is extreme in the case where you're talking about a desktop app 
that runs in Java or Electron (Chrome as a desktop app), as two examples.


-Joan


J.

On Sun, Sep 13, 2020 at 11:09 PM Joan Touzet  wrote:


HI Jarek,

Can you comment on one specific thing? In Proposal 1 you still leave the
text "...MUST only add binary/bytecode files". This is not possible for
convenience packages in many situations - for instance OpenOffice or
other languages - where providing a full release of a product requires a
language runtime. It has always bothered me that this text effectively
prevents redistribution of binary assets in the packages that are not
strictly speaking derived from the source code.

As you go far beyond this with the container packaging in Proposal 2, I
believe Proposal 1 needs to be modified to match. In my opinion a
suitable replacement would be something like:

"In all such cases...version number as the source release, as MUST
include only the binary/bytecode files that are necessary, via the
compiling and packaging of that source code release and its
dependencies, to produce a functional deliverable. All instructions..."

-Joan

On 2020-09-13 4:40 p.m., Jarek Potiuk wrote:

Just for your information - after a discussion in the ComDev mailing list,
I created a proposal for the Apache Software Foundation to introduce changes to
the "ASF release policies", to make it clear and straightforward to release
"convenience packages" in the form of "software packaging" (such as Helm
Charts and Container Images) rather than "compiled packages" as recognised
so far by the ASF policies.

The proposal is here:
https://cwiki.apache.org/confluence/display/COMDEV/Updates+of+policies+for+the+convenience+packages

The discussion in the ComDev ASF mailing list is here:
https://lists.apache.org/thread.html/r49c3ef0a8423664c564c0c2719056662021f03b5678ef5b249892c10%40%3Cdev.community.apache.org%3E

We are going to discuss it and propose to the ASF board to vote on the
changes.

I look forward to all comments and I hope it can pave the way for the ASF
to provide a coherent approach for releasing Container Images, Helm Charts
for all ASF projects.

On Mon, Aug 31, 2020 at 9:23 PM Jarek Potiuk 
wrote:


Just to revive this thread and let you know what we've done in Airflow.

We merged changes to our repository that allow our users to rebuild all
images if they need to - using official sources. It's not very involved and
not a lot of code to maintain:
https://github.com/apache/airflow/pull/9650/
Next time when we release Airflow Sources including the Helm Chart, any of
our users will be able to rebuild all the images used in charts from the
ASF-released source package.

The whole discussion ended up to be not about the Licence, but about the
content of the official ASF source package release.

I personally think this is the only way to fulfill this chapter from ASF
release policy:
http://www.apache.org/legal/release-policy.html#what-must-every-release-contain

Every ASF release must contain a source package, which must be sufficient
for a user to build and test the release provided they have access to the
appropriate platform and tools.



I would love to hear other thoughts about it.

J.




On Tue, Jun 23, 2020 at 11:42 PM Roman Shaposhnik  wrote:

On Tue, Jun 23, 2020 at 2:26 AM Jarek Potiuk  wrote:


My understanding the bigger problem is the license of the dependency (and
their dependencies) rather than the official/unofficial status.  For Apache
Yetus' test-patch functionality, we defaulted all of our plugins to off
because we couldn't depend upon GPL'd binaries being available or giving
the impression that they were required.  By doing so, it put the onus on
the user to specifically enable features that depends upon GPL'd
functionality.  It also pretty much nukes any idea of being user friendly.
:(


Indeed - Licensing is important, especially for source code redistribution.

We used to have some GPL-install-on-your-own-if-you-want in the past but
those dependencies are gone already.

Re: Controlling the images used for the builds/releases

2020-09-13 Thread Joan Touzet

HI Jarek,

Can you comment on one specific thing? In Proposal 1 you still leave the 
text "...MUST only add binary/bytecode files". This is not possible for 
convenience packages in many situations - for instance OpenOffice or 
other languages - where providing a full release of a product requires a 
language runtime. It has always bothered me that this text effectively 
prevents redistribution of binary assets in the packages that are not 
strictly speaking derived from the source code.


As you go far beyond this with the container packaging in Proposal 2, I 
believe Proposal 1 needs to be modified to match. In my opinion a 
suitable replacement would be something like:


"In all such cases...version number as the source release, as MUST 
include only the binary/bytecode files that are necessary, via the 
compiling and packaging of that source code release and its 
dependencies, to produce a functional deliverable. All instructions..."


-Joan

On 2020-09-13 4:40 p.m., Jarek Potiuk wrote:

Just for your information - after a discussion in the ComDev mailing list.
I created a proposal for Apache Software Foundation to introduce changes to
the "ASF release policies", to make it clear and straightforward to release
"convenience packages" in the form of "software packaging" (such as Helm
Charts and Container Images) rather than "compiled packages" as recognised
so far by the ASF policies.

The proposal is here:
https://cwiki.apache.org/confluence/display/COMDEV/Updates+of+policies+for+the+convenience+packages

The discussion in the ComDev ASF mailing list is here:
https://lists.apache.org/thread.html/r49c3ef0a8423664c564c0c2719056662021f03b5678ef5b249892c10%40%3Cdev.community.apache.org%3E

We are going to discuss it and propose to the ASF board to vote on the
changes.

I look forward to all comments and I hope it can pave the way for the ASF
to provide a coherent approach for releasing Container Images, Helm Charts
for all ASF projects.

On Mon, Aug 31, 2020 at 9:23 PM Jarek Potiuk 
wrote:


Just to revive this thread and let you know what we've done in Airflow.

We merged changes to our repository that allow our users to rebuild all
images if they need to -using official sources. It's not very involved and
not a lot of code to maintain:
https://github.com/apache/airflow/pull/9650/
Next time when we release Airflow Sources including the Helm Chart, any of
our users will be able to rebuild all the images used in charts from the
ASF-released source package.

The whole discussion ended up to be not about the Licence, but about the
content of the official ASF source package release.

I personally think this is the only way to fulfill this chapter from ASF
release policy:
http://www.apache.org/legal/release-policy.html#what-must-every-release-contain

Every ASF release must contain a source package, which must be sufficient

for a user to build and test the release provided they have access to the
appropriate platform and tools.



I would love to hear other thoughts about it.

J.




On Tue, Jun 23, 2020 at 11:42 PM Roman Shaposhnik 
wrote:


On Tue, Jun 23, 2020 at 2:26 AM Jarek Potiuk 
wrote:




My understanding the bigger problem is the license of the dependency (and
their dependencies) rather than the official/unofficial status.  For Apache
Yetus' test-patch functionality, we defaulted all of our plugins to off
because we couldn't depend upon GPL'd binaries being available or giving
the impression that they were required.  By doing so, it put the onus on
the user to specifically enable features that depends upon GPL'd
functionality.  It also pretty much nukes any idea of being user friendly.
:(


Indeed - Licensing is important, especially for source code redistribution.

We used to have some GPL-install-on-your-own-if-you-want in the past but
those dependencies are gone already.





2) If it's not - how do we determine which images are "officially
maintained"?


Keep in mind that Docker themselves brand their images as 'official'
when they actually come from Docker instead of the organizations
that own that particular piece of software.  It just adds to the
complexity.


Not really. We actually plan to make our own Apache Airflow Docker
image an official one. Docker has very clear guidelines on how to make
images "official" (https://docs.docker.com/docker-hub/official_images/)
and there is quite a long list of those:
https://github.com/docker-library/official-images/tree/master/library -
most of them maintained by the "authors" of the image. Docker has a
dedicated team that reviews and checks those images, and they encourage
that the "authors" maintain them. Quote from Docker's docs: "While it is
preferable to have upstream software authors maintaining their
corresponding Official Images, this is not a strict requirement."


3) If yes - how do we put the boundary - when is an image acceptable?
Are there any criteria we can use or constraints we can put on the

Jenkins issues, looking for committer volunteer(s)

2020-09-12 Thread Joan Touzet

Hi Devs,

FYI per Jenkins:
> All nodes of label ‘ppc64le’ are offline

This is one of the reasons causing our Jenkins failures on master.
(The other is our usual heisenbugs in the test suite.)

I really would like it if someone on the PMC (other than me and Paul)
would agree to help keep Jenkins running. It's my weekend and I really
don't have time to stay on top of these things. If you're a committer
we can get you access to the machines fairly readily, and Paul can help
talk you through what's necessary to keep the workers alive.

-Joan "more help is always welcome" Touzet


[VOTE] Release Apache CouchDB 3.1.1 (RC2)

2020-09-11 Thread Joan Touzet

Dear community,

I would like to propose that we release Apache CouchDB 3.1.1.

Changes since the last round:

https://github.com/apache/couchdb/compare/3.1.1-RC1...3.1.1-RC2

Candidate release notes:

https://docs.couchdb.org/en/latest/whatsnew/3.1.html

We encourage the whole community to download and test these release 
artefacts so that any critical issues can be resolved before the release 
is made. Everyone is free to vote on this release, so dig right in! 
(Only PMC members have binding votes, but they depend on community 
feedback to gauge if an official release is ready to be made.)


The release artefacts we are voting on are available here:

https://dist.apache.org/repos/dist/dev/couchdb/source/3.1.1/rc.2/

There, you will find a tarball, a GPG signature, and SHA256/SHA512 
checksums.
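
A minimal sketch of that verification step, assuming the artefacts follow the
usual apache-couchdb-<version> naming, that the checksum file is in
`sha256sum -c` format, and that the project KEYS file lives at the usual
downloads.apache.org location (all assumptions, so adjust to what you actually
find in the rc.2 directory):

DIST=https://dist.apache.org/repos/dist/dev/couchdb/source/3.1.1/rc.2
curl -O $DIST/apache-couchdb-3.1.1.tar.gz           # assumed filename
curl -O $DIST/apache-couchdb-3.1.1.tar.gz.asc       # detached GPG signature
curl -O $DIST/apache-couchdb-3.1.1.tar.gz.sha256
curl https://downloads.apache.org/couchdb/KEYS | gpg --import   # assumed KEYS location
gpg --verify apache-couchdb-3.1.1.tar.gz.asc apache-couchdb-3.1.1.tar.gz
sha256sum -c apache-couchdb-3.1.1.tar.gz.sha256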


Please follow the test procedure here:


https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release
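
The wiki page above is the authoritative procedure; as a rough sketch, the
"Configure, make & make check" and "Build release, add admin & start" steps
reported in this thread look something like the following, assuming the
Erlang/Elixir toolchain is already installed and using a placeholder admin
password:

tar xzf apache-couchdb-3.1.1.tar.gz && cd apache-couchdb-3.1.1
./configure                     # may need extra options depending on your platform
make check                      # builds, then runs the test suite
make release                    # assembles a runnable tree, typically under ./rel/couchdb
printf '[admins]\nadmin = hunter2\n' >> rel/couchdb/etc/local.ini   # placeholder admin credentials
./rel/couchdb/bin/couchdb       # start the node, then exercise it via Fauxton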

Please remember that "RC2" is an annotation. If the vote passes, these 
artefacts will be released as Apache CouchDB 3.1.1.


Because of the weekend, this vote will remain open until 5PM ET (UTC-4), 
Tuesday, 15 September 2020.


Please cast your votes now.

Thanks,
Joan "once more unto the breech, dear friends" Touzet


Re: [VOTE] Release Apache CouchDB 3.1.1

2020-09-10 Thread Joan Touzet
All - this vote is CANCELLED due to a late-breaking bug found by Paul 
and fixed by Robert:


  Fix buffer_response=true (#3145) #3147

I will cut a new RC tomorrow.

On 2020-09-10 1:25 p.m., Joan Touzet wrote:

Dear community,

I would like to propose that we release Apache CouchDB 3.1.1.

Candidate release notes:

     https://docs.couchdb.org/en/latest/whatsnew/3.1.html

We encourage the whole community to download and test these release 
artefacts so that any critical issues can be resolved before the release 
is made. Everyone is free to vote on this release, so dig right in! 
(Only PMC members have binding votes, but they depend on community 
feedback to gauge if an official release is ready to be made.)


The release artefacts we are voting on are available here:

     https://dist.apache.org/repos/dist/dev/couchdb/source/3.1.1/rc.1/

There, you will find a tarball, a GPG signature, and SHA256/SHA512 
checksums.


Please follow the test procedure here:


https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release 



Please remember that "RC1" is an annotation. If the vote passes, these 
artefacts will be released as Apache CouchDB 3.1.1.


Please cast your votes now.

Thanks,


Re: Is it time to merge prototype/fdb-layer to master?

2020-09-10 Thread Joan Touzet
Cool. I'm certain we're overlooking something but I'm too tired to think 
of it today.


FYI once the copy is done you can tell Infra to change the default 
branch for each repo on those and they will do so quickly, with no fuss.


-Joan

On 2020-09-10 4:13 p.m., Paul Davis wrote:

I should have noted, for each of the `apache/couchdb-$repo`
repositories my plan is to do a straight up copy of master -> main
with zero other changes. Once that's done we'll need to update
rebar.config.script but that should be all we need there.


On Thu, Sep 10, 2020 at 3:11 PM Paul Davis  wrote:


So I've gotten `make check` passing against a merge of master into the
`prototype/fdb-layer` branch. I ended up finding a flaky test and a
bug in a recent commit to master. I've just merged a fix for the flaky
test and Bob is working on a patch for the buffered_response feature.

Once those are both merged I'll re-run the merge and name that branch `main`.

Once that happens we'll need to work through a to-do list. Things I
know that are on that list:

1. File infra ticket to have them change our GitHub setting for the
default branch to `main`.
2. Copy branch protection rules from `master` to `main`
3. Steps 1 and 2 for all our `apache/couchdb-$repo` repositories
4. Update Jenkins config
5. Figure out FreeBSD builder situation
6. Probably other stuff
7. Eventually rename current `master` to something else so as to avoid confusion

Assuming no one objects beforehand, I'll start the ball rolling with
Infra on Monday.

Paul

On Wed, Sep 9, 2020 at 1:11 PM Joan Touzet  wrote:


Have been asking for it for a while ;) obviously +1.

Be aware that Jenkinsfile.full post-merge will probably fail because, at
the very least, the FreeBSD hosts won't have fdb and can't run docker to
containerise it. This will need some exploration to resolve but
shouldn't be a blocker.

The Jenkins setup will also need slight changes when we rename branches.
Also keep in mind other repos need the branch renaming, too. ASF Infra
can do the GitHub dance to change the name of the main branch.

-Joan "about time" Touzet

On 2020-09-09 2:05 p.m., Robert Samuel Newson wrote:

Agree that it's time to get the fdb-layer work into master; that's where CouchDB 
4.0 should be created.

thanks for preserving the imported ebtree history.


On 9 Sep 2020, at 17:28, Paul Davis  wrote:

The merge on this turned out to be a lot more straightforward, so I
think it's probably the way to go. I've got a failing test in
couch_views_active_tasks_test but it appears to be flaky rather than a
merge error. I'll work through getting `make check` to complete and
then send another update.

https://github.com/apache/couchdb/tree/prototype/fdb-layer-final-merge
https://github.com/apache/couchdb/commit/873ccb4882f2e984c25f59ad0fd0a0677b9d4477

On Wed, Sep 9, 2020 at 10:29 AM Paul Davis  wrote:


Howdy folks!

I've just gone through a rebase of `prototype/fdb-layer` against
master. It's not quite finished because the ebtree import went wrong
during rebase due to a weirdness of the history.

I have a PR up for the rebase into master for people to look at [1],
although the more important comparison is likely with the current
`prototype/fdb-layer`, which can be found at [2].

Given the ebtree aspect, as well as the fact that I get labeled as the
committer for all commits when doing a rebase, I'm also wondering if we
shouldn't turn this into a merge in this instance. I'll work up a
second branch that shows that diff as well, which we could then rebase
onto master.

Regardless, I'd appreciate it if we could get some eyeballs on the diff
and then finally merge this work to the default branch so it's the
mainline development going forward.

Paul

[1] https://github.com/apache/couchdb/pull/3137
[2] 
https://github.com/apache/couchdb/compare/prototype/fdb-layer...prototype/fdb-layer-final-rebase




[VOTE] Release Apache CouchDB 3.1.1

2020-09-10 Thread Joan Touzet

Dear community,

I would like to propose that we release Apache CouchDB 3.1.1.

Candidate release notes:

https://docs.couchdb.org/en/latest/whatsnew/3.1.html

We encourage the whole community to download and test these release 
artefacts so that any critical issues can be resolved before the release 
is made. Everyone is free to vote on this release, so dig right in! 
(Only PMC members have binding votes, but they depend on community 
feedback to gauge if an official release is ready to be made.)


The release artefacts we are voting on are available here:

https://dist.apache.org/repos/dist/dev/couchdb/source/3.1.1/rc.1/

There, you will find a tarball, a GPG signature, and SHA256/SHA512 
checksums.


Please follow the test procedure here:


https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release

Please remember that "RC1" is an annotation. If the vote passes, these 
artefacts will be released as Apache CouchDB 3.1.1.


Please cast your votes now.

Thanks,


Re: Is it time to merge prototype/fdb-layer to master?

2020-09-09 Thread Joan Touzet

Have been asking for it for a while ;) obviously +1.

Be aware that Jenkinsfile.full post-merge will probably fail because, at 
the very least, the FreeBSD hosts won't have fdb and can't run docker to 
containerise it. This will need some exploration to resolve but 
shouldn't be a blocker.


The Jenkins setup will also need slight changes when we rename branches. 
Also keep in mind other repos need the branch renaming, too. ASF Infra 
can do the GitHub dance to change the name of the main branch.


-Joan "about time" Touzet

On 2020-09-09 2:05 p.m., Robert Samuel Newson wrote:

Agree that it's time to get the fdb-layer work into master; that's where CouchDB 
4.0 should be created.

thanks for preserving the imported ebtree history.


On 9 Sep 2020, at 17:28, Paul Davis  wrote:

The merge on this turned out to be a lot more straightforward, so I
think it's probably the way to go. I've got a failing test in
couch_views_active_tasks_test but it appears to be flaky rather than a
merge error. I'll work through getting `make check` to complete and
then send another update.

https://github.com/apache/couchdb/tree/prototype/fdb-layer-final-merge
https://github.com/apache/couchdb/commit/873ccb4882f2e984c25f59ad0fd0a0677b9d4477

On Wed, Sep 9, 2020 at 10:29 AM Paul Davis  wrote:


Howdy folks!

I've just gone through a rebase of `prototype/fdb-layer` against
master. It's not quite finished because the ebtree import went wrong
during rebase due to a weirdness of the history.

I have a PR up for the rebase into master for people to look at [1],
although the more important comparison is likely with the current
`prototype/fdb-layer`, which can be found at [2].

Given the ebtree aspect, as well as the fact that I get labeled as the
committer for all commits when doing a rebase, I'm also wondering if we
shouldn't turn this into a merge in this instance. I'll work up a
second branch that shows that diff as well, which we could then rebase
onto master.

Regardless, I'd appreciate it if we could get some eyeballs on the diff
and then finally merge this work to the default branch so it's the
mainline development going forward.

Paul

[1] https://github.com/apache/couchdb/pull/3137
[2] 
https://github.com/apache/couchdb/compare/prototype/fdb-layer...prototype/fdb-layer-final-rebase




Re: [DISCUSS] Rename default branch to `main`

2020-09-09 Thread Joan Touzet
+1. Thanks for starting this, Paul. I was actually going to try and 
drive this a month or two ago, but things got busy for me.


I'd also support renaming it to 'trunk' but really don't care what we pick.

The first commercial version control system I used to use, called that 
branch "main":


  https://i.ibb.co/7bMDt3c/cc-ver-tree2.gif

-Joan "yes, that's motif" Touzet


On 2020-09-09 11:40 a.m., Paul Davis wrote:

Howdy Folks!

Words matter. I've just started a thread on merging all of the
FoundationDB work into mainline development and thought this would be
a good time to bring up a separate discussion on renaming our default
branch.

Personally, I've got a few projects where I used `main` for the
mainline development branch. I find it to be a fairly natural shift
because I tab-complete everything on the command line. I'd be open to
other suggestions but I'm also hoping this doesn't devolve into a
bikeshed on what we end up picking.

For mechanics, what I'm thinking is that when we finish up the last
rebase of the FoundationDB work that instead of actually pushing the
merge/rebase button we just rename the branch and then change the
default branch on GitHub and close the PR.
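
In plain git terms, the rename Paul describes is roughly the following sketch,
using `prototype/fdb-layer-final-rebase` from the related merge thread as a
stand-in for whatever the finished branch ends up being called; the
default-branch flip itself happens in GitHub's settings via Infra, not on the
command line:

git checkout prototype/fdb-layer-final-rebase   # stand-in for the finished FDB branch
git branch -m main                              # rename the local branch to `main`
git push origin main                            # publish it
# then: Infra flips the repo's default branch to `main`, and the PR is closed instead of merged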

Thoughts?

Paul



Fwd: [Jenkins] FAILURE: CouchDB » Full Platform Builds » 3.x #49

2020-09-04 Thread Joan Touzet
Can someone have a look at this persistent failure? I've restarted the 
agents on the FreeBSD box to no avail, the test still fails.


-Joan


 Forwarded Message 
Subject: [Jenkins] FAILURE: CouchDB » Full Platform Builds » 3.x #49
Date: Fri, 4 Sep 2020 18:59:49 + (UTC)
From: Apache Jenkins Server 
Reply-To: dev@couchdb.apache.org, notificati...@couchdb.apache.org
To: notificati...@couchdb.apache.org

Boo, we failed. 
https://ci-couchdb.apache.org/job/jenkins-cm1/job/FullPlatformMatrix/job/3.x/49/display/redirect


Re: Keeping our CI images live

2020-09-02 Thread Joan Touzet

Thanks, that's interesting.

I also realized I can just put the new shell script into our Jenkins and 
have that run it once a month. That's probably the lowest level of effort.


-Joan

On 01/09/2020 16:42, Alessio 'Blaster' Biancalana wrote:

Hi Joan,
Thanks for the effort in exploring these possibilities. Along with those,
we could have another choice:

https://github.blog/2020-09-01-introducing-github-container-registry/

Alessio

On Mon, 31 Aug 2020 at 20:41, Joan Touzet  wrote:


On 31/08/2020 14:36, Joan Touzet wrote:

I'm also planning on filing a ticket with ASF Infra to ask if there is
an alternative, such as moving these images under the apache org
namespace at Docker Hub. (Previously they informed us we could have a
single image there only, apache/couchdb, with as many tags as we wanted.
Bah humbug.)


That ticket is here: https://issues.apache.org/jira/browse/INFRA-20795

-Joan





Re: [DISCUSS] ldap_auth donation

2020-09-01 Thread Joan Touzet

I remember this code!

Sure, let's get it out there, *as long as* someone is going to maintain 
it going forward.


I had been hoping to see more rallying behind how to use the JWT 
integration for OAuth and SAML workflows, but no one's done any 
walkthroughs / blogposts that I've seen. Putting those things in front 
of LDAP may be easier (but would be outside the scope of CouchDB...)


While we're on the topic of donation, any chance of weatherreport 
getting donated? You had brought this up a few months ago, Jay, and I'd 
love to see that in 3.2 as well.


-Joan



On 2020-09-01 3:02 p.m., Jay Doane wrote:

Greetings,

In 2015 IBM Cloudant developed an LDAP based authentication handler for
its CouchDB 2.x-based Cloudant Local offering. Since then, it has been used
in production on several large Cloudant Local deployments, accruing many
bug fixes and enhancements in the process.

Over the years, there has clearly been interest in using LDAP with CouchDB
[1], so it seems like the ldap_auth functionality might be something the
greater community could benefit from. If there are no objections from the
community or PMC, I'd be happy to open a PR in the hopes of getting it
included in CouchDB 3.2. To give an idea of what it's all about, I've
included the README.md contents below.

Thanks,
Jay

[1] https://couchdb.markmail.org/search/?q=LDAP

README.md:

# Delegating Basic and Cookie authentication and authorization to LDAP

CouchDB includes a built-in security model, with self-contained
authentication and authorization which rely on .ini files and/or user
databases to store user data, including usernames (uids), password
hashes, and user roles for accessing database resources.

For an organization which already uses an LDAP service for access
control, it may make more sense to delegate authentication and
authorization services to LDAP when accessing CouchDB. This is where
`ldap_auth` comes in.

At a high level, when `ldap_auth` is configured, it completely
replaces the default Basic and Cookie authentication and
authorization. It ignores .ini files and user databases altogether,
and attempts to validate user credentials using the configured LDAP
service. Once credentials have been authenticated by the LDAP
service, `ldap_auth` determines the user's roles (which in turn
determine what the user is authorized to access) based on the LDAP
groups of which the user is a member.

## ldap_interface

This low-level interface module uses
[eldap](http://www.erlang.org/doc/man/eldap.html) to connect to,
authenticate, and search LDAP server(s) which are configured to model
user accounts and their associated CouchDB roles.

### Configuration

All configuration is done via `.ini` file, mostly in the `[ldap_auth]`
section. The following parameters can be modified, and have the
associated defaults in parentheses:

- `servers (127.0.0.1)` one or more LDAP servers; defaults to a single
host, but could be a comma separated list (note that all servers must
use the same port, a limitation of the underlying eldap library)

- `port (389)` LDAP server port for un-encrypted communication

- `ssl_port (636)` LDAP server port for encrypted communication

- `use_ssl (true)` if `true`, use TLS to encrypt traffic to LDAP
servers

- `timeout (5000)` milliseconds to wait for a response from an LDAP
server before throwing an error

- `user_base_dn (ou=users,dc=example,dc=com)` defines a directory
location to start searching for users

- `user_classes (person)` defines which `objectClass`es indicate
a particular entry as a user during search

- `user_uid_attribute (uid)` defines which attribute maps to
username

- `group_base_dn (ou=groups,dc=example,dc=com)` defines directory
location to start searching for groups

- `group_classes (posixGroup)` defines which `objectClass`es indicate
a particular entry as a group during search

- `group_member_attribute (memberUid)` defines which group attribute
maps to user Uid

- `group_role_attribute (description)` defines which group attribute
maps to a particular role

- `searcher_dn (uid=ldapsearch,ou=users,dc=example,dc=com)` defines
the DN to use when searching for users and groups

- `searcher_password (secret)` defines the password for the
`searcher_dn` above

- `user_bind_dns ([])` defines one or more base DNs into which the
authenticating user's username can be inserted as the
`user_uid_attribute`, and used to bind directly. See the "Efficiency
Considerations" section below for details

**Please note** that at least one of the above parameters must be set
inside the `[ldap_auth]` section of a .ini configuration file in order
for ldap_auth to be considered "configured", and to function as an
authentication/authorization handler. Failure to explicitly set at
least one `[ldap_auth]` parameter will result in the system using the
default basic and cookie authentication/authorization handlers instead.
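
As a minimal sketch (the local.ini path assumes a default package install,
and all values are illustrative rather than from a real deployment), based
on the parameters and defaults listed above:

```
cat >> /opt/couchdb/etc/local.ini <<'EOF'
[ldap_auth]
servers = ldap1.example.com,ldap2.example.com
use_ssl = true
user_base_dn = ou=users,dc=example,dc=com
group_base_dn = ou=groups,dc=example,dc=com
searcher_dn = uid=ldapsearch,ou=users,dc=example,dc=com
searcher_password = secret
EOF
```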

### Efficiency Considerations

   `ldap_auth` effectively has 2 modes of operation, depending on

Re: [DISCUSS] Creating new deleted documents in CouchDB 4

2020-09-01 Thread Joan Touzet

Same - keep for now, choose to deprecate later.

Remember by our semver policy this would mean the earliest this endpoint 
could be removed would be CouchDB 5.0 (!)


-Joan "time keeps on slippin', slippin'..." Touzet

On 2020-09-01 4:35 p.m., Jonathan Hall wrote:

Thanks for the explanation.

I concur, I prefer compatibility, but as I'm not coding it, I'll defer the 
decision to others.

Jonathan


On Sep 1, 2020, 10:30 PM, at 10:30 PM, Paul Davis  
wrote:

Replication of deletions isn't affected due to the new_edits=false
flag like you guessed. This is purely "interactively creating a new
document that is deleted". Its a fairly minor edge case in that the
document must not exist. Any other attempt to "revive" a deleted doc
into a deleted state will fail with a conflict on 3.x.

I'm +0 for compatibility. Its not a significant amount of work to
implement the behavior and we can always deprecate it in the future to
remove the weird edge case of "document that has never existed can be
created in a deleted state".

On Tue, Sep 1, 2020 at 3:26 PM Jonathan Hall  wrote:


Isn't compatibility required to support replication of deleted

documents? Or does creation of a deleted document work with
new_edits=false?




On Sep 1, 2020, 10:16 PM, at 10:16 PM, Nick Vatamaniuc

 wrote:

Hi everyone,

While running PouchDB replication unit tests against the CouchDB 4
replicator PR branch (thanks to Garren Smith, who helped set up the
tests), we had noticed a doc update API incompatibility between
CouchDB 3.x/PouchDB and the prototype/fdb-layer branch: CouchDB
3.x/PouchDB allow creating new deleted documents and
prototype/fdb-layer branch doesn't.

For example:

$ http put $DB1/mydb/doc1 _deleted:='true' a=b
HTTP/1.1 200 OK

{
"id": "doc1",
"ok": true,
"rev": "1-ad7eb689fcae75e7a7edb57dc1f30939"
}

$ http $DB1/mydb/doc1?deleted=true
HTTP/1.1 200 OK

{
"_deleted": true,
"_id": "doc1",
"_rev": "1-ad7eb689fcae75e7a7edb57dc1f30939",
"a": "b"
}

On prototype/fdb-layer it returns a 409 conflict error

I opened a PR to make the prototype/fdb-layer branch behave the same
and keep the API compatibility, but also wanted to see what the
community thinks.

https://github.com/apache/couchdb/pull/3123

Would we want to keep compatibility with CouchDB 3.x/PouchDB or,
return a conflict (409), like the prototype/fdb-layer branch does?

My vote is for compatibility.

Thanks,
-Nick




Re: Keeping our CI images live

2020-08-31 Thread Joan Touzet

On 31/08/2020 14:36, Joan Touzet wrote:
I'm also planning on filing a ticket with ASF Infra to ask if there is 
an alternative, such as moving these images under the apache org 
namespace at Docker Hub. (Previously they informed us we could have a 
single image there only, apache/couchdb, with as many tags as we wanted. 
Bah humbug.)


That ticket is here: https://issues.apache.org/jira/browse/INFRA-20795

-Joan


Keeping our CI images live

2020-08-31 Thread Joan Touzet

(Apologies if this is a double-post.)

Some of you who work with Docker may have received an email from them 
recently, indicating they will be removing container images that have 
not been used within the past 6 months: 
https://www.docker.com/pricing/resource-consumption-updates


This isn't an issue for our apache/couchdb and couchdb images, which are 
widely in use and popular. (If people stop using 1.x, it might 
disappear, but I very much doubt that.)


However, it might affect our CI workflow for older builds. These images 
are only pulled when Jenkins needs them. And while we have the 
Dockerfiles and build instructions for these older platforms, decay of 
URLs and repositories means it may be increasingly difficult to stand up 
old environments - on the off chance someone wants to patch something in 
e.g. CouchDB 2.1.1, or try to build CouchDB 3.1.0 for Ubuntu 12.04 (good 
luck!)


To solve this I've filed this PR which basically pulls every image in 
our couchdbdev org. I'm going to run this on a cronjob once a month for 
myself:


https://github.com/apache/couchdb-docker/pull/189
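
A rough sketch of what that job amounts to (the image names below are
placeholders; the PR contains the real list for the couchdbdev org):

```
# crontab entry: 0 4 1 * * /usr/local/bin/pull-ci-images.sh
for img in couchdbdev/debian-buster-erlang-all \
           couchdbdev/centos-7-erlang-all; do
  docker pull "$img"
done
```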

I'm also planning on filing a ticket with ASF Infra to ask if there is 
an alternative, such as moving these images under the apache org 
namespace at Docker Hub. (Previously they informed us we could have a 
single image there only, apache/couchdb, with as many tags as we wanted. 
Bah humbug.)


Another option is to run our own registry, but that seems like an awful 
lot of work, and my goal has always been to reduce the amount of 
infrastructure we have to maintain ourselves.


I'll let you know what comes of the ASF Infra ticket.

-Joan "improving our bus factor" Touzet



How to act on this mailing list [was: Re: Preparing 3.1.1 release]

2020-08-27 Thread Joan Touzet
This email stuck with me overnight, and I want to address why. ermouth, 
your attitude in this email was poor, and I'd like to give you the 
opportunity to revise it.


On 2020-08-26 6:45 p.m., ermouth wrote:

The blog is controlled by the CouchDB PMC. No one outside of the PMC or

who they authorize has access to it.

This is about wordpress server where the blog lives.


Why didn't you bring this up sooner? Why wait until now? This doesn't 
give anyone the chance to address your concerns, and furthermore, comes 
across as arguing in bad faith.



The server is
maintained so impressively,


Actually, it is. It's hosted at wordpress.org. I would expect them to do 
the absolute best job of hosting WordPress, wouldn't you?



that shows default wordpress favicon for years


Because it's run at wordpress.com. So what? I don't actually know if we 
can customize the favicon there, but honestly, given they provide the 
service to us for free, I have zero objections to them using the favicon 
as a teeny tiny bit of advertising for another open source project.


How is the presence or absence of a favicon any indication of whether or 
not the server is being managed well? This is arguing in bad faith.



and responds with x-hacker header, promoting jobs aggregator.


For the company that provides us with free blog hosting.

The same thing is over at docs.couchdb.org for readthedocs.org, and no 
one's ever complained about that - arguably, that site gets more clicks 
than the blog does.



It implies an
obvious question about how reliable is the server in terms of injections
and logs protection.


Now that you know the above, do you still want to make this argument?


Also the blog pings gravatar, not good.


For its own content, yes. And I get that you don't want to leak the IP 
address of standalone CouchDBs - that is a valid concern, to which two 
options have been proposed. The absolute best way you could *HELP* 
address this is to code a fix.



If you don't want to display it, don't click on it, and the iframe won't

This is not how things are protected, and I know that you know about it.


This isn't how you treat people who run the community you claim to 
participate in. Nor is this the first time you've acted this way towards 
*volunteer developers*.


Kindly choose your words more carefully, and think ahead about how to 
make a meaningful contribution here. Complaining endlessly is not 
earning you any merit, and the tone you've taken actually does you a 
disservice. If you push this attitude any farther, you're liable to end 
up in people's killfiles / junk mail folders...or worse.


-Joan "PMC hat on" Touzet


Re: Preparing 3.1.1 release

2020-08-26 Thread Joan Touzet
A PR to disable the tab via an ini file setting would absolutely be 
merged. Why not work on one?


On 2020-08-26 6:45 p.m., ermouth wrote:

The blog is controlled by the CouchDB PMC. No one outside of the PMC or

who they authorize has access to it.

This is about wordpress server where the blog lives. The server is
maintained so impressively, that shows default wordpress favicon for years
and responds with x-hacker header, promoting jobs aggregator. It implies an
obvious question about how reliable is the server in terms of injections
and logs protection.

Also the blog pings gravatar, not good.


If you don't want to display it, don't click on it, and the iframe won't


This is not how things are protected, and I know that you know about it.

ermouth


чт, 27 авг. 2020 г. в 00:55, Joan Touzet :


At the moment, I have no plan to update Fauxton for 3.1.1.

The blog is controlled by the CouchDB PMC. No one outside of the PMC or
who they authorize has access to it.

If you don't want to display it, don't click on it, and the iframe won't
load.

-Joan

On 2020-08-26 11:57 a.m., ermouth wrote:

Is that very unsafe PR
https://github.com/apache/couchdb-fauxton/pull/1284 going
to be included into 3.1.1?

If it will, who exactly controls the wordpress site with those “news”?

ermouth


вт, 25 авг. 2020 г. в 23:45, Joan Touzet :


Hello there,

I have time to get together a 3.1.1 release now. If you have any
pressing things to get into 3.x, or anything that's on master that
should be backported, please open your PRs now.

-Joan "Labor Day! Schools are out and pools are open!" Touzet









Re: Couch DB 2.3.1 view os_process_error issue

2020-08-25 Thread Joan Touzet
Sounds like your couchjs process won't run correctly. When you run the 
couchjs binary by hand, it should give an error - perhaps a missing 
shared library? or you used to customize your query server and the 
config changed in 2.3.x:


https://docs.couchdb.org/en/stable/whatsnew/2.3.html

Read the first point under Upgrade Notes.
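
A quick way to reproduce this outside CouchDB (paths assume a default
package install; adjust for your layout):

```
# run the query server by hand; a missing shared library shows up as a
# loader error, while a healthy couchjs just waits on stdin (Ctrl-D to exit)
/opt/couchdb/bin/couchjs /opt/couchdb/share/server/main.js

# check for unresolved shared libraries
ldd /opt/couchdb/bin/couchjs | grep "not found"
```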

On 25/08/2020 03:28, Tilak Raj wrote:

Hello Couch DB team

We have a new couch db 2.3.1 install and have configured active replication 
from another couch db 2.1.1
But, when we open any of the existing views - we get : OS Process Error 
<0.13566.984> :: {os_process_error,{exit_status,1}}
Any existing views in the 2.1.1 don't time out
Existing views in 2.3.1 are consistently timing out with above error

Thanks.



Re: Per Doc Access

2020-08-21 Thread Joan Touzet

Comments, in no particular order:

* I like that it's opt-in on a per-database level to create and maintain
  the additional indexes.

* I like that this is an MVP for the feature, one that will get more
  advanced over time.

* I guess we are putting off using maps (vs. records) until 4.x at the
  earliest?

* There's a whole lot of feedback on the RFC from the IBM core team that
  needs to get addressed before that can be merged. Most of it is
  structural, such as Garren's comments, but there are some questions
  from Mike Rhodes to which I haven't seen Jan reply yet. I don't know
  if the PR addresses those or not.

* Obviously Jan needs help on point 9, which I'll start investigating
  later today (after mid-day errands)

* If this is intended to replace db-per-user, we should immediately
  file the deprecation notice on that and prepare to remove it entirely
  in 4.x.

Great to see this move forward!

-Joan "goin' to the bank like an adult" Touzet

On 21/08/2020 08:20, Jan Lehnardt wrote:

Hi all, I‘d like to once again solicit feedback from the core team about my PR 
for per doc access control.

I know we all have a lot to do, but it’d be great to get some pointers on this, 
so I can gauge how much work it‘ll be to take over the finish line.

If it helps any, I‘d be happy to set up a video call to walk folks through the 
main parts.

I understand that a lot of Cloudant folks are focused on 4.x, but when we last 
talked, we deemed this feature important enough for 3.x, so I built that first. 
The experience from building this suggests to me that’s 4.x port should be 
fairly straightforward, and that that port should even make it easy to add the 
much desired addition of group sharing.

I’m equally happy to take silence as approval, in which case all I ask for is a 
thumbs up, at which point, I‘ll plow through the remaining todos and get this 
out asap.

Best
Jan
—


On 3. Aug 2020, at 17:29, Jan Lehnardt  wrote:

*bump* Hey all, it’d be great to get at least some cursory feedback on this.

Best
Jan
—


On 26. Jul 2020, at 20:28, Jan Lehnardt  wrote:

Hey all,
I’m happy to present the first PR worth sharing for introducing per-doc-access 
control to the 3.x codebase.
https://github.com/apache/couchdb/pull/3038
There are few odds and ends left to do, but this is in good enough shape to get 
wider review on approach and implementation so far.
My hope would be to include this in a future 3.2.0 release before embarking on 
reimplementing this for 4.x, which should be considerably simpler.
The PR and linked resources have most of the information relevant to this.
Please review, test and critique heavily, and let me know any questions you 
might have.
This concludes a couple of weeks' worth of effort spread across multiple years. 
It all started with the developer summit in Boston and Adam’s initial 
presentation of this design. I hope this does it justice.
Best
Jan
—
Professional Support for Apache CouchDB:
https://neighbourhood.ie/couchdb-support/




Re: [jira] [Commented] (COUCHDB-3384) EUnit: couch_replicator_compact_tests failure

2020-08-20 Thread Joan Touzet
I have opened https://issues.apache.org/jira/browse/INFRA-20743 on this 
JIRA spam we've been getting.


On 20/08/2020 18:03, Kelli williams (Jira) wrote:


 [ 
https://issues.apache.org/jira/browse/COUCHDB-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17181470#comment-17181470
 ]

Kelli williams commented on COUCHDB-3384:
-

Commit e8b2c74f81765619692ca716ae971d0f7e98 in couchdb's branch 
refs/heads/master from Jan Lukavský
[ https://gitbox.apache.org/repos/asf?p=couchdb.git;h=e8b2c74 ]

chore: increase timeout for pausing writer COUCHDB-3384

  


[best paintball mask for 
glasses|https://bestpaintballmask.net/best-paintball-mask-for-glasses/]

[social media marketing 
nyc|https://www.amraandelma.com/social-media-agency-nyc/]


  


EUnit: couch_replicator_compact_tests failure
-

 Key: COUCHDB-3384
 URL: https://issues.apache.org/jira/browse/COUCHDB-3384
 Project: CouchDB
  Issue Type: Test
  Components: Test Suite
Reporter: Joan Touzet
Priority: Major

One instance so far. Debian 8, Erlang 18.3.
{noformat}
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
 remote -> local
   couch_replicator_compact_tests:90: should_run_replication...[0.008 s] ok
   couch_replicator_compact_tests:81: should_all_processes_be_alive...ok
   couch_replicator_compact_tests:141: 
should_populate_and_compact...*failed*
in function couch_replicator_compact_tests:pause_writer/1 
(test/couch_replicator_compact_tests.erl, line 343)
in call from 
couch_replicator_compact_tests:'-should_populate_and_compact/5-fun-8-'/6 
(test/couch_replicator_compact_tests.erl, line 176)
in call from lists:foreach/2 (lists.erl, line 1337)
in call from 
couch_replicator_compact_tests:'-should_populate_and_compact/5-fun-9-'/5 
(test/couch_replicator_compact_tests.erl, line 144)
**error:{assertion_failed,[{module,couch_replicator_compact_tests},
{line,345},
{reason,"Failed to pause source database writer"}]}
   output:<<"">>
   couch_replicator_compact_tests:188: should_wait_target_in_sync...ok
   couch_replicator_compact_tests:93: 
should_ensure_replication_still_running...ok
   couch_replicator_compact_tests:135: should_cancel_replication...[0.001 
s] ok
   couch_replicator_compact_tests:221: should_compare_databases...[0.299 s] 
ok
   [done in 10.740 s]
{noformat}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)



Re: New Credentials for Github jobs

2020-08-14 Thread Joan Touzet

Could we get these on the ci-couchdb server for testing? Thanks.

-Joan

On 2020-08-14 3:37 a.m., Gavin McDonald wrote:

Hi All,

For those of you waiting for the 'asf-ci' credentials - this is still not
resolved yet, and is waiting
for Cloudbees support.

However - I have created some new credentials, based off of a GH App rather
than a role account.

Look for credentials 'ASF Cloudbees Jenkins ci-builds' and give that a try
in your jobs please and see if that works for you. Let me know how it goes
for you.



Re: Failed: CouchDB (6746fa47)

2020-08-13 Thread Joan Touzet

Thanks, Jan.

The build seems to have failed due to some sort of timeout not under our 
control.


I logged in and forced a manual rebuild, and it completed successfully.

-Joan "¯\_(ツ)_/¯" Touzet

On 13/08/2020 05:06, Jan Lehnardt wrote:

there were a few more of this in the moderation queue, I let this one through 
as an example.

Best
Jan
—


On 12. Aug 2020, at 03:54, Read the Docs  wrote:


Build Failed for CouchDB (master)



You can find out more about this failure here:
https://readthedocs.org/projects/couchdb/builds/11643833/

If you have questions, a good place to start is the FAQ:
https://docs.readthedocs.io/page/faq.html



Keep documenting,
Read the Docs
--
http://readthedocs.org




Jenkins upgrade

2020-08-02 Thread Joan Touzet

Hey everyone,

Infra upgraded our Jenkins instance today. With this came a required 
agent version update.


While our ARM and macos builders auto-updated (the Jenkins master can 
ssh in to that node), all other machines needed their services 
restarted. (Our runit run command always pulls down the latest jar, so 
restarting the service is all that's necessary.)
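
For reference, on a runit-managed node that restart is a one-liner (the
service name here is illustrative):

```
sudo sv restart couchdb-jenkins-agent
```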


For those "blessed" to do this, the IBM nodes can be reached via ssh, 
agent forwarding and the 
https://github.com/apache/couchdb-infra-cm/blob/main/ssh.cfg . The 
FreeBSD nodes, only I have access to, and have restarted them.


-Joan "keep those plates spinning" Touzet


Re: [ci-builds] GitHub credentials

2020-07-29 Thread Joan Touzet
Infra hasn't approved these in the past. If that policy changes, I'd 
very much like to know about it.


For CouchDB we use a token on my account that I added for this purpose, 
limited to Apache repos only. Of course, these API calls count towards 
my personal limit, which affects other GitHub work that I do outside of 
the ASF.


-joan

On 29/07/2020 10:18, Andor Molnar wrote:

I’ve created a dummy Github user for ZooKeeper, it works fine in terms of 
branch scanning, but it doesn’t have permissions to update the Github Build 
status at the end of each build.

I think I should add it to the project as contributor/member, but not sure how 
to do that.

Please advise.

Andor




On 2020. Jul 27., at 13:24, Andor Molnar  wrote:

I’m interested in this one too.
Currently I’m using regular ‘Git’ source instead of GitHub to speed up branch 
discovery, but this way I cannot run builds against Pull Requests.

Andor




On 2020. Jul 23., at 21:38, Zoran Regvart  wrote:

Hi builders,
I see some questions on this but not much conclusion currently.

What credentials should we use with GitHub SCM source? Should we use
personal access tokens, or will there be an INFRA provided
Jenkins-wide credential we can use (like a GitHub App[1])?

We're now at the mercy of the GitHub API limits and as more projects
are migrated I expect that to have a big impact.

zoran

[1] 
https://docs.cloudbees.com/docs/cloudbees-ci/latest/cloud-admin-guide/github-app-auth
--
Zoran Regvart






Do we publish a CouchDB+Clouseau docker?

2020-07-28 Thread Joan Touzet

Hi there,

Recently IBM donated their CouchDB+Clouseau (in RedHat UBI form) docker 
container to the apache/couchdb-docker repository. One of their 
customers, Grapevine AI, is asking if we can release this and publish it 
under the apache/couchdb Docker Hub location.


As I said in the merge request here : 
https://github.com/apache/couchdb-docker/pull/187



We haven't had the time to review any licensing and operational considerations 
to publish this image. Previously, licensing considerations around Java and the 
runtime were the main reason we didn't ship this in our binary downloads. 
Docker is a little different in that we're only putting together a recipe, and 
a 3rd party builds the binaries, but it deserves discussion.

Given that Cloudant haven't donated the Clouseau code to Apache, and it's my 
understanding that this code isn't likely to be maintained going forward beyond 
bare-minimum effort to keep it running, I'm reluctant to slap the Apache name 
on it and mark this a supported image from our perspective - though it's been 
stable for a while. Perhaps @rnewson can comment further here, as the main 
progenitor of the code in question. I will bring this up on our development 
mailing list, where the decision needs to be made.

Of course, nothing's stopping you from building and publishing the image yourself for 
your own needs today. Keep in mind you may not label it as "Apache CouchDB" or 
advertise it as such, as this is trademarked by the Foundation and under our control.


I'd like to see what the (binding, voting) committers think about this 
issue before acting, as I can't decide simply what's in the best 
interest of the project on this one. Please speak up.


-Joan "groggy" Touzet


Re: CI errors, host key update

2020-07-26 Thread Joan Touzet

Fixed.

For future reference, these ASF VMs don't allow home directory
~/.ssh/id_rsa.pub keys. You have to have root and stick the authorized 
keys in at /etc/ssh/ssh_keys/<username>.pub .


root@couchdb-vm:/etc/ssh/ssh_keys# ls -la
total 24
drwxr-xr-x 2 rootroot 4096 Jul 26 20:57 .
drwxr-xr-x 5 rootroot 4096 Jul  1 14:43 ..
-rw-r- 1 asf999  root 3494 Jul  1 14:43 asf999.pub
-rw--- 1 backup  root  677 Jul  1 14:43 backup.pub
-rw-r--r-- 1 jenkins root  757 Jul 26 20:57 jenkins.pub

That last file (copied from the old VM) fixed the problem.

On 2020-07-26 4:34 p.m., Joan Touzet wrote:
Ah, that's a missing pubkey in the jenkins user on repo-nightly, not 
that the hostkey changed. I thought I got that, but apparently not. I'll 
fix it today.


-Joan

On 2020-07-26 7:05 a.m., Jan Lehnardt wrote:

Hey all,

we are getting CI fails on master because the host key for 
repo-nightly changed.


I’m not sure how to fix this:


https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FFullPlatformMatrix/detail/master/209/pipeline 




[2020-07-26T10:55:22.484Z] + rsync -avz -e ssh -o 
StrictHostKeyChecking=no -i  
@repo-nightly.couchdb.org:/var/www/html/master .


[2020-07-26T10:55:28.535Z] Warning: Permanently added 
'repo-nightly.couchdb.org,209.188.14.151' (ECDSA) to the list of known 
hosts.


[2020-07-26T10:55:28.880Z] @repo-nightly.couchdb.org: Permission 
denied (publickey).


[2020-07-26T10:55:28.880Z] rsync: connection unexpectedly closed (0 
bytes received so far) [Receiver]


[2020-07-26T10:55:28.880Z] rsync error: unexplained error (code 255) 
at io.c(235) [Receiver=3.1.3]


[2020-07-26T10:55:28.880Z] + mkdir -p master

[2020-07-26T10:55:28.880Z] + rm -rf master/debian/* master/el6/* 
master/el7/* master/el8/*


[2020-07-26T10:55:28.880Z] + mkdir -p master/debian master/el6 
master/el7 master/el8 master/source


[2020-07-26T10:55:28.880Z] + rsync -avz -e ssh -o 
StrictHostKeyChecking=no -i  
@repo-nightly.couchdb.org:/var/www/html/js .


[2020-07-26T10:55:34.924Z] @repo-nightly.couchdb.org: Permission 
denied (publickey).


[2020-07-26T10:55:34.924Z] rsync: connection unexpectedly closed (0 
bytes received so far) [Receiver]


[2020-07-26T10:55:34.924Z] rsync error: unexplained error (code 255) 
at io.c(235) [Receiver=3.1.3]


script returned exit code 255

Best
Jan



Re: CI errors, host key update

2020-07-26 Thread Joan Touzet
Ah, that's a missing pubkey in the jenkins user on repo-nightly, not 
that the hostkey changed. I thought I got that, but apparently not. I'll 
fix it today.


-Joan

On 2020-07-26 7:05 a.m., Jan Lehnardt wrote:

Hey all,

we are getting CI fails on master because the host key for repo-nightly changed.

I’m not sure how to fix this:


https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FFullPlatformMatrix/detail/master/209/pipeline


[2020-07-26T10:55:22.484Z] + rsync -avz -e ssh -o StrictHostKeyChecking=no -i 
 @repo-nightly.couchdb.org:/var/www/html/master .

[2020-07-26T10:55:28.535Z] Warning: Permanently added 
'repo-nightly.couchdb.org,209.188.14.151' (ECDSA) to the list of known hosts.

[2020-07-26T10:55:28.880Z] @repo-nightly.couchdb.org: Permission denied 
(publickey).

[2020-07-26T10:55:28.880Z] rsync: connection unexpectedly closed (0 bytes 
received so far) [Receiver]

[2020-07-26T10:55:28.880Z] rsync error: unexplained error (code 255) at 
io.c(235) [Receiver=3.1.3]

[2020-07-26T10:55:28.880Z] + mkdir -p master

[2020-07-26T10:55:28.880Z] + rm -rf master/debian/* master/el6/* master/el7/* 
master/el8/*

[2020-07-26T10:55:28.880Z] + mkdir -p master/debian master/el6 master/el7 
master/el8 master/source

[2020-07-26T10:55:28.880Z] + rsync -avz -e ssh -o StrictHostKeyChecking=no -i 
 @repo-nightly.couchdb.org:/var/www/html/js .

[2020-07-26T10:55:34.924Z] @repo-nightly.couchdb.org: Permission denied 
(publickey).

[2020-07-26T10:55:34.924Z] rsync: connection unexpectedly closed (0 bytes 
received so far) [Receiver]

[2020-07-26T10:55:34.924Z] rsync error: unexplained error (code 255) at 
io.c(235) [Receiver=3.1.3]

script returned exit code 255

Best
Jan



Re: [ACTION REQUIRED] Review .asf.yaml changes to apache/couchdb repo

2020-07-22 Thread Joan Touzet

We already have the behaviour of line 32. The line:

  pullrequests: notificati...@couchdb.apache.org

which is not commented out, is the same as saying:

  pullrequests_status: notificati...@couchdb.apache.org
  pullrequests_comment: notificati...@couchdb.apache.org

If people don't want notification of new/closed PRs on dev@, that's 
fine, I just thought it might increase visibility.


-Joan

On 2020-07-22 5:17 p.m., Alessio 'Blaster' Biancalana wrote:

Maybe line 32 would be fine.

Having PR status notifications streamed on dev would be noisy I think.

Thanks for bringing this to the table!

Alessio

On Wed, Jul 22, 2020 at 6:45 PM Joan Touzet  wrote:


We now have a plurality of +1s so I'm going to merge this.

On 21/07/2020 18:12, Joan Touzet wrote:

Please see lines 29-32 of the file. We can also make this change if we
want - which would send new/closed PR notifications to dev@, while
sending all comments/etc to notifications@ as we do today.


Just a note that while this didn't happen, I'm +1 if someone else wants
to file the PR to uncomment those (and comment out line 28.)

-Joan





Re: [ACTION REQUIRED] Review .asf.yaml changes to apache/couchdb repo

2020-07-22 Thread Joan Touzet

We now have a plurality of +1s so I'm going to merge this.

On 21/07/2020 18:12, Joan Touzet wrote:
Please see lines 29-32 of the file. We can also make this change if we 
want - which would send new/closed PR notifications to dev@, while 
sending all comments/etc to notifications@ as we do today.


Just a note that while this didn't happen, I'm +1 if someone else wants 
to file the PR to uncomment those (and comment out line 28.)


-Joan


[ACTION REQUIRED] Review .asf.yaml changes to apache/couchdb repo

2020-07-21 Thread Joan Touzet

Committers, please review this PR and provide your comments:

   https://github.com/apache/couchdb/pull/3020

The text of the PR is as follows:


This introduces the `.asf.yaml` file, which gives us direct control over GitHub 
features we've previously had to ask ASF Infra to maintain for us.

For now, this PR introduces only one specific change: it disables the ability to directly merge a 
PR with a branch. Instead, only the "squash" and "rebase" options will be 
allowed.

Because of this change, I am requesting 3 +1s from committers before I'll merge 
this change - though only a lazy majority is required.

Please see lines 29-32 of the file. We can also make this change if we want - 
which would send new/closed PR notifications to dev@, while sending all 
comments/etc to notifications@ as we do today.


-Joan


CouchDB Apache VM update

2020-07-20 Thread Joan Touzet

Hey y'all,

Infra contacted me recently over our couchdb-vm2.apache.org machine, 
which needed to be deprecated and replaced with a new vm at a new 
datacentre. That work is now done.


The new host is couchdb-vm.apache.org, and it runs Ubuntu 20.04. We have 
2 CNAMEs (aliases) to this host:


  repo-nightly.couchdb.org -- this is the same as always, but powered
  purely by the nginx fancyindex module,
  so no server-side JS/PHP/etc

  logs.couchdb.org -- this takes over for couchdb-vm2.a.o in our CI
  build process. Same CouchDB service, now running
  3.1.0. Go here if your build fails and you need
  detailed logs from your test run.

PMC members can request ssh & sudo access to this machine through an 
Infra ticket if desired, for instance to add your own CouchDB admin

user.

Cheers,
Joan "new vm, same as the old vm" Touzet


Re: [DISCUSS] couchdb 4.0 transactional semantics

2020-07-16 Thread Joan Touzet




On 2020-07-16 4:50 p.m., Joan Touzet wrote:



On 2020-07-16 2:24 p.m., Robert Samuel Newson wrote:


Agreed on all 4 points. On the final point, it's worth noting that a 
continuous changes feed was two-phase, the first is indeed over a 
snapshot of the db as of the start of the _changes request, the second 
phase is an endless series of subsequent snapshots. the 4.0 behaviour 
won't exactly match that but it's definitely in the same spirit.


Agreed also on requiring pagination (I've not reviewed the proposed 
pagination api in sufficient detail to +1 it yet). Would we start the 
response as rows are retrieved, though? That's my preference, with an 
unclean termination if we hit txn_too_old, and an upper bound on the 
"limit" parameter or equivalent chosen such that txn_too_old is 
vanishingly unlikely.


On compatibility, there's precedent for a minor release of old 
branches just to add replicator compatibility. for example, the 
replicator could call _changes again if it received a complete 
_changes response (i.e, one that ended with a } that completes the 
json object) that did not include a "last_seq" row. The 4.0 replicator 
would always do this.


I wouldn't really want to release a new 1.x, would you? Augh.

If we're going to change how replication works, wouldn't it be better to 
simply say "there is no guaranteed one-shot replication back from 4.x to 
1.x?" Or, intentionally break backward compatibility so one-shot 
replication to un-upgraded old Couches refuses to work at all? This 
would prevent the confusion by making it clear - you can't do things 
this way anymore.


Sorry, meant to say we publish that the workaround is you need either a 
"push" replication from 4.x -> 1.x, or must use a hypothetically patched 
3.x+ replicator as a "third party" to replicate successfully from 4.x -> 
non-patched older CouchDBs.
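
In practice that push is a plain `_replicate` call made against the newer
server (hosts, credentials and database names illustrative, and assuming
`_replicate` remains available in 4.x):

```
curl -X POST http://admin:password@couch4x:5984/_replicate \
  -H 'Content-Type: application/json' \
  -d '{"source": "http://admin:password@couch4x:5984/mydb",
       "target": "http://admin:password@couch1x:5984/mydb"}'
```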


I'd rather support this scenario than have to support explaining why the 
"one shot" replication back to an old 1.x, when initiated by a 1.x 
cluster, is returning results "ahead" of the time at which the one-shot 
replication was started.




We could do a point release of 3.x, sure.

-Joan



B.

On 16 Jul 2020, at 17:25, Paul Davis  
wrote:


 From what I'm reading it sounds like we have general consensus on a 
few things:


1. A single CouchDB API call should map to a single FDB transaction
2. We absolutely do not want to return a valid JSON response to any
streaming API that hit a transaction boundary (because data
loss/corruption)
3. We're willing to change the API requirements so that 2 is not an 
issue.

4. None of this applies to continuous changes since that API call was
never a single snapshot.

If everyone generally agrees with that summarization, my suggestion
would be that we just revisit the new pagination APIs and make them
the only behavior rather than having them be opt-in. I believe those
APIs already address all the concerns in this thread and the only
reason we kept the older versions with `restart_tx` was to maintain
API backwards compatibility at the expense of a slight change to
semantics of snapshots. However, if there's a consensus that the
semantics are more important than allowing a blanket `GET
/db/_all_docs` I think it'd make the most sense to just embrace the
pagination APIs that already exist and were written to cover these
issues.

The only thing I'm not 100% on is how to deal with non-continuous
replications. I.e., the older single shot replication. Do we go back
with patches to older replicators to allow 4.0 compatibility? Just
declare that you have to mediate a replication on the newer of the two
CouchDB deployments? Sniff the replicator's UserAgent and behave
differently on 4.x for just that special case?

Paul

On Wed, Jul 15, 2020 at 7:25 PM Adam Kocoloski  
wrote:


Sorry, I also missed that you quoted this specific bit about eagerly 
requesting a new snapshot. Currently the code will just react to the 
transaction expiring, then wait till it acquires a new snapshot if 
“restart_tx” is set (which can take a couple of milliseconds on a 
FoundationDB cluster that is deployed across multiple AZs in a cloud 
Region) and then proceed.


Adam

On Jul 15, 2020, at 6:54 PM, Adam Kocoloski  
wrote:


Right now the code has an internal “restart_tx” flag that is used 
to automatically request a new snapshot if the original one expires 
and continue streaming the response. It can be used for all manner 
of multi-row responses, not just _changes.


As this is a pretty big change to the isolation guarantees provided 
by the database Bob volunteered to elevate the issue to the mailing 
list for a deeper discussion.


Cheers, Adam


On Jul 15, 2020, at 11:38 AM, Joan Touzet  wrote:

I'm having trouble following the thread...

On 14/07/2020 14:56, Adam Kocoloski wrote:
For cases where you’re not concerned about the snapshot isolation 
(e.g. streaming an entire _changes feed

Re: [DISCUSS] couchdb 4.0 transactional semantics

2020-07-16 Thread Joan Touzet




On 2020-07-16 2:24 p.m., Robert Samuel Newson wrote:


Agreed on all 4 points. On the final point, it's worth noting that a continuous 
changes feed was two-phase, the first is indeed over a snapshot of the db as of 
the start of the _changes request, the second phase is an endless series of 
subsequent snapshots. the 4.0 behaviour won't exactly match that but it's 
definitely in the same spirit.

Agreed also on requiring pagination (I've not reviewed the proposed pagination api in 
sufficient detail to +1 it yet). Would we start the response as rows are retrieved, 
though? That's my preference, with an unclean termination if we hit txn_too_old, and an 
upper bound on the "limit" parameter or equivalent chosen such that txn_too_old 
is vanishingly unlikely.

On compatibility, there's precedent for a minor release of old branches just to add 
replicator compatibility. for example, the replicator could call _changes again if it 
received a complete _changes response (i.e, one that ended with a } that completes the 
json object) that did not include a "last_seq" row. The 4.0 replicator would 
always do this.


I wouldn't really want to release a new 1.x, would you? Augh.

If we're going to change how replication works, wouldn't it be better to 
simply say "there is no guaranteed one-shot replication back from 4.x to 
1.x?" Or, intentionally break backward compatibility so one-shot 
replication to un-upgraded old Couches refuses to work at all? This 
would prevent the confusion by making it clear - you can't do things 
this way anymore.


We could do a point release of 3.x, sure.

-Joan



B.


On 16 Jul 2020, at 17:25, Paul Davis  wrote:

 From what I'm reading it sounds like we have general consensus on a few things:

1. A single CouchDB API call should map to a single FDB transaction
2. We absolutely do not want to return a valid JSON response to any
streaming API that hit a transaction boundary (because data
loss/corruption)
3. We're willing to change the API requirements so that 2 is not an issue.
4. None of this applies to continuous changes since that API call was
never a single snapshot.

If everyone generally agrees with that summarization, my suggestion
would be that we just revisit the new pagination APIs and make them
the only behavior rather than having them be opt-in. I believe those
APIs already address all the concerns in this thread and the only
reason we kept the older versions with `restart_tx` was to maintain
API backwards compatibility at the expense of a slight change to
semantics of snapshots. However, if there's a consensus that the
semantics are more important than allowing a blanket `GET
/db/_all_docs` I think it'd make the most sense to just embrace the
pagination APIs that already exist and were written to cover these
issues.

The only thing I'm not 100% on is how to deal with non-continuous
replications. I.e., the older single shot replication. Do we go back
with patches to older replicators to allow 4.0 compatibility? Just
declare that you have to mediate a replication on the newer of the two
CouchDB deployments? Sniff the replicator's UserAgent and behave
differently on 4.x for just that special case?

Paul

On Wed, Jul 15, 2020 at 7:25 PM Adam Kocoloski  wrote:


Sorry, I also missed that you quoted this specific bit about eagerly requesting 
a new snapshot. Currently the code will just react to the transaction expiring, 
then wait till it acquires a new snapshot if “restart_tx” is set (which can 
take a couple of milliseconds on a FoundationDB cluster that is deployed across 
multiple AZs in a cloud Region) and then proceed.

Adam


On Jul 15, 2020, at 6:54 PM, Adam Kocoloski  wrote:

Right now the code has an internal “restart_tx” flag that is used to 
automatically request a new snapshot if the original one expires and continue 
streaming the response. It can be used for all manner of multi-row responses, 
not just _changes.

As this is a pretty big change to the isolation guarantees provided by the 
database Bob volunteered to elevate the issue to the mailing list for a deeper 
discussion.

Cheers, Adam


On Jul 15, 2020, at 11:38 AM, Joan Touzet  wrote:

I'm having trouble following the thread...

On 14/07/2020 14:56, Adam Kocoloski wrote:

For cases where you’re not concerned about the snapshot isolation (e.g. 
streaming an entire _changes feed), there is a small performance benefit to 
requesting a new FDB transaction asynchronously before the old one actually 
times out and swapping over to it. That’s a pattern I’ve seen in other FDB 
layers but I’m not sure we’ve used it anywhere in CouchDB yet.


How does _changes work right now in the proposed 4.0 code?

-Joan








Re: [DISCUSS] couchdb 4.0 transactional semantics

2020-07-15 Thread Joan Touzet

I'm having trouble following the thread...

On 14/07/2020 14:56, Adam Kocoloski wrote:

For cases where you’re not concerned about the snapshot isolation (e.g. 
streaming an entire _changes feed), there is a small performance benefit to 
requesting a new FDB transaction asynchronously before the old one actually 
times out and swapping over to it. That’s a pattern I’ve seen in other FDB 
layers but I’m not sure we’ve used it anywhere in CouchDB yet.


How does _changes work right now in the proposed 4.0 code?

-Joan


Re: alarm_handler doc

2020-07-13 Thread Joan Touzet
This is coming from the Erlang VM and telling you that you're nearly out 
of available memory. CouchDB doesn't react well to running out of RAM; 
it usually crashes.


While this warning will be suppressed in future versions of CouchDB, you 
should probably check that you have enough RAM in your CouchDB 
server/container/VM/etc.
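
A couple of quick ways to check (hostname and credentials are illustrative):

```
# OS-level view of free/available RAM
free -m

# Erlang-level stats for the local node; the response includes a "memory" section
curl -s http://admin:password@127.0.0.1:5984/_node/_local/_system
```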


On 2020-07-13 6:23 p.m., Arturo Mardones wrote:

Hello at All!

I'm getting this message very often

[info] 2020-07-13T21:19:09.240457Z couchdb@127.0.0.1 <0.56.0> 
alarm_handler: {set,{system_memory_high_wa
termark,[]}}

I've reviewed some older mails which mention that it is not important, and
even that it may be related to the client browser cache?

Can anyone give me some link or shed some light on whether I can really
discard this message, and what it really means?

Thanks!!!

Arturo.



Re: Is this mailing list obsolete now?

2020-07-12 Thread Joan Touzet

Hi Kiril,

On 12/07/2020 15:43, Kiril Stankov wrote:

I see that some topics on the list are not in the github discussions and
vice versa?

Shall we all consider the mailing list obsolete and move to github?


Not at all. We're currently in early, closed beta testing of the GitHub 
Discussions functionality. So far, we're happy with it, but there's no 
telling what will happen.


It's been pretty popular, though, as you can see.

Is there a way to get summaries from github or other kind of
notifications by email?


The intent is that, once GitHub releases webhook functionality for 
Discussions, we'll have at least unidirectional integration of GH 
Discussion posts mirrored to this mailing list. They've indicated this 
won't happen prior to the public beta launch, and possibly not before 
the feature is out of beta.


It's unclear yet whether we'll be able to enable posting to GH 
Discussions back from replies on the mailing list, but it's something 
under review and has been requested from GitHub development and ASF Infra.


-Joan "tempus fugit" Touzet


Re: X-Content-Type-Options and strict-transport-security

2020-07-02 Thread Joan Touzet
Best option: use a reverse proxy like haproxy or nginx to inject these. 
You can also terminate SSL at this layer for better SSL support and 
performance.


-Joan
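
For illustration, a minimal sketch of such a reverse proxy (certificate
paths, hostnames and header values are placeholders, not a vetted
production config):

```
cat > /etc/nginx/conf.d/couchdb.conf <<'EOF'
server {
    listen 443 ssl;
    server_name couch.example.com;
    ssl_certificate     /etc/ssl/certs/couch.example.com.crt;
    ssl_certificate_key /etc/ssl/private/couch.example.com.key;

    # inject the security headers CouchDB itself does not set
    add_header X-Content-Type-Options nosniff always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    location / {
        proxy_pass http://127.0.0.1:5984;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
nginx -t && nginx -s reload
```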

On 02/07/2020 05:01, Mody, Darshan Arvindkumar (Darshan) wrote:

Hi

In our project we would like to set the headers X-Content-Type-Options and 
strict-transport-security whenever CouchDB responds to a request.

How can we set the headers?

Thanks in advance

Regards
Darshan



Re: [DISCUSS] New Reduce design for FDB

2020-06-24 Thread Joan Touzet




On 2020-06-24 1:32 p.m., Garren Smith wrote:

On Wed, Jun 24, 2020 at 6:47 PM Joan Touzet  wrote:


Hi Garren,

If the "options" field is left out, what is the default behaviour?



All group_levels will be indexed. I imagine this is what most CouchDB users
will want.


Great!





Is there no way to specify multiple group_levels to get results that
match the original CouchDB behaviour? Your changed behaviour would be
acceptable if I could do something like `?group_level=2,3,4,5`.



I imagine we could, it would make the code a lot more complex. What is the
reason for that?
I find the fact that we return multiple group_levels for a set group_level
very confusing. To me it feels like
the reason we return extra group_levels is because of how b-trees work
rather than it being a useful thing for a user.


This is the canonical example (and the previous 2-3 slides)

https://speakerdeck.com/wohali/10-common-misconceptions-about-apache-couchdb?slide=25

There are ways to do this with your approach, but they'll require retooling.





-Joan

On 24/06/2020 08:03, Garren Smith wrote:

Quick Note I have a gist markdown version of this that might be easier to
read

https://gist.github.com/garrensmith/1ad1176e007af9c389301b1b6b00f180


Hi Everyone,

The team at Cloudant have been relooking at Reduce indexes for CouchDB on
FDB and we want to simplify what we had initially planned and change some

of

the reduce behaviour compared to CouchDB 3.x

Our initial design was to use a skip list. However this hasn’t proven to

be

particularly useful approach. It would take very long to update and I

can’t

find a good algorithm to query the skip list effectively.

So instead I would like to propose a much simpler reduce implementation.

I

would like to use this as the base for reduce and we can look at adding
more functionality later if we need to.

For the new reduce design, instead of creating a skip list, we will

instead

create group_level indexes for a key. For example say we have the

following

keys we want to add to a reduce index:

```
([2019, 6, 1] , 1)
([2019, 6, 20] , 1)
([2019, 7, 3] , 1)
```

We would then create the following group_level indexes:

```
Level 0:
(null, 3)

Level=1:
([2019], 3)

Level 2:
([2019,6], 2)
([2019, 7] , 1)

Level3:
([2019, 6, 1,] , 1)
([2019, 6, 20,] , 1)
([2019, 7, 3,] , 1)
```

All of these group_level indexes would form part of the reduce index and
would be updated at the same time. We don’t need to know the actual
`group_levels` ahead of time as we would take any key we need to index

look

at its length and add it to the group_levels it would belong to.

Another nice optimization we can do with this is when a user creates a

view

they can specify the number of group levels to index e.g:

```
{
  _id: _design/my-ddoc
  views: {
    one: {
      map: function (doc) {emit(doc.val, 1)},
      reduce: "_sum"
    },

    two: {
      map: function (doc) {emit(doc.age, 1)},
      reduce: "_count"
    }
  },

  options: {group_levels: [1,3,5]}
}
```
This gives the user the ability to trade off index build speed, storage
overhead and performance.

One caveat of that, for now, is if a user changes the number of
`group_levels` to be indexed, the index is invalidated and we would have

to

build it from scratch again. Later we could look at doing some work

around

that so that isn’t the case.

This design will result in a behaviour change. Previously with reduce, if
you set `group_level=2`, it would return all results with `group_level=2`
and below. E.g. reduce key/values of the following:

```
# group = true
("key":1,"value":2},
{"key":2,"value":2},
{"key":3,"value":2},
{"key":[1,1],"value":1},
{"key":[1,2,6],"value":1},
{"key":[2,1],"value":1},
{"key":[2,3,6],"value":1},
{"key":[3,1],"value":1},
{"key":[3,1,5],"value":1},
{"key":[3,4,5],"value":1}
```

Then doing a query group_level=2 returns:

```
# group_level = 2
{"rows":[
{"key":1,"value":2},
{"key":2,"value":2},
{"key":3,"value":2},
{"key":[1,1],"value":1},
{"key":[1,2],"value":1},
{"key":[2,1],"value":1},
{"key":[2,3],"value":1},
{"key":[3,1],"value":2},
{"key":[3,4],"value":1}
]}
```

I want to **CHANGE** this behaviour, so if a query specifies
`group_level=2` then **only** `group_level=2` returns would be returned.
E.g from the example above the results would be:

```
# group_level = 2
{"rows":[
{"key":[1,1],"value":1},
{"key":[1,2],"value":1},
{"key":[2,1],"value":1},
{"key":[2,3],"value":1},
{&

Re: [DISCUSS] New Reduce design for FDB

2020-06-24 Thread Joan Touzet

Hi Garren,

If the "options" field is left out, what is the default behaviour?

Is there no way to specify multiple group_levels to get results that 
match the original CouchDB behaviour? Your changed behaviour would be 
acceptable if I could do something like `?group_level=2,3,4,5`.


-Joan

On 24/06/2020 08:03, Garren Smith wrote:

Quick Note I have a gist markdown version of this that might be easier to
read https://gist.github.com/garrensmith/1ad1176e007af9c389301b1b6b00f180

Hi Everyone,

The team at Cloudant have been relooking at Reduce indexes for CouchDB on
FDB and we want to simplify what we had initially planned and change some of
the reduce behaviour compared to CouchDB 3.x

Our initial design was to use a skip list. However, this hasn’t proven to be a
particularly useful approach. It would take very long to update, and I can’t
find a good algorithm to query the skip list effectively.

So instead I would like to propose a much simpler reduce implementation. I
would like to use this as the base for reduce and we can look at adding
more functionality later if we need to.

For the new reduce design, instead of creating a skip list, we will instead
create group_level indexes for a key. For example say we have the following
keys we want to add to a reduce index:

```
([2019, 6, 1] , 1)
([2019, 6, 20] , 1)
([2019, 7, 3] , 1)
```

We would then create the following group_level indexes:

```
Level 0:
(null, 3)

Level=1:
([2019], 3)

Level 2:
([2019,6], 2)
([2019, 7] , 1)

Level3:
([2019, 6, 1,] , 1)
([2019, 6, 20,] , 1)
([2019, 7, 3,] , 1)
```

All of these group_level indexes would form part of the reduce index and
would be updated at the same time. We don’t need to know the actual
`group_levels` ahead of time, as we would take any key we need to index, look
at its length, and add it to the group_levels it belongs to.

Another nice optimization we can do with this is when a user creates a view
they can specify the number of group levels to index e.g:

```
{
  _id: _design/my-ddoc
  views: {
    one: {
      map: function (doc) {emit(doc.val, 1)},
      reduce: "_sum"
    },

    two: {
      map: function (doc) {emit(doc.age, 1)},
      reduce: "_count"
    }
  },

  options: {group_levels: [1,3,5]}
}
```
This gives the user the ability to trade off index build speed, storage
overhead and performance.

One caveat of that, for now, is if a user changes the number of
`group_levels` to be indexed, the index is invalidated and we would have to
build it from scratch again. Later we could look at doing some work around
that so that isn’t the case.

This design will result in a behaviour change. Previously with reduce, if
you set `group_level=2`, it would return all results with `group_level=2`
and below. E.g. reduce key/values of the following:

```
# group = true
("key":1,"value":2},
{"key":2,"value":2},
{"key":3,"value":2},
{"key":[1,1],"value":1},
{"key":[1,2,6],"value":1},
{"key":[2,1],"value":1},
{"key":[2,3,6],"value":1},
{"key":[3,1],"value":1},
{"key":[3,1,5],"value":1},
{"key":[3,4,5],"value":1}
```

Then doing a query group_level=2 returns:

```
# group_level = 2
{"rows":[
{"key":1,"value":2},
{"key":2,"value":2},
{"key":3,"value":2},
{"key":[1,1],"value":1},
{"key":[1,2],"value":1},
{"key":[2,1],"value":1},
{"key":[2,3],"value":1},
{"key":[3,1],"value":2},
{"key":[3,4],"value":1}
]}
```

I want to **CHANGE** this behaviour, so if a query specifies
`group_level=2` then **only** `group_level=2` returns would be returned.
E.g from the example above the results would be:

```
# group_level = 2
{"rows":[
{"key":[1,1],"value":1},
{"key":[1,2],"value":1},
{"key":[2,1],"value":1},
{"key":[2,3],"value":1},
{"key":[3,1],"value":2},
{"key":[3,4],"value":1}
]}
```
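
As a concrete request (host, database and design document names are
placeholders), the proposed behaviour means a query like this would return
only the level-2 rows shown above:

```
http 'http://127.0.0.1:5984/mydb/_design/stats/_view/totals?group_level=2'
```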


## Group_level=0
`Group_level=0` queries would work as follows:
1. `group_level=0` without startkey/endkey and then the group_level=0 index
is used
2. For a `group_level=0` with a startkey/endkey or where `group_level=0` is
not indexed, the query will look for the smallest `group_level` and use
that to calculate the `group_level=0` result
3. `group_level=0` indexes with a startkey/endkey could time out and be slow
in some cases because we have to do quite a lot of aggregation when
reading keys. But I don’t think that is much different from how it is done
now.

## Group=true
We will always build the `group=true` index.

## Querying non-indexed group_level
If a query has a `group_level` that is not indexed, we can do two things
here: CouchDB could use the nearest `group_level` to service the query, or
it could return an error that this `group_level` is not available to query.
I would like to make this configurable so that an admin can choose how
reduce indexes behave.

## Supported Builtin Reduces
Initially, we would support reduces that can be updated by calculating a
delta change and applying it to all the group_levels. That means we can
support `_sum` and `_count` quite easily. Initially, we won’t implement
`max` and `min`. 

Re: Controlling the images used for the builds/releases

2020-06-22 Thread Joan Touzet
Hey Jarek, thanks for starting this thread. It's a thorny issue, for 
sure, especially because binary releases are not "official" from an ASF 
perspective.


(Of course, this is a technicality; the fact that your PMC is building 
these and linking them from project pages, and/or publishing them out as 
apache/ or top-level  at Docker Hub can be seen as a 
kind of officiality. It's just, for the moment, not an Official Act of 
the Foundation for legal reasons.)


On 22/06/2020 09:52, Jarek Potiuk wrote:

Hello Everyone,

I have a kind question and request for your opinions about using external
Docker images and downloaded binaries in the official releases for Apache
Airflow.

The question is: How much can we rely on those images being available in
those particular cases:

A) during static checks
B) during unit tests
C) for building production images for Airflow
D) for releasing production Helm Chart for Airflow

Some more explanation:

For a long time we are doing A) and B) in Apache Airflow and we followed a
practice that when we found an image that is good for us and seems "legit"
we are using it. Example -
https://hub.docker.com/r/hadolint/hadolint/dockerfile/ - HadoLint image to
check our Dockerfiles.  Since this is easy to change pretty much
immediately, and only used for building/testing, I have no problem with
this, personally and I think it saves a lot of time and effort to maintain
some of those images.


Sure. Build tools can even be GPL, and something like a linter isn't a 
hard dependency for Airflow anyway. +1



But we are just about to start releasing Production Image and Helm Chart
for Apache Airflow and I started to wonder if this is still acceptable
practice when - by releasing the code - we make our users depend on those
images.


Just checking: surely a production Airflow Docker image doesn't have 
hadolint in it?



We are going to officially support both - image and helm chart by the
community and once we release the image and helm chart officially, those
external images and downloads will become dependencies to our official
"releases". We are allowing our users to use our official Dockerfile
to build a new image (with user's configuration) and Helm Chart is going to
be officially available for anyone to install Airflow.


Sounds like a good step for your project.


The Docker images that we are using are from various sources:

1) officially maintained images (Python, KinD, Postgres, MySQL for example)
2) images released by organizations that released them for their own
purpose, but they are not "officially maintained" by those organizations
3) images released by private individuals

While 1) is perfectly OK for both image and helm chart, I think for 2) and
3) we should bring the images to Airflow community management.


I agree, and would go a step further, see below.


Here is the list of those images I found that we use:

- aneeshkj/helm-unittest
- ashb/apache-rat:0.13-1
- godatadriven/krb5-kdc-server
- polinux/stress (?)
- osixia/openldap:1.2.0
- astronomerinc/ap-statsd-exporter:0.11.0
- astronomerinc/ap-pgbouncer:1.8.1
- astronomerinc/ap-pgbouncer-exporter:0.5.0-1

Some of those images are released by organizations that are strong
stakeholders in the project (Astronomer especially). Some other images are
by organizations that are still part of the community but not as strong
stakeholders (GoDataDriven) - some others are by private individuals who
are contributors (Ash, Aneesh) and some others are not-at-all connected to
Apache Airflow (polinux, osixia).

For me quite clearly - we are ok to rely on "officially" maintained images
and we are not ok to rely on images released by individuals in this case.
But there is a range of images in-between that I have no clarity about.

So my questions are:

1) Is this acceptable to have a non-officially released image as a
dependency in released code for the ASF project?


First question: Is it the *only* way you can run Airflow? Does it end up 
in the source tarball? If so, you need to review the ASF licensing 
requirements and make sure you're not in violation there. (Just Checking!)


Second: Most of these look like *testing* dependencies, not runtime 
dependencies.



2) If it's not - how do we determine which images are "officially
maintained".

3) If yes - how do we put the boundary - when is an image acceptable? Are
there any criteria we can use or constraints we can put on the
licences/organizations releasing the images we want to make dependencies
for released code of ours?


How hard would it be for the Airflow community to import the Dockerfiles 
and build the images themselves? And keep those imported forks up to 
date? We do this a lot in CouchDB for our dependencies (not just Docker) 
where it's a personal project of someone in the community, or even where 
it's some corporate thing that we want to be sure we don't break on when 
they implement a change for their own reasons.


Automating building 

Re: Getting FDB work onto master

2020-06-18 Thread Joan Touzet

Restarting this thread.

Can I get an update on where things are at?

On 31/03/2020 13:13, Paul Davis wrote:

There are a few other bits to `make check` that aren't included in
`make check-fdb`. Updating `make check` should just be a matter of
taking our test subset and applying them to `make check`.

On Tue, Mar 31, 2020 at 11:04 AM Garren Smith  wrote:


On the fdb branch we have a make check-fdb which is a subset of all the
tests that should pass. I think we should use that instead of make check

On Tue, Mar 31, 2020 at 5:34 PM Joan Touzet  wrote:


Took a bit longer than expected, but it's been tested & validated, and
then re-pushed. I'll push up the other platforms as well (except for
CentOS 8, which still has a broken Python dependency).

PR for the changes necessary was also merged to couchdb-ci.

FDB should feel free to merge to master once `make check` is working.

We can hammer out the wide matrix issues on a slower timeframe.

-Joan

On 2020-03-30 19:42, Joan Touzet wrote:

OOPS, looks like I pushed the wrong image.

I'll build the kerl-based version of the image and re-push. This will
take a couple of hours, since I have to build 3x Erlangs from source.

Good to know step 1 works! Next step for y'all: fix `make check`.

-Joan

On 2020-03-30 6:05 p.m., Robert Samuel Newson wrote:

noting that


https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FPullRequests/detail/PR-2732/7/pipeline/

now fails because kerl isn't there. related?

B.


On 30 Mar 2020, at 22:11, Robert Samuel Newson 
wrote:

Nice, make check-fdb passes for me on that branch (the 2nd time, the
1st time was a spurious failure, timing related).

B.


On 30 Mar 2020, at 22:04, Robert Samuel Newson 
wrote:

Hi,

Great timing, I merged something to prototype/fdb-layer without the
check passing, I'm trying this now.

First note, the kerl line doesn't work but it seems there's a system
wide erlang 20 install instead.

B.


On 30 Mar 2020, at 19:50, Joan Touzet  wrote:

Hi everyone, hope you're all staying at home[1].

I've just pushed out a new version of our
couchdbdev/debian-buster-erlang-all Docker image. This now includes
the fdb binaries, as well as client libraries and headers. This is
a necessary (but not sufficient) step to getting the fdb prototype
merged to master.

Can someone please test if this works correctly for them to build
and test CouchDB (with fdb)?

Here's instructions:

docker pull couchdbdev/debian-buster-erlang-all
docker run -it couchdbdev/debian-buster-erlang-all
# then, inside the image:
cd
git clone https://github.com/apache/couchdb
cd couchdb && git checkout 
. /usr/local/kerl/20.3.8.25/activate
# you still need to fix make check, but Paul says this should work:
make check-fdb

The next step would be to fix `make check`. Then, you can merge the
fdb branch to master.

CI on master will be broken after fdb merge until we get answers to
these questions: [2].

**REMEMBER**: Any 3.x fixes should land on the 3.x branch at this
point. If they're backend specific, there's no need for them to
land on master anymore.

**QUESTION**: Now that we have a new feature (JWT), it's likely the
next CouchDB release would be 3.1.0 - so, probably no need to land
more fixes on 3.0.x at this point. Does everyone agree?

-Joan "I miss restaurants" Touzet

[1]: https://www.youtube.com/watch?v=rORMGH0jE2I
[2]:

https://forums.foundationdb.org/t/package-download-questions/2037










Re: Is everything ok on our Jenkins cluster?

2020-06-18 Thread Joan Touzet

On 18/06/2020 13:19, Alessio 'Blaster' Biancalana wrote:

Hi Joan,
I opened the infra bug, could you please check I've done everything
correctly?

https://issues.apache.org/jira/browse/INFRA-20441


Looks fine.

FYI Paul Davis just updated and restarted the agents on all of the 
machines. Nick just got his PR through. Can you try restarting your 
build now?


-Joan



I also shared the link to the thread. Strange situation :/

@Paul could you check again? Definitely the issue wasn't fixed all day
long, and it looks like it's not a networking blurb.

Alessio

On Thu, Jun 18, 2020 at 12:48 AM Joan Touzet  wrote:


Can you try opening an Infra ticket on this?

https://issues.apache.org/jira

Open it against the Infra project and share with them the link(s). You
can also link to this mailing list discussion via
https://lists.apache.org/ .

-Joan

On 2020-06-17 5:16 p.m., Alessio 'Blaster' Biancalana wrote:

Thanks for the response Paul!
Same stuff, I tried rerunning the job but it looks stuck with that error



https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FPullRequests/detail/PR-2893/6/pipeline


I don't know why that happens, if you have any clue...

Thanks,
Alessio

On Wed, 17 Jun 2020 at 23:01, Paul Davis  wrote:


I looked at Jenkins and saw them all as connected and in sync. Is
there more to the report or was this some sort of networking burb?

On Wed, Jun 17, 2020 at 2:20 PM Joan Touzet  wrote:


IBM maintains these workers for us - will have to ask Paul Davis to take
a look.

-Joan

On 17/06/2020 05:36, Alessio 'Blaster' Biancalana wrote:

Hey folks,
I have a job on Jenkins that is repeatedly giving me this error:

<https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FPullRequests/detail/PR-2893/4/pipeline/#step-117-log-1>
[2020-06-17T09:09:49.584Z] Recording test results
<https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FPullRequests/detail/PR-2893/4/pipeline/#step-117-log-2>
[2020-06-17T09:09:49.718Z] Remote call on JNLP4-connect connection from
76.9a.30a9.ip4.static.sl-reverse.com/169.48.154.118:7778 failed
<https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FPullRequests/detail/PR-2893/4/pipeline/#step-117-log-3>
Remote call on JNLP4-connect connection from
76.9a.30a9.ip4.static.sl-reverse.com/169.48.154.118:7778 failed

I was wondering, is everything ok on our Jenkins cluster? Maybe there's
some maintenance I'm not aware of?

Cheers,
Alessio











Re: Is everything ok on our Jenkins cluster?

2020-06-18 Thread Joan Touzet

This was fixed by updating the agent on the FreeBSD machines.

On 18/06/2020 11:20, Jan Lehnardt wrote:

FWIW, I’m getting this here on the FreeBSD build:


Remote call on JNLP4-connect connection from 67.223.99.43/67.223.99.43:61326 
failed


https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FFullPlatformMatrix/detail/master/178/pipeline

Best
Jan
—



On 17. Jun 2020, at 23:16, Alessio 'Blaster' Biancalana 
 wrote:

Thanks for the response Paul!
Same stuff, I tried rerunning the job but it looks stuck with that error

https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FPullRequests/detail/PR-2893/6/pipeline

I don't know why that happens, if you have any clue...

Thanks,
Alessio

On Wed, 17 Jun 2020 at 23:01, Paul Davis  wrote:


I looked at Jenkins and saw them all as connected and in sync. Is
there more to the report or was this some sort of networking burb?

On Wed, Jun 17, 2020 at 2:20 PM Joan Touzet  wrote:


IBM maintains these workers for us - will have to ask Paul Davis to take
a look.

-Joan

On 17/06/2020 05:36, Alessio 'Blaster' Biancalana wrote:

Hey folks,
I have a job on Jenkins that is repeatedly giving me this error:

<https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FPullRequests/detail/PR-2893/4/pipeline/#step-117-log-1>
[2020-06-17T09:09:49.584Z] Recording test results
<https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FPullRequests/detail/PR-2893/4/pipeline/#step-117-log-2>
[2020-06-17T09:09:49.718Z] Remote call on JNLP4-connect connection from
76.9a.30a9.ip4.static.sl-reverse.com/169.48.154.118:7778 failed
<https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FPullRequests/detail/PR-2893/4/pipeline/#step-117-log-3>
Remote call on JNLP4-connect connection from
76.9a.30a9.ip4.static.sl-reverse.com/169.48.154.118:7778 failed

I was wondering, is everything ok on our Jenkins cluster? Maybe there's
some maintenance I'm not aware of?

Cheers,
Alessio







Re: Is everything ok on our Jenkins cluster?

2020-06-17 Thread Joan Touzet

Can you try opening an Infra ticket on this?

https://issues.apache.org/jira

Open it against the Infra project and share with them the link(s). You 
can also link to this mailing list discussion via 
https://lists.apache.org/ .


-Joan

On 2020-06-17 5:16 p.m., Alessio 'Blaster' Biancalana wrote:

Thanks for the response Paul!
Same stuff, I tried rerunning the job but it looks stuck with that error

https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FPullRequests/detail/PR-2893/6/pipeline

I don't know why that happens, if you have any clue...

Thanks,
Alessio

On Wed, 17 Jun 2020 at 23:01, Paul Davis  wrote:


I looked at Jenkins and saw them all as connected and in sync. Is
there more to the report or was this some sort of networking burb?

On Wed, Jun 17, 2020 at 2:20 PM Joan Touzet  wrote:


IBM maintains these workers for us - will have to ask Paul Davis to take
a look.

-Joan

On 17/06/2020 05:36, Alessio 'Blaster' Biancalana wrote:

Hey folks,
I have a job on Jenkins that is repeatedly giving me this error:

<https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FPullRequests/detail/PR-2893/4/pipeline/#step-117-log-1>
[2020-06-17T09:09:49.584Z] Recording test results
<https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FPullRequests/detail/PR-2893/4/pipeline/#step-117-log-2>
[2020-06-17T09:09:49.718Z] Remote call on JNLP4-connect connection from
76.9a.30a9.ip4.static.sl-reverse.com/169.48.154.118:7778 failed
<https://ci-couchdb.apache.org/blue/organizations/jenkins/jenkins-cm1%2FPullRequests/detail/PR-2893/4/pipeline/#step-117-log-3>
Remote call on JNLP4-connect connection from
76.9a.30a9.ip4.static.sl-reverse.com/169.48.154.118:7778 failed

I was wondering, is everything ok on our Jenkins cluster? Maybe there's
some maintenance I'm not aware of?

Cheers,
Alessio







Re: Is everything ok on our Jenkins cluster?

2020-06-17 Thread Joan Touzet
IBM maintains these workers for us - will have to ask Paul Davis to take 
a look.


-Joan

On 17/06/2020 05:36, Alessio 'Blaster' Biancalana wrote:

Hey folks,
I have a job on Jenkins that is repeatedly giving me this error:

  
[2020-06-17T09:09:49.584Z]
Recording test results
  
[2020-06-17T09:09:49.718Z]
Remote call on JNLP4-connect connection from
76.9a.30a9.ip4.static.sl-reverse.com/169.48.154.118:7778 failed
  
Remote
call on JNLP4-connect connection from
76.9a.30a9.ip4.static.sl-reverse.com/169.48.154.118:7778 failed

I was wondering, is everything ok on our Jenkins cluster? Maybe there's
some maintenance I'm not aware of?

Cheers,
Alessio



Re: Publicly available usage/download statistics?

2020-06-14 Thread Joan Touzet

On 14/06/2020 02:42, Ilya Novojilov wrote:

isn't CouchDB based on Apache Beam?


No. Not at all. Some of the dev team is friends with the Beam team, but 
there's no direct connection.


You might have seen the `beam.smp` process running when you run CouchDB. 
That's the Erlang VM:


  https://en.wikipedia.org/wiki/BEAM_(Erlang_virtual_machine)

Jonathan Hall  wrote:



I'm working with O'Reilly Media on a proposal for some online training
related to CouchDB.  They have asked for any usage/popularity statics
about CouchDB.  I've done a little online research, and found very little.


I collect and share download statistics regularly. Here's the latest update:

https://gist.github.com/wohali/78c14c9afa317bf665854d55ad1e70ed

Does that help? If you want numbers for the first half of 2020 I can 
collect those on Monday.


-Joan


Re: Automatically building GitHub pull requests with Jenkins

2020-06-09 Thread Joan Touzet
Try specifying your git repository as 
https://github.com/apache/guacamole-server instead of 
git://github.com/apache/guacamole-server.git ? Just a guess.


On 09/06/2020 16:45, Mike Jumper wrote:

Hello all,

I've been trying to configure Jenkins jobs to automatically build pull
requests for the Guacamole repositories on GitHub using the "CloudBees Pull
Request Builder" as documented here:

https://cwiki.apache.org/confluence/display/INFRA/Kicking+off+a+build+in+Jenkins+with+a+GitHub+PR

https://docs.cloudbees.com/docs/admin-resources/latest/plugins/pull-request-builder-for-github

So far, I am having no luck, with the current attempt producing a cryptic
IOException apparently related to git:

"... hudson.remoting.ProxyException:
hudson.remoting.FastPipedInputStream$ClosedBy: The pipe was closed at...
 at
hudson.remoting.FastPipedInputStream.close(FastPipedInputStream.java:112)
 at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:363)
 at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:284)
 at
com.cloudbees.jenkins.plugins.git.vmerge.ChannelTransport$GitPushTask.invoke(ChannelTransport.java:133)
 at
com.cloudbees.jenkins.plugins.git.vmerge.ChannelTransport$GitPushTask.invoke(ChannelTransport.java:117)
 at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3052)
 at hudson.remoting.UserRequest.perform(UserRequest.java:212)
 at hudson.remoting.UserRequest.perform(UserRequest.java:54)
 at hudson.remoting.Request$2.run(Request.java:369)
Caused: hudson.remoting.ProxyException: java.io.IOException: Pipe is
already closed
 at
hudson.remoting.FastPipedOutputStream.write(FastPipedOutputStream.java:154)
 at
hudson.remoting.FastPipedOutputStream.write(FastPipedOutputStream.java:138)
 at
hudson.remoting.ProxyOutputStream$Chunk$1.run(ProxyOutputStream.java:255)
 at hudson.remoting.PipeWriter$1.run(PipeWriter.java:158)
 at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at
hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131)
 at
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
Caused: java.io.IOException: Pipe is already closed
 ..."

(see
https://builds.apache.org/view/E-G/view/Guacamole/job/guacamole-server-pull-request/12/console
)

Any ideas on what might be going wrong here, or any examples on this being
used in practice?

Thanks,

- Mike



Re: Announcing GitHub Discussions for Apache CouchDB [Beta]

2020-06-09 Thread Joan Touzet

Hi Piotr,

We learned of the feature based on friends who are testing the feature 
in private beta, in places like the JavaScript React community:


  https://github.com/vercel/next.js/discussions

That page shows some of the upcoming features in Discussions, currently 
hard coded for that community.


You'll have to talk to ASF Infrastructure if you want to experiment with 
this feature. Open a JIRA ticket with them; they are the gatekeepers.


-Joan "now in the correct Channel" Touzet

On 2020-06-09 5:22 p.m., Piotr Zarzycki wrote:

Hi Guys,

I'm a PMC member in a different Apache project and I was wondering - what is
actually required to switch on Discussions on GH? How did the idea start?

Thanks,
Piotr


On Tue, Jun 9, 2020, 10:48 PM Alessio 'Blaster' Biancalana <
dottorblas...@apache.org> wrote:


Woh! This is huuuge!
Thanks for making this happen :-)

Alessio

On Tue, Jun 9, 2020 at 8:14 PM Joan Touzet  wrote:


Hi everyone,

Thanks to some personal connections and the support of ASF Infra, Apache
CouchDB now has GitHub Discussions enabled on our repository:

https://github.com/apache/couchdb/discussions

Right now, we're beta testing this (in conjunction with MS/GitHub and
the ASF) as a new user support channel. You may have already seen us
move a few Issues over to Discussions, where they now belong.

We're hoping that this gives people a friendlier way to build community
around CouchDB, one that doesn't require mailing list membership or
joining a possibly-intimidating Slack channel. Please feel free to use
the Discussions to ask for help with CouchDB, share ideas, show us what
you've done with CouchDB, offer thanks to the team, or post things you
learned and want to share with others.

One key feature is that we hope to give more recognition and affordance
to our non-committer contributors through the Discussions platform. Stay
tuned for more info.

As more Discussions features become available, we'll make use of them to
customize our space there, including linking to our project
"bylaws"/rules and code of conduct.

Two caveats: Right now, there is no webhook availability for
Discussions, which means we can't mirror activity there to our user@
mailing lists. We've been told this will arrive around the time
Discussions leaves private beta.

The other: Please remember that all discussion of project *direction*
and *decision making* must occur on the dev@ mailing list. It's fine to
link to threads in Discussions to help start the decision making
process, just the same as we do with Slack, Stack Overflow, etc. today.

I hope this comes as welcome news to everyone. I'm pretty excited about
it! Special thanks to Jan Lehnardt, who helped get this enabled for us,
and Garren Smith, who started the discussion months ago about moving out
of the twentieth century for community building.

-Joan "I'm a woman. I can change. If I have to. I guess." Touzet







Announcing GitHub Discussions for Apache CouchDB [Beta]

2020-06-09 Thread Joan Touzet

Hi everyone,

Thanks to some personal connections and the support of ASF Infra, Apache 
CouchDB now has GitHub Discussions enabled on our repository:


  https://github.com/apache/couchdb/discussions

Right now, we're beta testing this (in conjunction with MS/GitHub and 
the ASF) as a new user support channel. You may have already seen us 
move a few Issues over to Discussions, where they now belong.


We're hoping that this gives people a friendlier way to build community 
around CouchDB, one that doesn't require mailing list membership or 
joining a possibly-intimidating Slack channel. Please feel free to use 
the Discussions to ask for help with CouchDB, share ideas, show us what 
you've done with CouchDB, offer thanks to the team, or post things you 
learned and want to share with others.


One key feature is that we hope to give more recognition and affordance 
to our non-committer contributors through the Discussions platform. Stay 
tuned for more info.


As more Discussions features become available, we'll make use of them to 
customize our space there, including linking to our project 
"bylaws"/rules and code of conduct.


Two caveats: Right now, there is no webhook availability for 
Discussions, which means we can't mirror activity there to our user@ 
mailing lists. We've been told this will arrive around the time 
Discussions leaves private beta.


The other: Please remember that all discussion of project *direction* 
and *decision making* must occur on the dev@ mailing list. It's fine to 
link to threads in Discussions to help start the decision making 
process, just the same as we do with Slack, Stack Overflow, etc. today.


I hope this comes as welcome news to everyone. I'm pretty excited about 
it! Special thanks to Jan Lehnardt, who helped get this enabled for us, 
and Garren Smith, who started the discussion months ago about moving out 
of the twentieth century for community building.


-Joan "I'm a woman. I can change. If I have to. I guess." Touzet


Re: moving email lists to GitHub Discussions (Was: [DISCUSS] moving email lists to Discourse)

2020-05-26 Thread Joan Touzet
Quick update for those who don't want to read JIRA (you can subscribe to 
the issue, by the way - could use more voices...):


Because the request is to put user support discussions there only for 
now, we aren't strictly required to have email integration with our 
users@ list. That's good!


The bad news is that GitHub Repo Discussions are still in limited beta, 
and we need someone at GH to enable that for our repo.


I have asked Infra to reach out to GH to ask about this, but have zero 
visibility into progress on that.


If I don't see any progress in a week or two, I'll poke them again.

The _good_ news is that if we felt we wanted to move forward with an 
alternative solution - again for user help only, no product decision 
making allowed outside of dev@ / something that cc's to dev@ - we could 
do that. But let's not be too hasty, GH Repo Discussions looks like the 
best match for us and the least long-term maintenance work.


-Joan "less admin == more maintainable" Touzet

On 2020-05-26 10:52, Nick Vatamaniuc wrote:

+1

Thank you, Joan!

On Fri, May 22, 2020 at 2:50 PM Jan Lehnardt  wrote:


Thanks Joan, I’m looking forward to Infra feedback.

Best
Jan
—


On 22. May 2020, at 19:31, Joan Touzet  wrote:

I haven't gotten a lot of feedback on this proposal. (I know a lot of people 
are marching towards deadlines right now.) I also don't want to take it to 
users@, unless there's a reality of it happening.

In the interest of moving this forward, I'm going to open an exploratory issue 
with Infra to see how much work it'd be to make this happen. Hopefully, we're 
not the first people to ask.

We'll still need a vote here, or on users@, before we would actually move 
activity to GH Discussions, but it won't be the gating factor for a while yet, 
I bet.

FYI, per our project guidelines/bylaws, this would be a non-technical decision, 
allowing for lazy consensus and a lazy majority (3 binding +1s, more binding 
+1s than binding -1s), with binding votes cast by committers, and no vetos.

-Joan

On 2020-05-12 14:41, Joan Touzet wrote:

On 2020-05-12 5:46 a.m., Ilya Khlopotov wrote:

I would be +1 as long as it works and we have options to migrate archive 
elsewhere if/when we need to.
You are proposing to mirror email traffic which means that mail archive would 
have a complete history and spare the project from total vendor lock in.


Yup, that'd be a requirement from the ASF's perspective, regardless of 
technology we select.
-Joan

Best regards,
ILYA

On 2020/05/11 19:04:53, Joan Touzet  wrote:

On 2020-03-15 9:36, Dave Cottlehuber wrote:

On Fri, 13 Mar 2020, at 14:35, Naomi Slater wrote:

apparently GitHub has discussions now. it's still in beta, but you can
specifically request it if you want it if you contact support, I think

e.g., https://github.com/zeit/next.js/discussions
<https://github.com/zeit/next.js/discussions>


interesting.


I'm interested to know what we think about this and how this
might/could fit into our plans for user support, discussion, etc.


Given that we already have email integration with GitHub, this will
probably be easier to get through the ASF bureaucracy than something
brand new.

I'm willing to take this through Infra if people agree to it. It doesn't
look like there are any separate "boards" or tags yet, so the proposal
would likely be that discussions there would get emailed onto user@. The
hard part will be getting replies to the thread on user@ to go back into
the discussion on GH; we might be able to get an "asf-bot" to do this
for us.

I also looked at Infra's JIRA database, and no one has put in this
request there yet. So, we'd be the first, with all the difficulties that
entails.

Can I get an informal "vote" on this approach and go-ahead? Since it's
informal, anyone is encouraged to respond.

-Joan "adopt, adapt, improve" Touzet





Fwd: help for couchDB

2020-05-22 Thread Joan Touzet
I received this private email. I don't have time to answer it, but if 
anyone wants, feel free to respond to Lei directly.


-Joan

 Forwarded Message 
Subject:help for couchDB
Date:   Sat, 23 May 2020 09:27:39 +0800
From:   Lei Zhang 
To: woh...@apache.org



Hi Joan,
     This is Lei from China, I am new to CouchDB , and right now I am 
doing some research on CouchDB.  I am writing this email to get some 
help from you expert.
    Could you please give me some Video on how to use the CouchDB ?  so 
I can operate the CouchDB asap.


Many thanks
Lei


Re: moving email lists to GitHub Discussions (Was: [DISCUSS] moving email lists to Discourse)

2020-05-22 Thread Joan Touzet

On 2020-05-22 13:31, Joan Touzet wrote:
I haven't gotten a lot of feedback on this proposal. (I know a lot of 
people are marching towards deadlines right now.) I also don't want to 
take it to users@, unless there's a reality of it happening.


In the interest of moving this forward, I'm going to open an exploratory 
issue with Infra to see how much work it'd be to make this happen. 
Hopefully, we're not the first people to ask.


https://issues.apache.org/jira/browse/INFRA-20301 has been filed.

We'll still need a vote here, or on users@, before we would actually 
move activity to GH Discussions, but it won't be the gating factor for a 
while yet, I bet.


FYI, per our project guidelines/bylaws, this would be a non-technical 
decision, allowing for lazy consensus and a lazy majority (3 binding 
+1s, more binding +1s than binding -1s), with binding votes cast by 
committers, and no vetos.


-Joan

On 2020-05-12 14:41, Joan Touzet wrote:

On 2020-05-12 5:46 a.m., Ilya Khlopotov wrote:
I would be +1 as long as it works and we have options to migrate 
archive elsewhere if/when we need to.
You are proposing to mirror email traffic which means that mail 
archive would have a complete history and spare the project from 
total vendor lock in.




Yup, that'd be a requirement from the ASF's perspective, regardless of 
technology we select.


-Joan


Best regards,
ILYA

On 2020/05/11 19:04:53, Joan Touzet  wrote:

On 2020-03-15 9:36, Dave Cottlehuber wrote:

On Fri, 13 Mar 2020, at 14:35, Naomi Slater wrote:
apparently GitHub has discussions now. it's still in beta, but you 
can
specifically request it if you want it if you contact support, I 
think


e.g., https://github.com/zeit/next.js/discussions
<https://github.com/zeit/next.js/discussions>


interesting.


I'm interested to know what we think about this and how this
might/could fit into our plans for user support, discussion, etc.


Given that we already have email integration with GitHub, this will
probably be easier to get through the ASF bureaucracy than something
brand new.

I'm willing to take this through Infra if people agree to it. It doesn't
look like there are any separate "boards" or tags yet, so the proposal
would likely be that discussions there would get emailed onto user@. The
hard part will be getting replies to the thread on user@ to go back into
the discussion on GH; we might be able to get an "asf-bot" to do this
for us.

I also looked at Infra's JIRA database, and no one has put in this
request there yet. So, we'd be the first, with all the difficulties that
entails.

Can I get an informal "vote" on this approach and go-ahead? Since it's
informal, anyone is encouraged to respond.

-Joan "adopt, adapt, improve" Touzet



Re: moving email lists to GitHub Discussions (Was: [DISCUSS] moving email lists to Discourse)

2020-05-22 Thread Joan Touzet
I haven't gotten a lot of feedback on this proposal. (I know a lot of 
people are marching towards deadlines right now.) I also don't want to 
take it to users@, unless there's a reality of it happening.


In the interest of moving this forward, I'm going to open an exploratory 
issue with Infra to see how much work it'd be to make this happen. 
Hopefully, we're not the first people to ask.


We'll still need a vote here, or on users@, before we would actually 
move activity to GH Discussions, but it won't be the gating factor for a 
while yet, I bet.


FYI, per our project guidelines/bylaws, this would be a non-technical 
decision, allowing for lazy consensus and a lazy majority (3 binding 
+1s, more binding +1s than binding -1s), with binding votes cast by 
committers, and no vetos.


-Joan

On 2020-05-12 14:41, Joan Touzet wrote:

On 2020-05-12 5:46 a.m., Ilya Khlopotov wrote:
I would be +1 as long as it works and we have options to migrate 
archive elsewhere if/when we need to.
You are proposing to mirror email traffic which means that mail 
archive would have a complete history and spare the project from total 
vendor lock in.




Yup, that'd be a requirement from the ASF's perspective, regardless of 
technology we select.


-Joan


Best regards,
ILYA

On 2020/05/11 19:04:53, Joan Touzet  wrote:

On 2020-03-15 9:36, Dave Cottlehuber wrote:

On Fri, 13 Mar 2020, at 14:35, Naomi Slater wrote:

apparently GitHub has discussions now. it's still in beta, but you can
specifically request it if you want it if you contact support, I think

e.g., https://github.com/zeit/next.js/discussions
<https://github.com/zeit/next.js/discussions>


interesting.


I'm interested to know what we think about this and how this
might/could fit into our plans for user support, discussion, etc.


Given that we already have email integration with GitHub, this will
probably be easier to get through the ASF bureaucracy than something
brand new.

I'm willing to take this through Infra if people agree to it. It doesn't
look like there are any separate "boards" or tags yet, so the proposal
would likely be that discussions there would get emailed onto user@. The
hard part will be getting replies to the thread on user@ to go back into
the discussion on GH; we might be able to get an "asf-bot" to do this
for us.

I also looked at Infra's JIRA database, and no one has put in this
request there yet. So, we'd be the first, with all the difficulties that
entails.

Can I get an informal "vote" on this approach and go-ahead? Since it's
informal, anyone is encouraged to respond.

-Joan "adopt, adapt, improve" Touzet



Re: [PROPOSAL] Future security announcement policy

2020-05-22 Thread Joan Touzet
I'm curious what the Apache Security team's opinion is on this (they are 
cc'ed on every email to secur...@couchdb.apache.org).


The detailed policy for the ASF is here:

https://www.apache.org/security/committers.html

The only reference here to public/private is step 11:

> The project team agrees the fix, the announcement and the release
> schedule with the reporter.

And then in step 15, the vulnerability release is announced:

> after, or at the same time as, the release announcement.

It says nothing about saying "something's coming," for or against.

The problem I see is that if people know a problem is about to be 
resolved, they will look at version control closely to see if they can 
spot what the fix is. Because the release process takes a minimum of 4 
days - 3 days for the vote to pass, and 24 hours for the mirrors to 
update - this could leave unpatched people more exposed for longer than 
they would be with a "0-day".


To work around this and always give people a heads up on a release, we'd 
be forced into preparing all high-profile security releases in private. 
I did *not* enjoy when we had to do this last time (2.3.0 or 2.3.1, I 
think), and I'm sure no one else in the process did, either.


The "we've always got security patches in every release" isn't a bad 
one, but it could be a lie. We don't always fix security things. 
Personally I'd rather be honest (and surprise people with a patch) than 
lie and tell people there's patches when there aren't any.


-Joan "would like to know more from security@ first" Touzet


On 2020-05-22 7:43, Jan Lehnardt wrote:

I like the OpenSSL announcements and their categorisation. They allow me to 
decide, whether I have to pencil in an upgrade for the date of the release or 
not. So *if* we decide to do this, I’d advocate to include severity and 
mitigation information in broad strokes at least.

I’m +0 on making the change.

Best
Jan
—


On 22. May 2020, at 13:38, Robert Samuel Newson  wrote:

Hi All,

We've just published a CVE and it made me think about our current announcement 
policy.

Currently, when we receive notice of a security issue, the PMC investigate it, 
fix it if it's genuine, then we prepare and publish a release without 
mentioning the security issue. A week after publication we publish the CVE.

I think we can do better. I follow haproxy and openssl announcements for 
security reasons and have found their early warning very helpful. I wonder if 
we can do something similar?

My proposal is modest. Everything stays the same as today except we announce 
that there is a security fix in the release _at the time we publish it_. The 
details are withheld for the regular 7 day period.

Are there objections to that step? Should we do more? Would it be useful to 
categorise the security issue (low, medium, high; whether it is present in the 
default config; whether it can be mitigated without taking the upgrade)?

B.





Re: Sudden very slow indexing of the views

2020-05-21 Thread Joan Touzet

Hi Alan,

On 2020-05-21 18:41, Alan Malta wrote:

Joan, indeed I had around 60 documents with conflicts. I managed to fix
them and I no longer see any conflicts. Low quality performance remains
though.


Double-check that you're not running out of memory (swapping to disk) or 
CPU (lots of process churn). The hypervisor may not be telling you what 
you need to know here - try an in-guest tool like top, atop, or s-tui.


-Joan


Re: CouchDB vulnerability

2020-05-20 Thread Joan Touzet




On 2020-05-20 3:52 p.m., Andrea Brancatelli wrote:

Thanks Joan,

You’re accurate as usual.

Do you think it’s worth writing to exploit-db to correct those misleading 
reports?


Well, it says the exploit is "unconfirmed," which I think means it's 
just some random user's submission. I think it's meaningless enough (and 
easily explainable, by pointing anyone to this public email thread via 
https://lists.apache.org/) to not warrant official project action, but 
if you want, you're welcome to write to them :)


-Joan "late nights this week" Touzet



Sent from iPhone


On 20 May 2020, at 19:29, Joan Touzet  wrote:

Hi Andrea,


On 2020-05-20 9:37, Andrea Brancatelli wrote:
A client sent us a link about a supposed security problem with one of
our couchdb 2.3.1 instances.
He related to this https://www.exploit-db.com/exploits/46595 which, to
me, seems a quite confused report that, I guess, can be related to a
"out of the box" couchdb setup in admin party.


I agree.

The first 3 things are just showing that, in admin party, you can create a DB, 
delete a DB, and create a document. This is nothing new.

#4 is showing you can create an admin on a new install if there is no admin 
there already. Same thing.

#5 and #6 are nonsense entries, in that they are adding nonsense config 
settings through the admin config API. Not only are these not possible once you 
leave admin party, junk in the config file like this will be ignored.

There is no new exploit or CVE here.


Am I wrong? Do a correctly setup couchdb with a local admin and correct
grants to the dbs suffer of that issue?


Nope! In short, none of this is possible once you disable admin party - except 
for #3 in 2.x, and that's fixable by tightening up each DB's _security.


Thanks.


-Joan "open by default is confusing in 2020" Touzet




Re: CouchDB vulnerability

2020-05-20 Thread Joan Touzet

Hi Andrea,

On 2020-05-20 9:37, Andrea Brancatelli wrote:

A client sent us a link about a supposed security problem with one of
our couchdb 2.3.1 instances.

He related to this https://www.exploit-db.com/exploits/46595 which, to
me, seems a quite confused report that, I guess, can be related to a
"out of the box" couchdb setup in admin party.


I agree.

The first 3 things are just showing that, in admin party, you can create 
a DB, delete a DB, and create a document. This is nothing new.


#4 is showing you can create an admin on a new install if there is no 
admin there already. Same thing.


#5 and #6 are nonsense entries, in that they are adding nonsense config 
settings through the admin config API. Not only are these not possible 
once you leave admin party, junk in the config file like this will be 
ignored.


There is no new exploit or CVE here.


Am I wrong? Do a correctly setup couchdb with a local admin and correct
grants to the dbs suffer of that issue?


Nope! In short, none of this is possible once you disable admin party - 
except for #3 in 2.x, and that's fixable by tightening up each DB's 
_security.




Thanks.



-Joan "open by default is confusing in 2020" Touzet


Re: Sudden very slow indexing of the views

2020-05-20 Thread Joan Touzet

HI Alan,

* What version of CouchDB?
* Do you regularly compact your databases?
* Have you looked for conflicted documents recently?

-Joan

On 2020-05-20 10:02, Alan Malta wrote:

Hi everyone,

it's been more than a week that I have been debugging a strange
performance problem with CouchDB; mainly affecting couchdb views.

About my Couch setup, I have one central couch instance and around 5
to 15 other instances replicating documents to it. In addition to the
replication, each of those other couch instances are also running a
service that posts documents to the central one via '_bulk_docs' API.
It's important to note that this model is deployed in production for
many years now.

What started to happen is that the indexing of the views became very
very slow, like < 1k changes within 10min. Making GET calls to the
views (either with or without reduce function), I also see a poor
response rate (a few tens kilobytes, either remotely or localhost).

Has anyone ever faced such slowness with CouchDB (views)? Would you
have any recommendations on where I should start looking and tests to
be performed? I have already ruled out problems with the virtual
machine and the hypervisor (load is normal for months). I have also
already recreated the views from scratch; recreated the database from
scratch (dumping deleted docs). I have also created a view to see
whether there were any large documents, and the biggest one is only
.5MB.

When I replicated the database from scratch today, CouchDB indexed
around 1.5M docs in an hour or so; while it's been the last 2h
indexing 26k changes to the database...

Any help or pointers would be very much appreciated here.
Thanks,
Alan.



Re: Should we continue with FDB RFC's

2020-05-19 Thread Joan Touzet

Technically, the code still isn't on master yet.

:D

-Joan "can we please merge to master already" Touzet

On 2020-05-19 15:19, Paul Davis wrote:

Can +1 but its gonna feel really silly when I think about how the code
is already merged...

On Tue, May 19, 2020 at 12:28 PM Joan Touzet  wrote:


Looks like the Mango one has the required +1 already.

There's reviews of the map index one by Adam, Paul, and Mike (Rhodes)
but neither have explicitly +1'ed. Can any of you get to this?

I'd rather not be the deciding +1 right now, too much else on my plate
to give this the attention it deserves for that - but I have skimmed it.

-Joan

On 2020-05-18 7:49, Garren Smith wrote:

Great thanks for the feedback. Its good to know that they are still
considered useful. I've updated my mango and map index RFC's to match the
current implementations.
I would like to merge them in.

Cheers
Garren


On Thu, May 14, 2020 at 11:14 PM Joan Touzet  wrote:


The intent of the RFCs was to give people a place to look at what's
being done, comment on the implementation decisions, and to form the
basis for eventual documentation.

I think they've been relatively successful on the first two pieces, but
it sounds like they've fallen behind, especially because we have quite a
few languishing PRs over in the couchdb-documentation repo.

My hope had been that those PRs would land much faster - even if they
were WIPs - and would get updated regularly with new PRs.

Is that too onerous of a request?

I agree with Adam that the level of detail doesn't have to be there in
great detail when it comes to implementation decisions. It only really
needs to be there in detail for API changes, so we have good source
material for the eventual documentation side of things. Since 4.0 is
meant to be largely API compatible with 3.0, I hope this is also in-line
with expectations.

-Joan "engineering, more than anything, means writing it down" Touzet

On 2020-05-13 8:53 a.m., Adam Kocoloski wrote:

I do find them useful and would be glad to see us maintain some sort of
“system architecture guide” as a living document. I understand that can be
a challenge when things are evolving quickly, though I also think that if
there’s a substantial change to the design from the RFC it could be worth a
note to dev@ to call that out.


I imagine we can omit some level of detail from these documents to still
capture the main points of the data model and data flows without needing to
update them e.g. every time a new field is added to a packed value.


Cheers, Adam


On May 13, 2020, at 5:29 AM, Garren Smith  wrote:

Hi All,

The majority of RFC's for CouchDB 4.x have gone stale and I want to know
what everyone thinks we should do about it? Do you find the RFC's useful?

So far I've found maintaining the RFC's really difficult. Often we write an
RFC, then write the code. The code often ends up quite different from how
we thought it would when writing the RFC. Following that smaller code
changes and improvements to a section moves the codebase even further from
the RFC design. Do we keep updating the RFC for every change or should we
leave it at a certain point?

I've found the discussion emails to be really useful way to explore the
high-level design of each new feature. I would probably prefer that we
continue the discussion emails but don't do the RFC unless its a feature
that a lot of people want to be involved in the design.

Cheers
Garren








Re: Should we continue with FDB RFC's

2020-05-19 Thread Joan Touzet

Looks like the Mango one has the required +1 already.

There's reviews of the map index one by Adam, Paul, and Mike (Rhodes) 
but neither have explicitly +1'ed. Can any of you get to this?


I'd rather not be the deciding +1 right now, too much else on my plate 
to give this the attention it deserves for that - but I have skimmed it.


-Joan

On 2020-05-18 7:49, Garren Smith wrote:

Great thanks for the feedback. Its good to know that they are still
considered useful. I've updated my mango and map index RFC's to match the
current implementations.
I would like to merge them in.

Cheers
Garren


On Thu, May 14, 2020 at 11:14 PM Joan Touzet  wrote:


The intent of the RFCs was to give people a place to look at what's
being done, comment on the implementation decisions, and to form the
basis for eventual documentation.

I think they've been relatively successful on the first two pieces, but
it sounds like they've fallen behind, especially because we have quite a
few languishing PRs over in the couchdb-documentation repo.

My hope had been that those PRs would land much faster - even if they
were WIPs - and would get updated regularly with new PRs.

Is that too onerous of a request?

I agree with Adam that the level of detail doesn't have to be there in
great detail when it comes to implementation decisions. It only really
needs to be there in detail for API changes, so we have good source
material for the eventual documentation side of things. Since 4.0 is
meant to be largely API compatible with 3.0, I hope this is also in-line
with expectations.

-Joan "engineering, more than anything, means writing it down" Touzet

On 2020-05-13 8:53 a.m., Adam Kocoloski wrote:

I do find them useful and would be glad to see us maintain some sort of
“system architecture guide” as a living document. I understand that can be
a challenge when things are evolving quickly, though I also think that if
there’s a substantial change to the design from the RFC it could be worth a
note to dev@ to call that out.


I imagine we can omit some level of detail from these documents to still
capture the main points of the data model and data flows without needing to
update them e.g. every time a new field is added to a packed value.


Cheers, Adam


On May 13, 2020, at 5:29 AM, Garren Smith  wrote:

Hi All,

The majority of RFC's for CouchDB 4.x have gone stale and I want to know
what everyone thinks we should do about it? Do you find the RFC's useful?

So far I've found maintaining the RFC's really difficult. Often we write an
RFC, then write the code. The code often ends up quite different from how
we thought it would when writing the RFC. Following that smaller code
changes and improvements to a section moves the codebase even further from
the RFC design. Do we keep updating the RFC for every change or should we
leave it at a certain point?

I've found the discussion emails to be really useful way to explore the
high-level design of each new feature. I would probably prefer that we
continue the discussion emails but don't do the RFC unless its a feature
that a lot of people want to be involved in the design.

Cheers
Garren








Re: [DISCUSS] CouchDB 3.0.1 and 3.1.0 release plans

2020-05-17 Thread Joan Touzet

On 2020-05-17 3:06, Justin Mclean wrote:

Hi,


Justin, which line of that file?


This one:
"Copyright 2020 The Apache Foundation”

It should be:
  Copyright [] [name of copyright owner]

As it’s an instruction on how to apply the license to your own work.


PR up, needs +1 before merging from someone:

https://github.com/apache/couchdb/pull/2891


Will someone on dev@ volunteer to resolve this? Thanks.


I’m happy to help if you need it just ask.


As the saying goes "Pull Requests welcome" :D Thanks for the offer.

-Joan "off to bed" Touzet


Thanks,
Justin


Re: [DISCUSS] CouchDB 3.0.1 and 3.1.0 release plans

2020-05-17 Thread Joan Touzet

On 2020-05-17 2:43, Justin Mclean wrote:

Hi there,
I took a look at your recent release and noticed a couple of issues. Be aware 
I'm not part of your project, and I'm missing a large part of your history, so 
these things may have been discussed before and are that way for good reasons.
- Your license file has an incorrect copyright line in the license appendix.


Justin, which line of that file?


- Your notice file lists the copyright of 3rd party dependencies. This is not 
how a notice file works. You need to include relevant parts from ALv2 bundled 
code notice files and relocated copyrights. Relocated copyright only occurs 
when a 3rd party header is replaced during a software grant. [1]


Will someone on dev@ volunteer to resolve this? Thanks.


- Your release contains code that is under the Erlang Public License. This is 
category B and as such can't normally be included in a source release in the 
form it's in. [2] I couldn't find any discussion on this. How was it decided 
that this code was OK to include?


Looks like this was missed with the 2.0 import. Paul did the work in 
2016 prior to the 2.0 release:


https://github.com/apache/couchdb-couch-log/commit/b6b766ddfbff8db789723689419ee6f634e752b8#diff-e7183b47c7155debfec662d667b6fcd4

https://github.com/cloudant/couchdb/commit/094ceae9fd81f1d71647728d4a2e583423618914

https://issues.apache.org/jira/browse/COUCHDB-3067

Paul, how do you want to proceed? Looks like it's just 2 files that we 
can replace quickly and spin new releases, but we'll have to go back and 
do a 2.3.2 as well, and expunge 2.3.1.


-Joan



Thanks,
Justin

P.S I'm not subscribed to this list so please CC me on any replies.

1. https://www.apache.org/dev/licensing-howto.html
2. https://www.apache.org/legal/resolved.html#category-b



Re: Should we continue with FDB RFC's

2020-05-14 Thread Joan Touzet
The intent of the RFCs was to give people a place to look at what's 
being done, comment on the implementation decisions, and to form the 
basis for eventual documentation.


I think they've been relatively successful on the first two pieces, but 
it sounds like they've fallen behind, especially because we have quite a 
few languishing PRs over in the couchdb-documentation repo.


My hope had been that those PRs would land much faster - even if they 
were WIPs - and would get updated regularly with new PRs.


Is that too onerous of a request?

I agree with Adam that the level of detail doesn't have to be there in 
great detail when it comes to implementation decisions. It only really 
needs to be there in detail for API changes, so we have good source 
material for the eventual documentation side of things. Since 4.0 is 
meant to be largely API compatible with 3.0, I hope this is also in-line 
with expectations.


-Joan "engineering, more than anything, means writing it down" Touzet

On 2020-05-13 8:53 a.m., Adam Kocoloski wrote:

I do find them useful and would be glad to see us maintain some sort of “system 
architecture guide” as a living document. I understand that can be a challenge 
when things are evolving quickly, though I also think that if there’s a 
substantial change to the design from the RFC it could be worth a note to dev@ 
to call that out.

I imagine we can omit some level of detail from these documents to still 
capture the main points of the data model and data flows without needing to 
update them e.g. every time a new field is added to a packed value.

Cheers, Adam


On May 13, 2020, at 5:29 AM, Garren Smith  wrote:

Hi All,

The majority of RFC's for CouchDB 4.x have gone stale and I want to know
what everyone thinks we should do about it? Do you find the RFC's useful?

So far I've found maintaining the RFC's really difficult. Often we write an
RFC, then write the code. The code often ends up quite different from how
we thought it would when writing the RFC. Following that smaller code
changes and improvements to a section moves the codebase even further from
the RFC design. Do we keep updating the RFC for every change or should we
leave it at a certain point?

I've found the discussion emails to be really useful way to explore the
high-level design of each new feature. I would probably prefer that we
continue the discussion emails but don't do the RFC unless its a feature
that a lot of people want to be involved in the design.

Cheers
Garren




Re: [DISCUSS] _changes feed on database partitions

2020-05-14 Thread Joan Touzet




On 2020-05-13 10:07 a.m., Robert Samuel Newson wrote:

Hi,

Yes, I think this would be a good addition for 3.0. I think we didn't add it 
before because of concerns of accidental misuse (attempting to replicate with 
it but forgetting a range, etc)?


This was definitely a concern, but it makes me wonder: are we also 
potentially discussing adding per-shard _changes feeds? I wonder if the 
work is roughly analogous.



Whatever the reasons, I think exposing the per-partition _changes feed exactly 
as you've described will be useful. We should state explicitly in the 
accompanying docs that the replicator does not use this endpoint (though, of 
course, it might be enhanced to do so in a future release).

 From 4.0 onward, there's a discussion elsewhere on whether any of the 
_partition endpoints continue to exist (leaning towards keeping them just to 
avoid unnecessary upgrade pain?), so a note in that thread would be good too. 
It does seem odd to enhance an endpoint in 3.0 to then remove it entirely in 
4.0. The reasons for removing _partition are compelling however, as the 
motivating (internal) reason for introducing _partition is gone.

B.


On 12 May 2020, at 22:59, Adam Kocoloski  wrote:

Hi all,

When we introduced partitioned databases in 3.0 we declined to add a 
partition-specific _changes endpoint, because we didn’t have a prebuilt index 
that could support it. It sounds like the lack of that endpoint is a bit of a 
drag. I wanted to start this thread to consider adding it.

Note: this isn’t a fully-formed proposal coming from my team with a plan to 
staff the development of it. Just a discussion :)

In the simplest case, a _changes feed could be implemented by scanning the 
by_seq index of the shard that hosts the named partition. We already get some 
efficiencies here: we don’t need to touch any of the other shards of the 
database, and we have enough information in the by_seq btree to filter out 
documents from other partitions without actually retrieving them from disk, so 
we can push the filter down quite nicely without a lot of extra processing. 
It’s just a very cheap binary prefix pattern match on the docid.
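
If this lands, consumer usage would presumably mirror the existing
_partition endpoints. A sketch of an incremental pull (the URL shape is
hypothetical, not a committed API):

  # hypothetical: changes scoped to one partition, resuming from a saved seq
  curl 'http://admin:password@localhost:5984/mydb/_partition/sensor-abc/_changes?since=0&limit=100'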

Most consumers of the _changes feed work incrementally, and we can support that 
here as well. It’s not like we need to do a full table scan on every 
incremental request.

If the shard is hosting so many partitions that this filter is becoming a 
bottleneck, resharding (also new in 3.0) is probably a good option. Partitioned 
databases are particularly amenable to increasing the shard count. Global 
indexes on the database become more expensive to query, but those ought to be a 
smaller percentage of queries in this data model.

Finally, if the overhead of filtering out non-matching partitions is just too 
high, we could support the use of user-created indexes, e.g. by having a user 
create a Mango index on _local_seq. If such an index exists, our “query 
planner” uses it for the partitioned _changes feed. If not, resort to the scan 
on the shard’s by_seq index as above.
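
To illustrate the escape valve: creating such an index would presumably go
through the existing Mango /db/_index endpoint, something like the sketch
below (hypothetical, since indexing _local_seq this way isn't supported
today):

  curl -X POST 'http://admin:password@localhost:5984/mydb/_index' \
       -H 'Content-Type: application/json' \
       -d '{"index": {"fields": ["_local_seq"]}, "name": "by-local-seq", "type": "json"}'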

I'd like to do some basic benchmarking, but I have a feeling the by_seq approach will 
work quite well in the majority of cases, and the user-defined index is a good "escape 
valve" if we need it. WDYT?

Adam




Re: [DISCUSS] length restrictions in 4.0

2020-05-12 Thread Joan Touzet
I presume the workaround would be "Replicate back to CouchDB 3.x, but 
truncate to 236 characters in the process?" You'd lose fidelity in the 
db name that way.


-Joan

On 2020-05-12 4:05 p.m., Robert Newson wrote:

I still don’t understand how the internal shard database name format has any 
bearing on our public interface, present or future.



Re: moving email lists to GitHub Discussions (Was: [DISCUSS] moving email lists to Discourse)

2020-05-12 Thread Joan Touzet

On 2020-05-12 5:46 a.m., Ilya Khlopotov wrote:

I would be +1 as long as it works and we have options to migrate the archive 
elsewhere if/when we need to.
You are proposing to mirror email traffic, which means that the mail archive would 
have a complete history and spare the project from total vendor lock-in.



Yup, that'd be a requirement from the ASF's perspective, regardless of 
technology we select.


-Joan


Best regards,
ILYA

On 2020/05/11 19:04:53, Joan Touzet  wrote:

On 2020-03-15 9:36, Dave Cottlehuber wrote:

On Fri, 13 Mar 2020, at 14:35, Naomi Slater wrote:

apparently GitHub has discussions now. it's still in beta, but you can
specifically request it if you want it if you contact support, I think

e.g., https://github.com/zeit/next.js/discussions
<https://github.com/zeit/next.js/discussions>


interesting.


I'm interested to know what we think about this and how this
might/could fit into our plans for user support, discussion, etc.


Given that we already have email integration with GitHub, this will
probably be easier to get through the ASF bureaucracy than something
brand new.

I'm willing to take this through Infra if people agree to it. It doesn't
look like there are any separate "boards" or tags yet, so the proposal
would likely be that discussions there would get emailed onto user@. The
hard part will be getting replies to the thread on user@ to go back into
the discussion on GH; we might be able to get an "asf-bot" to do this
for us.

I also looked at Infra's JIRA database, and no one has put in this
request there yet. So, we'd be the first, with all the difficulties that
entails.

Can I get an informal "vote" on this approach and go-ahead? Since it's
informal, anyone is encouraged to respond.

-Joan "adopt, adapt, improve" Touzet



Re: moving email lists to GitHub Discussions (Was: [DISCUSS] moving email lists to Discourse)

2020-05-11 Thread Joan Touzet

On 2020-03-15 9:36, Dave Cottlehuber wrote:

On Fri, 13 Mar 2020, at 14:35, Naomi Slater wrote:

apparently GitHub has discussions now. it's still in beta, but you can
specifically request it if you want it if you contact support, I think

e.g., https://github.com/zeit/next.js/discussions



interesting.


I'm interested to know what we think about this and how this
might/could fit into our plans for user support, discussion, etc.


Given that we already have email integration with GitHub, this will 
probably be easier to get through the ASF bureaucracy than something 
brand new.


I'm willing to take this through Infra if people agree to it. It doesn't 
look like there are any separate "boards" or tags yet, so the proposal 
would likely be that discussions there would get emailed onto user@. The 
hard part will be getting replies to the thread on user@ to go back into 
the discussion on GH; we might be able to get an "asf-bot" to do this 
for us.


I also looked at Infra's JIRA database, and no one has put in this 
request there yet. So, we'd be the first, with all the difficulties that 
entails.


Can I get an informal "vote" on this approach and go-ahead? Since it's 
informal, anyone is encouraged to respond.


-Joan "adopt, adapt, improve" Touzet


Re: [VOTE]: Deprecate _update Endpoint

2020-05-11 Thread Joan Touzet

Hi Adam, dev@,

On 2020-05-06 11:40, Adam Kocoloski wrote:

When we looked at some of our internal usage data we found that _update had 
measurably higher adoption than the rendering functions, so we didn’t push so 
hard on deprecating it yet.

I’d feel better about removing this endpoint if there was a clear plan to 
provide alternative server-side partial update functionality in a future 
release. We had some good discussions in GitHub a while back:

https://github.com/apache/couchdb/issues/1554
https://github.com/apache/couchdb/issues/1559


Thanks for mentioning these.

For the record, I'm happy to help bring 1554 (and, if Jan wishes, 1559) 
through to RFCs if people wish, but I can't commit to implementing 
either one at this time.


Without someone clearly assigned to drive this to the finish line, I am 
uncomfortable with making deprecation of _update contingent upon 
implementation of either of these.


-Joan "too busy to code right now" Touzet


It’d be good to review those designs and see if we can advance them to the 
level of an RFC. I suspect we’re close.

+0


On May 6, 2020, at 10:53 AM, Nick V  wrote:

+1



On May 6, 2020, at 08:04, Jonathan Hall  wrote:

+1


On 5/6/20 1:57 PM, Jan Lehnardt wrote:
Hey all,

it appears we missed an item in our 3.0 deprecations list and we should
clear this up.

We have as of yet failed to capture consensus here about the
deprecation of the _update endpoint. I think we *have* consensus here,
but we didn’t make it stick in writing.

To recap: the _update endpoint was added to allow arbitrary data to be
POSTed to CouchDB and for developers to take whatever and turn that
into a JSON document that then gets stored into CouchDB. Initially,
this was added so we can process HTML Form submits. With the advent of
XHR/fetch in browsers, this is no longer necessary. Another aim at the
time was allowing legacy data systems that e.g. send XML via HTTP to
configurable URLs to directly integrate with CouchDB. This is still a
valid use-case, but easily enough worked around.
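
For anyone who never used it: an update handler is a design-doc function that
receives the (possibly missing) doc plus the raw request and returns the doc
to write and the HTTP response body. A minimal sketch (names are examples only):

  // stored in a design doc, e.g. _design/legacy, and invoked via
  //   POST /db/_design/legacy/_update/ingest        (create a new doc)
  //   PUT  /db/_design/legacy/_update/ingest/{id}   (update an existing doc)
  function (doc, req) {
    if (!doc) {
      // no existing doc: create one, stashing the raw request body (e.g. XML)
      return [{ _id: req.uuid, raw: req.body }, "created\n"];
    }
    doc.raw = req.body;           // overwrite with whatever was sent
    return [doc, "updated\n"];    // second element becomes the response body
  }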

There is also a constant level of confusion with the similarly named
validate_doc_update feature, which enforces access control and schema
conformity on all document writes. There is no proposal to deprecate
this feature at this point and the _update endpoint and functionality
are fully distinct from validate_doc_update.

_update is the logical reverse of a _show and we already have
deprecated that. It follows that we also deprecate _update for the same
reasons (which I’m not going to rehash here for the 400th time).

Since this is an API deprecation as per our bylaws[1], please cast your
votes (or abstain to agree, as per lazy-consensus).

Best
Jan “XML, in this economy?” Lehnardt
—
[1]: https://couchdb.apache.org/bylaws.html#api





[ANNOUNCE] Apache CouchDB 3.0.1 and 3.1.0 released

2020-05-06 Thread Joan Touzet

Dear community,

Apache CouchDB® 3.0.1 and 3.1.0 have been released and are available for 
download.

Apache CouchDB® lets you access your data where you need it. The Couch 
Replication Protocol is implemented in a variety of projects and products that 
span every imaginable computing environment from globally distributed 
server-clusters, over mobile phones to web browsers.

Store your data safely, on your own servers, or with any leading cloud 
provider. Your web- and native applications love CouchDB, because it speaks 
JSON natively and supports binary data for all your data storage needs.

The Couch Replication Protocol lets your data flow seamlessly between server 
clusters to mobile phones and web browsers, enabling a compelling offline-first 
user-experience while maintaining high performance and strong reliability. 
CouchDB comes with a developer-friendly query language, and optionally 
MapReduce for simple, efficient, and comprehensive data retrieval.

https://couchdb.apache.org/#download

Pre-built packages for Windows, macOS, Debian/Ubuntu and RHEL/CentOS are 
available.

CouchDB 3.0.1 is a maintenance release, and was originally published on 
2020-05-05.

CouchDB 3.1.0 is a feature release, and was originally published on 2020-05-05.

The community would like to thank all contributors for their part in making 
this release, from the smallest bug report or patch to major contributions in 
code, design, or marketing, we couldn’t have done it without you!

See the official release notes document for an exhaustive list of all changes:

http://docs.couchdb.org/en/stable/whatsnew/3.1.html

Release Notes highlights from 3.0.1:

  - A memory leak when encoding large binary content was patched
  
  - Improvements in documentation and defaults
  
  - JavaScript will no longer corrupt UTF-8 strings in various JS functions
  
Release Notes highlights from 3.1.0:


Everything from 3.0.1, plus...

  - Support for JSON Web Tokens (JWT)

  - Support for SpiderMonkey 68, including binaries for Ubuntu 20.04 (Focal 
Fossa)

  - Up to a 40% performance improvement in the database compactor

  -
On behalf of the CouchDB PMC,
Joan Touzet



[RESULTS] [VOTE] Release Apache CouchDB 3.1.0

2020-05-04 Thread Joan Touzet

Dear community,

The vote has now closed.

Thank you to everyone who participated!

The results are:

+1 - 4 votes
+0 - 0 votes
-0 - 0 votes
-1 - 0 votes

The vote is PASSED.

Thanks,

-Joan "Let's 3.1.0" Touzet

On 2020-04-30 0:02, Joan Touzet wrote:

(YES, this is another -RC1, a simultaneous release to 3.0.1!)

Dear community,

I would like to propose that we release Apache CouchDB 3.1.0.

Candidate release notes:

   https://docs.couchdb.org/en/latest/whatsnew/3.1.html

We encourage the whole community to download and test these release 
artefacts so that any critical issues can be resolved before the release 
is made. Everyone is free to vote on this release, so dig right in! 
(Only PMC members have binding votes, but they depend on community 
feedback to gauge if an official release is ready to be made.)


The release artefacts we are voting on are available here:

     https://dist.apache.org/repos/dist/dev/couchdb/source/3.1.0/rc.1/

There, you will find a tarball, a GPG signature, and SHA256/SHA512 
checksums.


Please follow the test procedure here:


https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release 



Please remember that "RC1" is an annotation. If the vote passes, these 
artefacts will be released as Apache CouchDB 3.1.0


The intent is to push the successful build on Monday May 4, and announce 
on Tuesday May 5.


Please cast your votes now.

Thanks,
Joan "No one remembers Victoria Jackson on USA Network" Touzet


[RESULTS] [VOTE] Release Apache CouchDB 3.0.1

2020-05-04 Thread Joan Touzet

Dear community,

The vote has now closed.

Thank you to everyone who participated!

The results are:

+1 - 4 votes
+0 - 0 votes
-0 - 0 votes
-1 - 0 votes

The vote is PASSED.

Thanks,

-Joan "Let's also 3.0.1" Touzet

On 2020-04-30 0:00, Joan Touzet wrote:

Dear community,

I would like to propose that we release Apache CouchDB 3.0.1.

Candidate release notes:

   https://docs.couchdb.org/en/latest/whatsnew/3.0.html#version-3-0-1

We encourage the whole community to download and test these release 
artefacts so that any critical issues can be resolved before the release 
is made. Everyone is free to vote on this release, so dig right in! 
(Only PMC members have binding votes, but they depend on community 
feedback to gauge if an official release is ready to be made.)


The release artefacts we are voting on are available here:

     https://dist.apache.org/repos/dist/dev/couchdb/source/3.0.1/rc.1/

There, you will find a tarball, a GPG signature, and SHA256/SHA512 
checksums.


Please follow the test procedure here:


https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release 



Please remember that "RC1" is an annotation. If the vote passes, these 
artefacts will be released as Apache CouchDB 3.0.1.


The intent is to push the successful build on Monday May 4, and announce 
on Tuesday May 5.


Please cast your votes now.

Thanks,
Joan "UP! all night" Touzet


Re: [DISCUSS] length restrictions in 4.0

2020-05-04 Thread Joan Touzet

I suspect he means when replicating back to a 3.x or 2.x cluster.

On 2020-05-04 3:03 p.m., Robert Samuel Newson wrote:


But we don't need to add a file extension or a timestamp to database names.

B.


On 4 May 2020, at 18:42, Nick Vatamaniuc  wrote:

Hello everyone,

Good idea, +1 with one minor tweak: database name length in versions
<4.0 was restricted by the maximum file name length on whatever file system
the server was running on. In practice that was 255; the shard filename also
carries a timestamp and an extension, which brings the effective db name
limit down to 238, so I suggest using that instead.

-Nick

On Mon, May 4, 2020 at 11:51 AM Robert Samuel Newson  wrote:


Hi,

I think I speak for many in accepting the risk that we're excluding doc ids 
formed from 4096-bit RSA signatures.

I don't think I made it clear but I think these should be fixed limits (i.e, 
not configurable) in order to ensure inter-replication between couchdb 
installations wherever they are.

B.


On 4 May 2020, at 10:52, Ilya Khlopotov  wrote:

Hello,

Thank you Robert for starting this important discussion. I think that the 
values you propose make sense.
I can see a case where a user would use hashes as document ids. All existing hash 
functions I am aware of produce output that fits into 512 characters. There is 
only one case which doesn't fit into the 512 limit: if a user decided to use RSA 
signatures as document ids with 4096-bit keys, the base64-encoded signature would 
be about 684 characters.

However, in this case users can easily replace signatures with hashes of 
signatures, so I wouldn't worry about it too much. 512 sounds plenty to me.

+1 to set hard limits on db name size and doc id size with proposed values.

Best regards,
iilyak

On 2020/05/01 18:36:45, Robert Samuel Newson  wrote:

Hello,

There are other threads related to doc size (etc) limits for CouchDB 4.0, 
motivated by restrictions in FoundationDB, but we haven't discussed database 
name length and doc id length limits. These are encoded into FoundationDB keys 
and so we would be wise to forcibly limit their length from the start.

I propose 256 character limit for database name and 512 character limit for doc 
ids.

If you can't uniquely identify your database or document within those limits I 
argue that you're doing something wrong, and the limits here, while making FDB 
happy, are an aid to sensible application design.

Does anyone want higher or lower limits? Comments pls.

B.








[NOTICE] Scheduling of Apache CouchDB 3.0.1 and 3.1.0 releases

2020-05-04 Thread Joan Touzet

Dear community,

The Apache CouchDB 3.0.1 and 3.1.0 releases are ready to go. There will 
be a short delay while we wait for the Apache mirror system to sync up.


I plan to make the announcement at:

Tuesday, 2020-05-04, 12:00:00 UTC-4

I will use these release notes:

https://docs.couchdb.org/en/stable/whatsnew/3.0.html
https://docs.couchdb.org/en/stable/whatsnew/3.1.html

You can track the status of the Apache mirror system here:

https://www.apache.org/mirrors/

Please make any necessary preparations.

Thanks,
Joan


Re: Back to "Admin Party"

2020-05-03 Thread Joan Touzet

Remove all admin users defined in [admins].
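
On a 2.x single-node install that usually means editing local.ini (the path
varies by platform, e.g. /opt/couchdb/etc/local.ini) and restarting CouchDB.
A sketch of the section to clear out:

  [admins]
  ; delete or comment out every entry here, then restart CouchDB
  ;admin = -pbkdf2-<hash written here by CouchDB when it hashed your password>

With no entries left in [admins], 2.x comes back up in admin party mode.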

On 2020-05-03 3:23 p.m., Bill Stephenson wrote:

No, I want to play with it a bit to see how I can use it in Admin Party mode 
(as if I just installed it).

—Bill




On May 3, 2020, at 2:19 PM, Daniel Holth  wrote:

Do you want to find the local.ini and put a new password in? If you type a
plaintext password into the correct .ini field, CouchDB will hash it on startup.

On Sun, May 3, 2020, 3:18 PM Bill Stephenson 
wrote:


Is there way to reset my CouchDB (2.3.0) to "Admin Party" that I installed
on my Mac Mini?


-Bill








Re: couch on ubuntu 20.4?

2020-05-02 Thread Joan Touzet
There is currently a vote happening for release of CouchDB 3.0.1 and 
3.1.0 on the developers list. 3.1.0 introduces SpiderMonkey 68 support, 
which is required for us to provide packages on Ubuntu 20.04 (Focal Fossa).


Chances are good the vote will pass Monday. That means packages will be 
released Tuesday.


I encourage everyone who is capable of building from source and testing 
functionality to check out the thread on the dev mailing list and 
provide feedback.


-Joan "it's the weekend" Touzet

On 2020-05-02 6:06 p.m., Rene Veerman wrote:

any idea when it'll be out?

coz in order to get rid of bugs while playing movies, i had to upgrade one
of my machines over here, and i doubt running the 18.04 installer is going
to work on a 20.4 system..



Re: can't get couchdb to work on https

2020-05-01 Thread Joan Touzet
Yup! Scroll a little bit down in our docs and we provide a minimal 
working config for Nginx. As the docs say:


"Proxy buffering must be disabled, or continuous replication will not 
function correctly behind nginx."


https://docs.couchdb.org/en/latest/best-practices/reverse-proxies.html?highlight=haproxy#reverse-proxying-with-nginx
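
The shape of it, as a minimal sketch (hostname and cert paths are examples;
the key line for continuous replication is proxy_buffering off):

  server {
      listen 443 ssl;
      server_name couch.example.com;
      ssl_certificate     /etc/letsencrypt/live/couch.example.com/fullchain.pem;
      ssl_certificate_key /etc/letsencrypt/live/couch.example.com/privkey.pem;
      location / {
          proxy_pass http://127.0.0.1:5984;
          proxy_redirect off;
          proxy_buffering off;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
  }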

-Joan "lisp machines are fun" Touzet

On 2020-05-01 15:29, Joel Jucá wrote:

You can make this setup using Nginx too. I'm unsure about haproxy but Nginx
is quite trivial to setup.

On Fri, May 1, 2020 at 4:26 PM Joan Touzet  wrote:


Hi Bill,

haproxy should be as simple as installing the binary on your *NIX
platform, then using something similar to our shipped configuration:


https://docs.couchdb.org/en/latest/best-practices/reverse-proxies.html?highlight=haproxy#reverse-proxying-with-haproxy


Also, I see this walkthrough is referenced elsewhere as working for
Let's Encrypt and CouchDB:


https://www.joshmorony.com/creating-a-couchdb-database-on-an-ubuntu-server-digital-ocean/

Hope they help,
Joan "3.0.1 and 3.1.0 out hopefully next week" Touzet

On 2020-05-01 15:16, Bill Stephenson wrote:

FWIW, I tried the instructions I provided earlier this week and didn’t

get them to work again. I don’t know if it’s a change made by Let’s Encrypt
or I forget exactly what I did.


I’ll go through the process setting up a Digital Ocean vps again as soon

as I get some time because getting those certs configured has always been a
bit of a pain and it’d be a good thing to nail that process down.


If anyone has a list of instruction on setting up haproxy they can share

I’d be glad to have them and give that a shot too.



Kindest Regards,

Bill Stephenson
Tech Support
www.cherrypc.com <http://www.ezinvoice.com/>
1-417-546-8390





On Apr 30, 2020, at 3:56 PM, Joan Touzet  wrote:

On 2020-04-30 16:22, Rene Veerman wrote:

i'm really only looking for a quick and easy way to getting https to

work

again..


Bill Stephenson gave you a step-by-step that seemed reasonable to me.


do the creators of couchdb read this mailinglist?


Yes.

Most of us terminate SSL ahead of CouchDB at a reverse proxy (such as

haproxy). Some of us have even contemplated dropping native SSL support in
CouchDB entirely, because configuring it is a bit of a pain, as you've
found. But it can be done, and it does work.


For SSL in pure CouchDB, when I must, I use something like EasyRSA:

   https://github.com/OpenVPN/easy-rsa

to generate the certs, then munge them together and start it. It works

OK. But I do this about once every 2 years max.


-Joan "Erlang's SSL support isn't great" Touzet



On Sun, Apr 26, 2020 at 3:04 PM Joel Jucá 

wrote:

Rene,

Your problem seems to be infrastructure-related, rather than CouchDB
related. I would recommend you to read about Infrastructure as Code. This
is a practice that allows a developer to declare its infrastructure (in
your specific case, server configuration) and have some sort of
reproducibility from it. Then, you could also understand every single
change made to your server infrastructure - and even share it as a Gist,
for instance, and have some sort of feedback/pull request directly on it.

I would recommend you Ansible (
https://www.ansible.com/resources/get-started).
It's a great solution that allows you to declare your server configuration
as YAML files and use it within Ansible CLI to reproduce the declared
configuration on a targeted server (eg: your Ubuntu-powered CouchDB
server).

I've struggled a lot with server configuration back in 2010-2012 when I was
a full-stack PHP/Drupal developer, and after discovering Ansible I could
never imagine myself handling performing a complex task (server
configuration) manually!

I hope it helps you in some way.

On Sat, Apr 25, 2020 at 6:28 PM Rene Veerman 


wrote:


yes, i did..

On Sat, Apr 25, 2020 at 9:16 PM Bill Stephenson




wrote:


Did you do a "sudo ufw allow 6984”?


Kindest Regards,

Bill Stephenson
Tech Support
www.cherrypc.com <http://www.ezinvoice.com/>
1-417-546-8390





On Apr 25, 2020, at 9:28 AM, Rene Veerman 


wrote:


also (FYI) : i have already entered the right port forwarding

commands

into

my ADSL modem..

On Sat, Apr 25, 2020 at 4:21 PM Rene Veerman <

seductivea...@gmail.com>

wrote:


that gets me a 'connection refused' :

('albatross' === localhost === nicer.app)

root@albatross:/opt/couchdb/letsencrypt# service couchdb stop
root@albatross:/opt/couchdb/letsencrypt# telnet localhost 6984
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
root@albatross:/opt/couchdb/letsencrypt# telnet nicer.app 6984
Trying 127.0.0.1...
Trying 82.161.37.94...
telnet: Unable to connect to remote host: Connection refused
root@albatross:/opt/couchdb/letsencrypt#

On Sat, Apr 25, 2020 at 1:41 PM Florian Westreicher <

st...@meredrica.org>

wrote:


Did you try to telnet to the port while couchdb is down? If there is no open port, telnet won't connect.

Re: can't get couchdb to work on https

2020-05-01 Thread Joan Touzet

Hi Bill,

haproxy should be as simple as installing the binary on your *NIX 
platform, then using something similar to our shipped configuration:


https://docs.couchdb.org/en/latest/best-practices/reverse-proxies.html?highlight=haproxy#reverse-proxying-with-haproxy
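
Very roughly, the shape is (names and paths are examples; the haproxy.cfg in
the docs above is more complete, with timeouts and health checks):

  frontend couchdb_https
      mode http
      bind *:6984 ssl crt /etc/haproxy/couch.pem   # cert + key concatenated into one PEM
      default_backend couchdb

  backend couchdb
      mode http
      server couch1 127.0.0.1:5984 check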


Also, I see this walkthrough is referenced elsewhere as working for 
Let's Encrypt and CouchDB:


https://www.joshmorony.com/creating-a-couchdb-database-on-an-ubuntu-server-digital-ocean/

Hope they help,
Joan "3.0.1 and 3.1.0 out hopefully next week" Touzet

On 2020-05-01 15:16, Bill Stephenson wrote:

FWIW, I tried the instructions I provided earlier this week and didn’t get them 
to work again. I don’t know if it’s a change made by Let’s Encrypt or I forget 
exactly what I did.

I’ll go through the process setting up a Digital Ocean vps again as soon as I 
get some time because getting those certs configured has always been a bit of a 
pain and it’d be a good thing to nail that process down.

If anyone has a list of instruction on setting up haproxy they can share I’d be 
glad to have them and give that a shot too.


Kindest Regards,

Bill Stephenson
Tech Support
www.cherrypc.com <http://www.ezinvoice.com/>
1-417-546-8390





On Apr 30, 2020, at 3:56 PM, Joan Touzet  wrote:

On 2020-04-30 16:22, Rene Veerman wrote:

i'm really only looking for a quick and easy way to getting https to work
again..


Bill Stephenson gave you a step-by-step that seemed reasonable to me.


do the creators of couchdb read this mailinglist?


Yes.

Most of us terminate SSL ahead of CouchDB at a reverse proxy (such as haproxy). 
Some of us have even contemplated dropping native SSL support in CouchDB 
entirely, because configuring it is a bit of a pain, as you've found. But it 
can be done, and it does work.

For SSL in pure CouchDB, when I must, I use something like EasyRSA:

  https://github.com/OpenVPN/easy-rsa

to generate the certs, then munge them together and start it. It works OK. But 
I do this about once every 2 years max.

-Joan "Erlang's SSL support isn't great" Touzet



On Sun, Apr 26, 2020 at 3:04 PM Joel Jucá  wrote:

Rene,

Your problem seems to be infrastructure-related, rather than CouchDB
related. I would recommend you to read about Infrastructure as Code. This
is a practice that allows a developer to declare its infrastructure (in
your specific case, server configuration) and have some sort of
reproducibility from it. Then, you could also understand every single
change made to your server infrastructure - and even share it as a Gist,
for instance, and have some sort of feedback/pull request directly on it.

I would recommend you Ansible (
https://www.ansible.com/resources/get-started).
It's a great solution that allows you to declare your server configuration
as YAML files and use it within Ansible CLI to reproduce the declared
configuration on a targeted server (eg: your Ubuntu-powered CouchDB
server).

I've struggled a lot with server configuration back in 2010-2012 when I was
a full-stack PHP/Drupal developer, and after discovering Ansible I could
never imagine myself handling performing a complex task (server
configuration) manually!

I hope it helps you in some way.

On Sat, Apr 25, 2020 at 6:28 PM Rene Veerman 
wrote:


yes, i did..

On Sat, Apr 25, 2020 at 9:16 PM Bill Stephenson




wrote:


Did you do a "sudo ufw allow 6984”?


Kindest Regards,

Bill Stephenson
Tech Support
www.cherrypc.com <http://www.ezinvoice.com/>
1-417-546-8390





On Apr 25, 2020, at 9:28 AM, Rene Veerman 

wrote:


also (FYI) : i have already entered the right port forwarding

commands

into

my ADSL modem..

On Sat, Apr 25, 2020 at 4:21 PM Rene Veerman <

seductivea...@gmail.com>

wrote:


that gets me a 'connection refused' :

('albatross' === localhost === nicer.app)

root@albatross:/opt/couchdb/letsencrypt# service couchdb stop
root@albatross:/opt/couchdb/letsencrypt# telnet localhost 6984
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
root@albatross:/opt/couchdb/letsencrypt# telnet nicer.app 6984
Trying 127.0.0.1...
Trying 82.161.37.94...
telnet: Unable to connect to remote host: Connection refused
root@albatross:/opt/couchdb/letsencrypt#

On Sat, Apr 25, 2020 at 1:41 PM Florian Westreicher <

st...@meredrica.org>

wrote:


Did you try to telnet to the port while couchdb is down? If there

is

no

open port, telnet won't connect.


On April 25, 2020 03:50:56 Rene Veerman 

wrote:




unfortunately that didn't fix things either. i'm still stuck at

the

eaddrinuse error..



[info] 2020-04-25T01:49:15.730815Z couchdb@127.0.0.1 <0.232.0>



Apache CouchDB has started on https://0.0.0.0:6984/
[info] 2020-04-25T01:49:15.731032Z couchdb@127.0.0.1 <0.11.0>



Application couch started on node 'couchdb@127.0.0.1'
[info] 2020-04-25T01:49:15.731178Z couchdb@127.0.0.1 <0.11.0>



Application ets_lru started on node 'couchdb@127.0.0.1'

3.0.1-RC2 and 3.1.0-RC2 Windows, Linux test binaries uploaded

2020-04-30 Thread Joan Touzet

You can find the binaries in the usual places:

https://dist.apache.org/repos/dist/dev/couchdb/binary/win/3.0.1/rc.2/
https://dist.apache.org/repos/dist/dev/couchdb/binary/win/3.1.0/rc.2/

https://repo-nightly.couchdb.org/ (check in the 3.x and 3.0.x folders)

-Joan "enough CouchDB for today" Touzet


Re: can't get couchdb to work on https

2020-04-30 Thread Joan Touzet

On 2020-04-30 16:22, Rene Veerman wrote:

i'm really only looking for a quick and easy way to getting https to work
again..


Bill Stephenson gave you a step-by-step that seemed reasonable to me.


do the creators of couchdb read this mailinglist?


Yes.

Most of us terminate SSL ahead of CouchDB at a reverse proxy (such as 
haproxy). Some of us have even contemplated dropping native SSL support 
in CouchDB entirely, because configuring it is a bit of a pain, as 
you've found. But it can be done, and it does work.


For SSL in pure CouchDB, when I must, I use something like EasyRSA:

  https://github.com/OpenVPN/easy-rsa

to generate the certs, then munge them together and start it. It works 
OK. But I do this about once every 2 years max.
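
For reference, the native-TLS bits of local.ini end up looking roughly like
this (paths are examples; see the [ssl] section of the docs for the full set
of options):

  [ssl]
  enable = true
  cert_file = /opt/couchdb/certs/couchdb.crt
  key_file = /opt/couchdb/certs/couchdb.key
  ; cacert_file = /opt/couchdb/certs/chain.crt   ; only if an intermediate chain is needed

CouchDB then listens for HTTPS on port 6984 by default.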


-Joan "Erlang's SSL support isn't great" Touzet



On Sun, Apr 26, 2020 at 3:04 PM Joel Jucá  wrote:


Rene,

Your problem seems to be infrastructure-related, rather than CouchDB
related. I would recommend you to read about Infrastructure as Code. This
is a practice that allows a developer to declare its infrastructure (in
your specific case, server configuration) and have some sort of
reproducibility from it. Then, you could also understand every single
change made to your server infrastructure - and even share it as a Gist,
for instance, and have some sort of feedback/pull request directly on it.

I would recommend you Ansible (
https://www.ansible.com/resources/get-started).
It's a great solution that allows you to declare your server configuration
as YAML files and use it within Ansible CLI to reproduce the declared
configuration on a targeted server (eg: your Ubuntu-powered CouchDB
server).

I've struggled a lot with server configuration back in 2010-2012 when I was
a full-stack PHP/Drupal developer, and after discovering Ansible I could
never imagine myself handling performing a complex task (server
configuration) manually!

I hope it helps you in some way.

On Sat, Apr 25, 2020 at 6:28 PM Rene Veerman 
wrote:


yes, i did..

On Sat, Apr 25, 2020 at 9:16 PM Bill Stephenson




wrote:


Did you do a "sudo ufw allow 6984”?


Kindest Regards,

Bill Stephenson
Tech Support
www.cherrypc.com 
1-417-546-8390





On Apr 25, 2020, at 9:28 AM, Rene Veerman 

wrote:


also (FYI) : i have already entered the right port forwarding

commands

into

my ADSL modem..

On Sat, Apr 25, 2020 at 4:21 PM Rene Veerman <

seductivea...@gmail.com>

wrote:


that gets me a 'connection refused' :

('albatross' === localhost === nicer.app)

root@albatross:/opt/couchdb/letsencrypt# service couchdb stop
root@albatross:/opt/couchdb/letsencrypt# telnet localhost 6984
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
root@albatross:/opt/couchdb/letsencrypt# telnet nicer.app 6984
Trying 127.0.0.1...
Trying 82.161.37.94...
telnet: Unable to connect to remote host: Connection refused
root@albatross:/opt/couchdb/letsencrypt#

On Sat, Apr 25, 2020 at 1:41 PM Florian Westreicher <

st...@meredrica.org>

wrote:


Did you try to telnet to the port while couchdb is down? If there

is

no

open port, telnet won't connect.


On April 25, 2020 03:50:56 Rene Veerman 

wrote:




unfortunately that didn't fix things either. i'm still stuck at

the

eaddrinuse error..



[info] 2020-04-25T01:49:15.730815Z couchdb@127.0.0.1 <0.232.0>



Apache CouchDB has started on https://0.0.0.0:6984/
[info] 2020-04-25T01:49:15.731032Z couchdb@127.0.0.1 <0.11.0>



Application couch started on node 'couchdb@127.0.0.1'
[info] 2020-04-25T01:49:15.731178Z couchdb@127.0.0.1 <0.11.0>



Application ets_lru started on node 'couchdb@127.0.0.1'
[notice] 2020-04-25T01:49:15.737605Z couchdb@127.0.0.1 <0.284.0>



rexi_server : started servers
[notice] 2020-04-25T01:49:15.738914Z couchdb@127.0.0.1 <0.288.0>



rexi_buffer : started servers
[info] 2020-04-25T01:49:15.739062Z couchdb@127.0.0.1 <0.11.0>



Application rexi started on node 'couchdb@127.0.0.1'
[notice] 2020-04-25T01:49:15.786354Z couchdb@127.0.0.1 <0.318.0>



mem3_reshard_dbdoc start init()
[notice] 2020-04-25T01:49:15.790014Z couchdb@127.0.0.1 <0.320.0>



mem3_reshard start init()
[notice] 2020-04-25T01:49:15.790112Z couchdb@127.0.0.1 <0.321.0>



mem3_reshard db monitor <0.321.0> starting
[notice] 2020-04-25T01:49:15.792025Z couchdb@127.0.0.1 <0.320.0>



mem3_reshard starting reloading jobs
[notice] 2020-04-25T01:49:15.792087Z couchdb@127.0.0.1 <0.320.0>



mem3_reshard finished reloading jobs
[info] 2020-04-25T01:49:15.792900Z couchdb@127.0.0.1 <0.11.0>



Application mem3 started on node 'couchdb@127.0.0.1'
[info] 2020-04-25T01:49:15.793024Z couchdb@127.0.0.1 <0.11.0>



Application fabric started on node 'couchdb@127.0.0.1'
[error] 2020-04-25T01:49:15.796505Z couchdb@127.0.0.1 <0.330.0>



CRASH REPORT Process  (<0.330.0>) with 0 neighbors exited with

reason:

eaddrinuse at 

Re: [VOTE] Release Apache CouchDB 3.1.0 (RC2)

2020-04-30 Thread Joan Touzet

Testing my own release.

Platform: Windows 10 (1909) and Windows 7 (latest) x86_64

sha256 - matches
sha512 - matches
make check - still failing on the same 2 Elixir tests, but hand-patched.
 I didn't cherry-pick the full fix from #2684 (f796cd1)
 I don't feel this requires an RC3.
build .msi binary - passes
install on Windows 7 - passes
curl http://admin:password@localhost:5984/ :

{"couchdb":"Welcome","version":"3.1.0","git_sha":"ff0feea20","uuid":"ece8ffd5e8cef4de935f24086e9d0bd6","features":["access-ready","partitioned","pluggable-storage-engines","reshard","scheduler"],"vendor":{"name":"The 
Apache Software Foundation"}}


fauxton self-check - passes

+1


On 2020-04-30 14:39, Joan Touzet wrote:

Dear community,

I would like to propose that we release Apache CouchDB 3.1.0.

Changes since the last round:

   https://github.com/apache/couchdb/compare/3.1.0-RC1...3.1.0-RC2

Candidate release notes:

   https://docs.couchdb.org/en/latest/whatsnew/3.1.html

We encourage the whole community to download and test these release 
artefacts so that any critical issues can be resolved before the release 
is made. Everyone is free to vote on this release, so dig right in! 
(Only PMC members have binding votes, but they depend on community 
feedback to gauge if an official release is ready to be made.)


The release artefacts we are voting on are available here:

     https://dist.apache.org/repos/dist/dev/couchdb/source/3.1.0/rc.2/

There, you will find a tarball, a GPG signature, and SHA256/SHA512 
checksums.


Please follow the test procedure here:


https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release 



Please remember that "RC2" is an annotation. If the vote passes, these 
artefacts will be released as Apache CouchDB 3.1.0.


The intent is still to push the successful build on Monday May 4, and 
announce on Tuesday May 5.


Please cast your votes now.

Thanks,
Joan "and have a biscuit" Touzet


Re: [VOTE] Release Apache CouchDB 3.0.1 (RC2)

2020-04-30 Thread Joan Touzet

Testing my own release.

Platform: Windows 10 (1909) and Windows 7 (latest) x86_64

sha256 - matches
sha512 - matches
make check - still failing on the same 2 Elixir tests, but hand-patched.
 I didn't cherry-pick the full fix from #2684 (f796cd1)
 I don't feel this requires an RC3.
build .msi binary - passes
install on Windows 7 - passes
curl http://admin:password@localhost:5984/ :

{"couchdb":"Welcome","version":"3.0.1","git_sha":"895c3748a","uuid":"ece8ffd5e8cef4de935f24086e9d0bd6","features":["access-ready","partitioned","pluggable-storage-engines","reshard","scheduler"],"vendor":{"name":"The 
Apache Software Foundation"}}


fauxton self check - passes

+1

On 2020-04-30 14:38, Joan Touzet wrote:

Dear community,

I would like to propose that we release Apache CouchDB 3.0.1.

Changes since the last round:

     https://github.com/apache/couchdb/compare/3.0.1-RC1...3.0.1-RC2

Candidate release notes:

   https://docs.couchdb.org/en/latest/whatsnew/3.0.html#version-3-0-1

We encourage the whole community to download and test these release 
artefacts so that any critical issues can be resolved before the release 
is made. Everyone is free to vote on this release, so dig right in! 
(Only PMC members have binding votes, but they depend on community 
feedback to gauge if an official release is ready to be made.)


The release artefacts we are voting on are available here:

     https://dist.apache.org/repos/dist/dev/couchdb/source/3.0.1/rc.2/

There, you will find a tarball, a GPG signature, and SHA256/SHA512 
checksums.


Please follow the test procedure here:


https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release 



Please remember that "RC2" is an annotation. If the vote passes, these 
artefacts will be released as Apache CouchDB 3.0.1.


The intent is still to push the successful build on Monday May 4, and 
announce on Tuesday May 5.


Please cast your votes now.

Thanks,
Joan "time to descale the coffee maker" Touzet


[VOTE] Release Apache CouchDB 3.1.0 (RC2)

2020-04-30 Thread Joan Touzet

Dear community,

I would like to propose that we release Apache CouchDB 3.1.0.

Changes since the last round:

  https://github.com/apache/couchdb/compare/3.1.0-RC1...3.1.0-RC2

Candidate release notes:

  https://docs.couchdb.org/en/latest/whatsnew/3.1.html

We encourage the whole community to download and test these release 
artefacts so that any critical issues can be resolved before the release 
is made. Everyone is free to vote on this release, so dig right in! 
(Only PMC members have binding votes, but they depend on community 
feedback to gauge if an official release is ready to be made.)


The release artefacts we are voting on are available here:

https://dist.apache.org/repos/dist/dev/couchdb/source/3.1.0/rc.2/

There, you will find a tarball, a GPG signature, and SHA256/SHA512 
checksums.


Please follow the test procedure here:


https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release

Please remember that "RC2" is an annotation. If the vote passes, these 
artefacts will be released as Apache CouchDB 3.1.0.


The intent is still to push the successful build on Monday May 4, and 
announce on Tuesday May 5.


Please cast your votes now.

Thanks,
Joan "and have a biscuit" Touzet


[VOTE] Release Apache CouchDB 3.0.1 (RC2)

2020-04-30 Thread Joan Touzet

Dear community,

I would like to propose that we release Apache CouchDB 3.0.1.

Changes since the last round:

https://github.com/apache/couchdb/compare/3.0.1-RC1...3.0.1-RC2

Candidate release notes:

  https://docs.couchdb.org/en/latest/whatsnew/3.0.html#version-3-0-1

We encourage the whole community to download and test these release 
artefacts so that any critical issues can be resolved before the release 
is made. Everyone is free to vote on this release, so dig right in! 
(Only PMC members have binding votes, but they depend on community 
feedback to gauge if an official release is ready to be made.)


The release artefacts we are voting on are available here:

https://dist.apache.org/repos/dist/dev/couchdb/source/3.0.1/rc.2/

There, you will find a tarball, a GPG signature, and SHA256/SHA512 
checksums.


Please follow the test procedure here:


https://cwiki.apache.org/confluence/display/COUCHDB/Testing+a+Source+Release

Please remember that "RC2" is an annotation. If the vote passes, these 
artefacts will be released as Apache CouchDB 3.0.1.


The intent is still to push the successful build on Monday May 4, and 
announce on Tuesday May 5.


Please cast your votes now.

Thanks,
Joan "time to descale the coffee maker" Touzet

