Daniel,

The problem is that whenever we do a batch (every month from now on), say
all providers work but just one of them fails in the RCs. If we cancel the
entire vote and start from scratch, it means +3 days. And since getting
the providers to a result already takes a good amount of time, +3 days
just delays it further.

And delaying all the other providers just because one of them fails (say,
the telegram provider) might not be what we want.

So the way I look at it: yes, it is a single VOTE (which we can argue
about and change to a VOTE email per provider to avoid all confusion),
but we are voting on each individual provider too (at least that is how
I have voted until now).

I will stick with my +1 vote.

Regards,
Kaxil

On Thu, Mar 4, 2021 at 8:47 AM Jarek Potiuk <ja...@potiuk.com> wrote:

> Hey Daniel,
>
> The proposal is not new. We have followed the very same process several
> times already, for a number of batches of providers (I think 5 or 6 times
> already), so I honestly do not feel there is anything "new" about it.
>
> I just tried to be helpful and explain what was there already, because I
> think some of the people involved apparently did not realize we had this in
> place. I hope it helps those who missed it to catch up.
>
> Even in the email that I sent there is a link to the process for PMC
> members and contributors, explaining what the responsibilities of PMC
> members and contributors are :) - and those are the very same documents I
> mentioned in the explanation.
>
> Rather than restarting the vote, I would prefer to continue with the
> release process - I know our users are waiting on it, and if we restart the
> vote now this means at least another 3 days of delay. Instead, I would love
> to continue the voting process (we've also done that in the past - the
> voting process lasts until 72 hours pass and 3 +1 votes are cast).
>
> Kaxil is the only one who has voted so far (besides me) - so I will leave
> it to you, Kaxil, whether you would like to withdraw your vote (in case the
> process was not clear and you have now changed your mind). But as soon as
> we have three PMC member votes the voting ends.
>
> J.
>
>
> On Wed, Mar 3, 2021 at 11:58 PM Daniel Imberman <daniel.imber...@gmail.com>
> wrote:
>
>> Hi Jarek,
>>
>> I think all of this sounds fine, but I think that we should start a new
>> vote with this understanding. I wouldn't feel comfortable assuming that any
>> of the previous +1's are still applicable as we have changed what people
>> are +1’ing.
>>
>> At a minimum, I think we could have people re-affirm their votes on this
>> thread based on the new proposal.
>>
>> Once we figure that out then +1 from me :)
>>
>> On Wed, Mar 3, 2021 at 1:44 PM, Jarek Potiuk <ja...@potiuk.com> wrote:
>>
>> Hello Everyone,
>>
>> We need one more PMC member vote in order to be able to release the
>> packages.
>>
>> Just to describe what the current status of the batch is:
>>
>> Based on the discussion here: https://github.com/apache/airflow/issues/14511
>> - I am planning to follow the process we had previously documented in our
>> release procedures: the release manager tries to batch a number of
>> providers into a single "voting process", and when problems are discovered
>> with certain providers, those providers might be excluded.
>>
>> I plan to skip the following providers from this release and release them
>> together on an ad-hoc basis whenever all the relevant issues are merged:
>>
>> * apache.druid, microsoft.azure, apache.beam
>>
>> I also noticed that the snowflake python connector was released this
>> morning as promised -
>> https://pypi.org/project/snowflake-connector-python/2.4.0/ - it fixes
>> the last problem with the dependencies that were plaguing us, so I also
>> plan to remove the snowflake provider from this batch.
>>
>>
>> ---------
>>
>> I just wanted to use the opportunity to describe the current process for
>> deciding on provider releases, because apparently not everyone is aware
>> that we already have an established and documented process for provider
>> releases, one that was discussed on the devlist and is documented in our
>> release process description.
>>
>> Possibly we will extract it into a separate policy and maybe discuss some
>> aspects of it (the discussion was raised today at our dev call), but I
>> wanted to make sure that we are all referring to the same "starting point"
>> and the "process" I based my recent actions on.
>>
>> *1) Batching the providers as the default*
>>
>> The decision on when to release and particularly preference for releasing
>> providers in batches is described in the
>> https://github.com/apache/airflow/blob/master/dev/README_RELEASE_PROVIDER_PACKAGES.md#decide-when-to-release
>>
>> > Decide when to release
>> >
>> > You can release provider packages separately from the main Airflow on an
>> > ad-hoc basis, whenever we find that a given provider needs to be released
>> > - due to new features or due to bug fixes. You can release each provider
>> > package separately, but due to voting and release overhead we try to
>> > group releases of provider packages together.
>>
>> *2) Possibility of excluding certain packages from the release.*
>>
>> The possibility of excluding certain packages which (for whatever reason)
>> we decide to remove at the discretion of release manager is described here:
>> https://github.com/apache/airflow/blob/master/dev/README_RELEASE_PROVIDER_PACKAGES.md#prepare-voting-email-for-providers-release-candidate
>>
>> Prepare voting email for Providers release candidate
>> ....
>>
>> > Due to the nature of packages, not all packages have to be released as
>> > convenience packages in the final release. During the voting process the
>> > voting PMCs might decide to exclude certain packages from the release if
>> > some critical problems have been found in some packages.
>>
>> And the process of removing it is part of the described release process:
>>
>> > In case you decided to remove some of the packages, remove them from
>> > the dist folder now:
>>
>> > ls dist/*<provider>*
>> > rm dist/*<provider>*
>>
>> The issue of excluding certain packages has been discussed in this
>> thread on the mailing list:
>> https://lists.apache.org/thread.html/rc620a8a503cc7b14850c0a2a1fca1f6051d08a7e3e6d2cbdeb691dda%40%3Cdev.airflow.apache.org%3E
>> - where we had a -1 veto from a PMC member on a whole batch of providers
>> in which we found that the cncf.kubernetes and google providers had
>> critical problems.
>>
>> We discussed it then, and two PMC members proposed a solution that was
>> not objected to by anyone in the VOTE thread - to remove the packages from
>> the batch.
>>
>> I continued this in the continuation of the voting thread
>> https://lists.apache.org/thread.html/r752c5d5171de4ff626663d30e1d50c4b0d2994f66bf8918d816dabd8%40%3Cdev.airflow.apache.org%3E
>> with a message that pointed to my proposal, linked to the message above,
>> and asked for comments:
>>
>> > As discussed before: -1 on a single provider does not invalidate the
>> > whole vote (from
>> > https://github.com/apache/airflow/tree/master/dev#vote-and-verify-the-backport-providers-release-candidate):
>>
>> > "Due to the nature of backport packages, not all packages have to be
>> > released as convenience packages in the final release. During the voting
>> > process the voting PMCs might decide to exclude certain packages from
>> > the release if some critical problems have been found in some packages."
>>
>> > We will merge the fix and most likely release a new google package right
>> > after this one. Looking at the super-localized problem here, my current
>> > decision will be to release the 2020.10.29 "google" package together with
>> > the other packages, and to release 2020.11.01 (or smth) - but only the
>> > google one - right after we merge the fix.
>>
>> > Any comments to that?
>>
>>
>> J.
>>
>>
>>
>> On Wed, Mar 3, 2021 at 1:23 AM Kaxil Naik <kaxiln...@gmail.com> wrote:
>>
>>> +1 (binding).
>>>
>>> Verified signature and SHA512.
>>>
>>> Based on the changes (and Changelog) I can verify that the following
>>> providers should work fine:
>>>
>>>
>>>    - spark
>>>    - kubernetes
>>>    - jenkins
>>>    - microsoft.azure
>>>    - mysql
>>>    - telegram
>>>    - and all the ones that just have doc changes
>>>
>>>
>>> Regards,
>>> Kaxil
>>>
>>> On Tue, Mar 2, 2021 at 9:01 PM Ryan Hatter <ryannhat...@gmail.com>
>>> wrote:
>>>
>>>> There were some changes to the operator after my PR was merged:
>>>> https://github.com/apache/airflow/blob/master/airflow/providers/google/cloud/transfers/gdrive_to_gcs.py
>>>>
>>>> Pak Andrey (Scuall1992 on GitHub) might be able to confirm the operator
>>>> is functional.
>>>>
>>>> On Mar 2, 2021, at 13:16, Jarek Potiuk <ja...@potiuk.com> wrote:
>>>>
>>>>
>>>> Hello everyone - just a reminder that we have voting (hopefully)
>>>> finishing tomorrow.
>>>>
>>>> I'd love to get some votes for that.
>>>>
>>>> Just to clarify what the PMC votes mean: I believe there were some
>>>> questions raised about the release process, which we are going to
>>>> discuss tomorrow at the dev call, but let me just express my
>>>> interpretation of https://infra.apache.org/release-publishing.html
>>>>
>>>> A PMC member's vote (as I understand it) does not mean that this PMC
>>>> member tested the release functionality (and neither does the Release
>>>> Manager).
>>>> This merely means that the PMC member agrees that the software was
>>>> released according to the requirements and process described in
>>>> https://infra.apache.org/release-publishing.html and that the
>>>> signatures, hash-sums and software packages are as expected by the process.
>>>> This is how I interpret this part of the release process "Release
>>>> managers do the mechanical work; but the PMC in general, and the PMC chair
>>>> in particular (as an officer of the Foundation), are responsible for
>>>> compliance with ASF requirements."
>>>>
>>>> My understanding is that it is not feasible (neither for Airflow nor
>>>> for Providers) for the PMC members (or the release manager) to test the
>>>> software and all its features/bugfixes. We've never done that and I
>>>> believe we never will. We reach out to the community to test, and we
>>>> make a best effort to test whatever we release automatically (unit
>>>> tests, integration tests, testing that providers are
>>>> installable/importable with Airflow 2.0 and the latest source code of
>>>> Airflow). And we can hardly do more than that.
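As an aside, the "installable/importable" part of that automated testing can be sketched roughly like this. This is a simplified illustration, not the project's actual CI code, and the provider module names listed are placeholders:

```python
import importlib

def check_importable(modules):
    """Return the modules that fail to import - an empty list means
    every provider package at least imports cleanly."""
    failures = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError:
            failures.append(name)
    return failures

# Placeholder module names; a real check would iterate over every
# "airflow.providers.*" package included in the release batch.
provider_modules = ["airflow.providers.sqlite", "airflow.providers.http"]
broken = check_importable(provider_modules)
```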
>>>>
>>>> Happy to discuss it tomorrow, but in the meantime, if some of the PMC
>>>> members could review the process and check the compliance, so as to be
>>>> ready to cast your votes - I'd love that.
>>>>
>>>> J.
>>>>
>>>> On Tue, Mar 2, 2021 at 8:44 PM Jarek Potiuk <ja...@potiuk.com> wrote:
>>>>
>>>>> Hey Ryan,
>>>>>
>>>>> There is no **must** in re-testing it. Provided that you tested it
>>>>> before with a real GSuite account, that is enough of a confirmation for
>>>>> me ;).
>>>>>
>>>>> J.
>>>>>
>>>>> On Sun, Feb 28, 2021 at 10:00 PM Abdur-Rahmaan Janhangeer <
>>>>> arj.pyt...@gmail.com> wrote:
>>>>>
>>>>>> Salutes for having a GSuite account just for the functionality 👍👍👍
>>>>>>
>>>>>> On Mon, 1 Mar 2021, 00:05 Ryan Hatter, <ryannhat...@gmail.com> wrote:
>>>>>>
>>>>>>> I canceled my GSuite account when my PR for the GDrive-to-GCS
>>>>>>> operator was approved & merged. Could anyone maybe help me ensure
>>>>>>> correct functionality?
>>>>>>>
>>>>>>>
>>>>>>> On Feb 27, 2021, at 08:48, Jarek Potiuk <ja...@potiuk.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>> I created an issue where we will track the status of tests for the
>>>>>>> providers (again - it is an experiment - but I'd really love to get
>>>>>>> feedback on the new providers from those who contributed):
>>>>>>> https://github.com/apache/airflow/issues/14511
>>>>>>>
>>>>>>> On Sat, Feb 27, 2021 at 4:28 PM Jarek Potiuk <ja...@potiuk.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> Hey all,
>>>>>>>>
>>>>>>>> I have just cut the new wave of Airflow Providers packages. This
>>>>>>>> email is calling a vote on the release, which will last for 72 hours
>>>>>>>> + a day for the weekend - which means it will end on Wed 3 Mar
>>>>>>>> 15:59:34 CET 2021.
>>>>>>>>
>>>>>>>> Consider this my (binding) +1.
>>>>>>>>
>>>>>>>> *KIND REQUEST*
>>>>>>>>
>>>>>>>> There was a recent discussion about the test quality of the
>>>>>>>> providers and I would like to try to address it, while still keeping
>>>>>>>> the batch release process every 3 weeks.
>>>>>>>>
>>>>>>>> We need a bit of help from the community. I have a kind request for
>>>>>>>> the authors of fixes and new features. I have grouped the providers
>>>>>>>> into those that likely need more testing and those that do not. I
>>>>>>>> also added the names of those who submitted the changes and are most
>>>>>>>> likely able to verify whether the RC packages solve the problems/add
>>>>>>>> the features.
>>>>>>>>
>>>>>>>> This is a bit of an experiment (apologies for calling people out) -
>>>>>>>> but if we find that it works, we can automate it. I will create a
>>>>>>>> separate issue in GitHub where you will be able to "tick" the boxes
>>>>>>>> for the providers you contributed to. It will not be a blocker if a
>>>>>>>> provider is not tested, but it would be a great help if you could
>>>>>>>> test the new RC provider and see if it works as expected according
>>>>>>>> to your changes.
>>>>>>>>
>>>>>>>> Providers with new features and fixes - these likely need some
>>>>>>>> testing:
>>>>>>>>
>>>>>>>> * *amazon* : Cristòfol Torrens, Ruben Laguna, Arati Nagmal, Ivica
>>>>>>>> Kolenkaš, JavierLopezT
>>>>>>>> * *apache.druid*: Xinbin Huang
>>>>>>>> * *apache.spark*: Igor Khrol
>>>>>>>> * *cncf.kubernetes*: jpyen, Ash Berlin-Taylor, Daniel Imberman
>>>>>>>> * *google*: Vivek Bhojawala, Xinbin Huang, Pak Andrey, uma66, Ryan
>>>>>>>> Yuan, morrme, Sam Wheating, YingyingPeng22, Ryan Hatter,Tobiasz 
>>>>>>>> Kędzierski
>>>>>>>> * *jenkins*: Maxim Lisovsky
>>>>>>>> * *microsoft.azure*: flvndh, yyu
>>>>>>>> * *mysql*: Constantino Schillebeeckx
>>>>>>>> * *qubole*: Xinbin Huang
>>>>>>>> * *salesforce*: Jyoti Dhiman
>>>>>>>> * *slack*: Igor Khrol
>>>>>>>> * *tableau*: Jyoti Dhiman
>>>>>>>> * *telegram*: Shekhar Sing, Adil Khashtamov
>>>>>>>>
>>>>>>>> Providers with doc only changes (no need to test):
>>>>>>>>
>>>>>>>> * apache-beam
>>>>>>>> * apache-hive
>>>>>>>> * dingding
>>>>>>>> * docker
>>>>>>>> * elasticsearch
>>>>>>>> * exasol
>>>>>>>> * http
>>>>>>>> * neo4j
>>>>>>>> * openfaas
>>>>>>>> * papermill
>>>>>>>> * presto
>>>>>>>> * sendgrid
>>>>>>>> * sftp
>>>>>>>> * snowflake
>>>>>>>> * sqlite
>>>>>>>> * ssh
>>>>>>>>
>>>>>>>>
>>>>>>>> Airflow Providers are available at:
>>>>>>>> https://dist.apache.org/repos/dist/dev/airflow/providers/
>>>>>>>>
>>>>>>>> *apache-airflow-providers-<PROVIDER>-*-bin.tar.gz* are the Python
>>>>>>>> "sdist" release - they are also the official "sources" for the
>>>>>>>> provider packages.
>>>>>>>>
>>>>>>>> *apache_airflow_providers_<PROVIDER>-*.whl are the binary
>>>>>>>> Python "wheel" release.
>>>>>>>>
>>>>>>>> The test procedure for PMC members who would like to test the RC
>>>>>>>> candidates is described in:
>>>>>>>>
>>>>>>>> https://github.com/apache/airflow/blob/master/dev/README_RELEASE_PROVIDER_PACKAGES.md#verify-the-release-by-pmc-members
>>>>>>>>
>>>>>>>> and for Contributors:
>>>>>>>>
>>>>>>>>
>>>>>>>> https://github.com/apache/airflow/blob/master/dev/README_RELEASE_PROVIDER_PACKAGES.md#verify-by-contributors
>>>>>>>>
>>>>>>>>
>>>>>>>> Public keys are available at:
>>>>>>>> https://dist.apache.org/repos/dist/release/airflow/KEYS
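For anyone new to the verification steps, the checksum part of it boils down to something like the following sketch. The file here is fabricated for illustration (the real artifacts live under the dist.apache.org URL above), and signature checking additionally uses `gpg --verify <artifact>.asc <artifact>` with the KEYS imported:

```shell
# Fabricate an "artifact" and its recorded SHA512 checksum (illustration only).
echo "example artifact contents" > example-provider-1.0.0rc1.tar.gz
sha512sum example-provider-1.0.0rc1.tar.gz \
    > example-provider-1.0.0rc1.tar.gz.sha512

# What a voter does: re-compute the checksum and compare it against the
# recorded one; sha512sum -c prints "OK" and exits 0 on a match.
sha512sum -c example-provider-1.0.0rc1.tar.gz.sha512
```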
>>>>>>>>
>>>>>>>> Please vote accordingly:
>>>>>>>>
>>>>>>>> [ ] +1 approve
>>>>>>>> [ ] +0 no opinion
>>>>>>>> [ ] -1 disapprove with the reason
>>>>>>>>
>>>>>>>>
>>>>>>>> Only votes from PMC members are binding, but members of the
>>>>>>>> community are
>>>>>>>> encouraged to test the release and vote with "(non-binding)".
>>>>>>>>
>>>>>>>> Please note that the version number excludes the 'rcX' string.
>>>>>>>> This will allow us to rename the artifact without modifying
>>>>>>>> the artifact checksums when we actually release.
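The rename works because a checksum covers only the file contents, never the file name. A quick way to convince yourself, with a throwaway file (names here are made up):

```shell
# Renaming the rcX artifact to the final name leaves the checksum unchanged,
# since the hash is computed over the bytes of the file, not its name.
echo "release payload" > provider-1.0.0rc1.tar.gz
sum_rc=$(sha512sum < provider-1.0.0rc1.tar.gz)

mv provider-1.0.0rc1.tar.gz provider-1.0.0.tar.gz
sum_final=$(sha512sum < provider-1.0.0.tar.gz)

test "$sum_rc" = "$sum_final" && echo "checksums match"
```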
>>>>>>>>
>>>>>>>>
>>>>>>>> Each of the packages contains a link to the detailed changelog. The
>>>>>>>> changelogs are moved to the official airflow documentation:
>>>>>>>> https://github.com/apache/airflow-site/<TODO COPY LINK TO BRANCH>
>>>>>>>>
>>>>>>>> <PASTE ANY HIGH-LEVEL DESCRIPTION OF THE CHANGES HERE!>
>>>>>>>>
>>>>>>>>
>>>>>>>> Note that the links to documentation from the PyPI packages will
>>>>>>>> not work until we merge the changes to the airflow site after
>>>>>>>> releasing the packages officially.
>>>>>>>>
>>>>>>>> https://pypi.org/project/apache-airflow-providers-amazon/1.2.0rc1/
>>>>>>>>
>>>>>>>> https://pypi.org/project/apache-airflow-providers-apache-beam/1.0.1rc1/
>>>>>>>>
>>>>>>>> https://pypi.org/project/apache-airflow-providers-apache-druid/1.1.0rc1/
>>>>>>>>
>>>>>>>> https://pypi.org/project/apache-airflow-providers-apache-hive/1.0.2rc1/
>>>>>>>>
>>>>>>>> https://pypi.org/project/apache-airflow-providers-apache-spark/1.0.2rc1/
>>>>>>>>
>>>>>>>> https://pypi.org/project/apache-airflow-providers-cncf-kubernetes/1.0.2rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-dingding/1.0.2rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-docker/1.0.2rc1/
>>>>>>>>
>>>>>>>> https://pypi.org/project/apache-airflow-providers-elasticsearch/1.0.2rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-exasol/1.1.1rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-google/2.1.0rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-http/1.1.1rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-jenkins/1.1.0rc1/
>>>>>>>>
>>>>>>>> https://pypi.org/project/apache-airflow-providers-microsoft-azure/1.2.0rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-mysql/1.0.2rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-neo4j/1.0.1rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-openfaas/1.1.1rc1/
>>>>>>>>
>>>>>>>> https://pypi.org/project/apache-airflow-providers-papermill/1.0.2rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-presto/1.0.2rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-qubole/1.0.2rc1/
>>>>>>>>
>>>>>>>> https://pypi.org/project/apache-airflow-providers-salesforce/2.0.0rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-sendgrid/1.0.2rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-sftp/1.1.1rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-slack/3.0.0rc1/
>>>>>>>>
>>>>>>>> https://pypi.org/project/apache-airflow-providers-snowflake/1.1.1rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-sqlite/1.0.2rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-ssh/1.2.0rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-tableau/1.0.0rc1/
>>>>>>>> https://pypi.org/project/apache-airflow-providers-telegram/1.0.2rc1/
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>> J.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> +48 660 796 129
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>
>>
>>
>
>
