Thank you Tao, just pinged Apache Infra on this ticket.
On Tue, Dec 18, 2018 at 6:38 PM Tao Feng wrote:
> Thanks Feng for the suggestion. Just filed
> https://issues.apache.org/jira/browse/INFRA-17470.
>
> On Tue, Dec 18, 2018 at 6:25 PM Feng Lu wrote:
>
> > Cool, thank you Ash. Kindly let us kn
+1!
Thank you Ash for sharing security vulnerability updates.
On Tue, Jan 8, 2019 at 2:32 PM Ash Berlin-Taylor wrote:
> CVE-2018-20245: LDAP auth backend did not validate SSL certificate for
> Apache Airflow <= 1.10.0
>
> Vendor: The Apache Software Foundation
>
> Versions Affected: <= 1.10.0
>
CVE-2018-20245: LDAP auth backend did not validate SSL certificate for
Apache Airflow <= 1.10.0
Vendor: The Apache Software Foundation
Versions Affected: <= 1.10.0
Description:
The LDAP auth backend (airflow.contrib.auth.backends.ldap_auth) was
misconfigured and contained improper checking of
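For context on what "did not validate SSL certificate" means in practice, here is an illustrative sketch using Python's standard ssl module; this is not the actual Airflow patch (the fixed backend lives in airflow.contrib.auth.backends.ldap_auth), just a demonstration of the difference between a verifying and a non-verifying TLS configuration:

```python
import ssl

# A default SSL context verifies the server's certificate chain and hostname.
secure_ctx = ssl.create_default_context()
print(secure_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(secure_ctx.check_hostname)                    # True

# What the vulnerable configuration effectively amounted to: with
# verify_mode set to CERT_NONE, any certificate is accepted, so a
# man-in-the-middle can impersonate the LDAP server.
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False  # must be disabled before CERT_NONE
insecure_ctx.verify_mode = ssl.CERT_NONE
print(insecure_ctx.verify_mode == ssl.CERT_NONE)    # True
```

Upgrading past the affected versions restores certificate checking in the LDAP auth backend.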
Hi Airflow community,
This post summarizes some security vulnerabilities that were fixed in
Airflow 1.9.0 (which is quite a while ago now) but that we never
formally reported as such.
If you are still on 1.8.2 or earlier we strongly encourage you to
upgrade to the latest version, but at least
Hi Folks!
Many of you have no doubt seen the various announcements about our
graduation to an Apache Top-Level Project (TLP). A special thanks to Sally
Khudairi for running our PR campaign.
Here are some of the links that she shared:
- GlobeNewswire
http://globenewswire.com/news-release/20
While splitting the monolithic Airflow architecture into pieces sounds
good, there is one problem that might be difficult to tackle (or rather
impossible, unless we change the architecture of Airflow significantly) -
namely dependencies/requirements.
The way Airflow uses operators is that its operator
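One mitigation that comes up in discussions like this is deferring provider imports until an operator or hook is actually resolved, so the core package never has to install every provider's dependencies. A hypothetical sketch (the helper name and dotted-path convention are illustrative, not Airflow's API):

```python
import importlib


def resolve_class(dotted_path):
    """Hypothetical helper: import a class lazily by dotted path, so the
    core package does not import a provider's heavy dependencies until a
    DAG actually references one of its operators."""
    module_path, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)


# A stdlib class standing in for a provider hook/operator:
encoder_cls = resolve_class("json.JSONEncoder")
print(encoder_cls.__name__)  # JSONEncoder
```

This only postpones the problem, of course: once the operator runs, the provider dependency must be installed, which is exactly the packaging question being debated here.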
> I don't see it solving any problem other than test speed (which is a big
> one, yes), but it doesn't reduce the workload on the committers.
It's about distributed ownership. For example, I'm not a committer in
pandas, but I am the primary maintainer of pandas-gbq. You're right that if
the set of c
Can someone explain to me how having multiple packages will work in
practice?
How will we ensure that core changes don't break any hooks/operators?
How do we support the logging backends for s3/azure/gcp?
What would the release process be for the "sub"-packages?
There is nothing stopping some
I think the operator should be placed by the source.
If it's MySQLToHiveOperator then it would be placed in MySQL package.
The BIG question here is whether this serves as an actual improvement, like
faster deployment of hook/operator bug fixes to Airflow users (faster than
an actual Airflow release), or this is
> I’m not sure package structure based on whether major providers will fund
development is the right approach.
Regarding data transfer operators that cover 2 different systems, we have a
few choices:
- Place all data transfer operators in a special data transfer repository.
The same problems we