Thanks guys.  I'll play around with it and see what I can come up with.

> On Mar 19, 2022, at 5:56 AM, Jarek Potiuk <[email protected]> wrote:
> 
> I think it is a good idea.
> 
> And possibly even doing it via SQLAlchemy listeners (as long as this
> is configurable) might be a good idea - it might make the
> implementation simpler (no need to worry about the API/WWW/CLI separately),
> and if it becomes part of Airflow core, rather than a plugin, it
> can be added to our CI and become an "easily maintainable" part of
> Airflow.
> 
>> On Sat, Mar 19, 2022 at 8:56 AM Jorrick Sleijster <[email protected]> 
>> wrote:
>> 
>> Hi Chris,
>> 
>> We had the same wish a bit more than a year ago and approached it 
>> from a different angle. We created SQLAlchemy listeners on the Airflow 
>> models for connections and variables (kinda hacky, right?). This meant 
>> that we were able to send notifications about who modified something and 
>> what they modified.
>> This was added using an Airflow plugin. If you want I can create a minimal 
>> example to showcase how it would work.
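>> 
>> In the meantime, a very rough sketch of that wiring (untested here, and 
>> the notification side is just a placeholder; you'd load this module via a 
>> plugin) could look like:
>> 
>> from sqlalchemy import event
>> from airflow.models import Connection, Variable
>> 
>> def _notify(action, target):
>>     # Stand-in: send the alert however you like (Slack, email, audit table, ...).
>>     name = getattr(target, "conn_id", None) or getattr(target, "key", "?")
>>     print(f"{type(target).__name__} {name!r}: {action}")
>> 
>> def _make_listener(action):
>>     def _listener(mapper, connection, target):
>>         _notify(action, target)
>>     return _listener
>> 
>> # Register ORM mapper events for both models.
>> for model in (Connection, Variable):
>>     for action in ("after_insert", "after_update", "after_delete"):
>>         event.listen(model, action, _make_listener(action))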
>> 
>> On the PR idea, even though you can already achieve the requested behavior 
>> with the above tactic, having a less hacky interface would be nice.
>> 
>> I think the cluster policy could be considered. However, it feels like it 
>> serves a rather different purpose from what we are looking at now and 
>> therefore doesn't feel suitable. We could take a similar approach to the 
>> logger config/celery config, in which you specify an importable class/object.
>> 
>> In my mind, a default class could be created that has many different 
>> methods, all of them no-ops ('pass'). The user can then extend this class 
>> and get a more fine-tuned interface than a single function entry 
>> point.
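>> 
>> To make that concrete (the names below are made up, nothing like this 
>> exists in Airflow today), the shape I have in mind is roughly:
>> 
>> class AdministrativeEventListener:
>>     """Default implementation: every hook is a no-op."""
>> 
>>     def on_variable_changed(self, key, action, actor):
>>         pass
>> 
>>     def on_connection_changed(self, conn_id, action, actor):
>>         pass
>> 
>>     def on_dag_paused(self, dag_id, is_paused, actor):
>>         pass
>> 
>> # Users point a config option at their subclass and override only the
>> # events they care about.
>> class MyListener(AdministrativeEventListener):
>>     def on_dag_paused(self, dag_id, is_paused, actor):
>>         print(f"{actor} set is_paused={is_paused} on {dag_id}")  # or send a Slack/email alert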
>> 
>> Personally, I'd be interested in the PR and would argue that this should be 
>> added, but I'd love to hear more opinions.
>> 
>>> On Fri, 18 Mar 2022, 17:31 Chris Redekop, <[email protected]> 
>>> wrote:
>>> 
>>> Hi all!  I'm looking for some advice/direction/opinions...
>>> 
>>> We have a need to be able to audit and/or send alerts whenever someone 
>>> performs a potentially high-impact operation in the UI.  Specifically we 
>>> want to be able to audit all modifications made to variables and 
>>> connections, and we'd like the ability to send alerts whenever someone 
>>> pauses or unpauses a DAG.  We're planning to do the changes ourselves and 
>>> submit a PR, but I'd like to get some feedback before starting. The general 
>>> approach in my mind was to add connection+variable mutations to the 
>>> existing audit logs, and also introduce a new cluster policy hook 
>>> "on_administrative_event(object_type, action_performed, actor, 
>>> event_details)" (or something like that) which would be invoked after 
>>> (potentially) any administrative object is modified.
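>>> 
>>> To give a concrete (purely hypothetical - the hook doesn't exist yet) picture, 
>>> a deployment could then drop something like this into its 
>>> airflow_local_settings.py, next to the existing cluster policies:
>>> 
>>> import logging
>>> 
>>> log = logging.getLogger(__name__)
>>> 
>>> def on_administrative_event(object_type, action_performed, actor, event_details):
>>>     # Audit everything; alert only on DAG pause/unpause.
>>>     log.info("audit: %s %s %s %r", actor, action_performed, object_type, event_details)
>>>     if object_type == "dag" and action_performed in ("paused", "unpaused"):
>>>         send_alert(f"{actor} {action_performed} {event_details.get('dag_id')}")  # send_alert: your own helper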
>>> 
>>> So, is there value to this sort of thing for Airflow in general?  Would a 
>>> PR along these lines have a chance at being merged?  Any advice on 
>>> direction/approach would be greatly appreciated... I've never ventured this 
>>> far into the codebase before, so any code orientation tips would also be 
>>> appreciated.  Thx!
>>> 
>>> - Chris
