gopidesupavan commented on PR #62963:
URL: https://github.com/apache/airflow/pull/62963#issuecomment-4184289676

   I like the overall direction here, but I wonder if the long-term authoring 
model should be more config-driven than Python-driven.
   
   Instead of requiring DAG authors to define many checks as Python 
validators/factories, could the LLM generate a simple JSON/YAML rule spec from 
the natural-language prompts, and then have the operator execute that spec with 
a deterministic engine?
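   A minimal sketch of what I mean, assuming a made-up spec shape — the field names (`column`, `check`, `min`/`max`) and check names here are purely illustrative, not qualink's or any real tool's actual schema:

   ```python
   import json

   # Illustrative rule spec that an LLM might emit from a natural-language
   # prompt like "price must be present, between 0 and 1000, and ids unique".
   SPEC = json.loads("""
   {
     "rules": [
       {"column": "price", "check": "not_null"},
       {"column": "price", "check": "range", "min": 0, "max": 1000},
       {"column": "id", "check": "unique"}
     ]
   }
   """)

   def run_checks(rows, spec):
       """Deterministically evaluate each rule; no LLM at execution time."""
       failures = []
       for rule in spec["rules"]:
           col, check = rule["column"], rule["check"]
           values = [row.get(col) for row in rows]
           if check == "not_null" and any(v is None for v in values):
               failures.append(f"{col}: null values present")
           elif check == "range" and any(
               v is not None and not (rule["min"] <= v <= rule["max"])
               for v in values
           ):
               failures.append(f"{col}: value out of range")
           elif check == "unique" and len(set(values)) != len(values):
               failures.append(f"{col}: duplicate values")
       return failures

   rows = [
       {"id": 1, "price": 10},
       {"id": 2, "price": None},
       {"id": 2, "price": 2000},
   ]
   ```

   The point being that the engine is a small, testable, deterministic piece of code, while the LLM only ever touches the spec.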
   
   For an example of this pattern, see qualink: https://github.com/gopidesupavan/qualink and 
https://gopidesupavan.github.io/qualink/guide/yaml-config/ 
   
   I'm not certain, but Great Expectations may already have something similar.
   
   That way, the LLM generates rules from the prompts and we simply execute 
them?
   
   I do think a config-first layer for common checks may scale better than 
growing the Python validator API. Python validators could still remain as an 
escape hatch for advanced/custom cases.
   
   If you know of any other config-driven tools, please suggest them. A 
config-driven format is much easier for LLMs to generate rules in, and we just 
execute them. WDYT?
   
   
   On a side note, at my org we have started discussing adopting `qualink` (I 
developed it 😄). It performs nicely: the LLM generates the rules, and they can 
be executed directly against object stores.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
