dacort commented on code in PR #34225: URL: https://github.com/apache/airflow/pull/34225#discussion_r1450895098
##########
airflow/providers/amazon/aws/links/emr.py:
##########

```diff
@@ -66,3 +82,98 @@ def get_log_uri(
         return None
     log_uri = S3Hook.parse_s3_url(cluster_info["LogUri"])
     return "/".join(log_uri)
+
+
+class EmrServerlessLogsLink(BaseAwsLink):
+    """Helper class for constructing Amazon EMR Serverless Logs Link."""
+
+    name = "Spark Driver stdout"
+    key = "emr_serverless_logs"
+
+    def get_link(
+        self,
+        operator: BaseOperator,
+        *,
+        ti_key: TaskInstanceKey,
+    ) -> str:
+        """
+        Link to Amazon Web Services Console.
+
+        :param operator: airflow operator
+        :param ti_key: TaskInstance ID to return link for
+        :return: link to external system
+        """
+        conf = XCom.get_value(key=self.key, ti_key=ti_key)
+        if not conf:
+            return ""
+        hook = EmrServerlessHook(aws_conn_id=conf.get("conn_id"))
+        resp = hook.conn.get_dashboard_for_job_run(
+            applicationId=conf.get("application_id"), jobRunId=conf.get("job_run_id")
+        )
+        o = urlparse(resp["url"])
+        return o._replace(path="/logs/SPARK_DRIVER/stdout.gz").geturl()
```

Review Comment:
   OK, cool. What I did here was change `total_max_attempts` to `1` specifically for the hook that calls `get_dashboard_for_job_run`. With the default timeout of 60s, this means the calls fail fast. I'm fine with this approach, even if they occasionally fail on a network blip.
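   To illustrate the last two lines of `get_link`, here is a small self-contained sketch of the URL rewrite: the dashboard URL returned by `get_dashboard_for_job_run` keeps its scheme, host, and (signed) query string, and only the path is swapped for the Spark driver's stdout log. The example URL below is made up for illustration; only `urllib.parse` from the standard library is used.

   ```python
   from urllib.parse import urlparse

   # Hypothetical dashboard URL, standing in for resp["url"] from
   # get_dashboard_for_job_run.
   dashboard_url = "https://example.emr-serverless.amazonaws.com/dashboard?token=abc123"

   # _replace on the ParseResult swaps only the path component; geturl()
   # reassembles the URL with the original query string intact.
   parsed = urlparse(dashboard_url)
   stdout_url = parsed._replace(path="/logs/SPARK_DRIVER/stdout.gz").geturl()

   print(stdout_url)
   # -> https://example.emr-serverless.amazonaws.com/logs/SPARK_DRIVER/stdout.gz?token=abc123
   ```

   This works because the signed token lives in the query string, so redirecting the path still reaches an authorized resource.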