Copilot commented on code in PR #13028:
URL: https://github.com/apache/trafficserver/pull/13028#discussion_r2999018964


##########
plugins/healthchecks/healthchecks.cc:
##########
@@ -287,6 +287,32 @@ hc_thread(void *data ATS_UNUSED)
   return nullptr; /* Yeah, that never happens */
 }
 
+/* Free all HCFileInfo nodes and their associated allocations */
+static void
+free_config(HCFileInfo *head)
+{
+  while (head) {
+    HCFileInfo *next = head->_next;
+
+    TSfree(const_cast<char *>(head->ok));
+    TSfree(const_cast<char *>(head->miss));
+    TSfree(head->data.load());
+    TSfree(head);
+    head = next;
+  }
+}
+
+/* Shutdown handler to clean up the global config */
+static int
+hc_shutdown_handler(TSCont /* contp ATS_UNUSED */, TSEvent event, void * /* edata ATS_UNUSED */)
+{
+  if (event == TS_EVENT_LIFECYCLE_SHUTDOWN) {
+    free_config(g_config);
+    g_config = nullptr;

Review Comment:
   `hc_shutdown_handler()` frees `g_config` while `hc_thread()` continues to 
run and dereference `g_config` in its infinite loop. That introduces a 
shutdown-time use-after-free race (and can also free `HCFileData` still in use 
by in-flight intercepts). To make this safe, add a coordinated shutdown for the 
background thread (e.g., a global atomic stop flag + wake/close the inotify fd 
and join the thread) and only free `g_config` after the thread has stopped (or 
move cleanup into the thread after it exits).
   ```suggestion
   /* Shutdown handler */
   static int
   hc_shutdown_handler(TSCont /* contp ATS_UNUSED */, TSEvent event, void * /* edata ATS_UNUSED */)
   {
     if (event == TS_EVENT_LIFECYCLE_SHUTDOWN) {
       /* Intentionally do not free g_config here to avoid races with hc_thread().
        * The process is exiting, so memory will be reclaimed by the OS.
        */
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to