Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package alloy for openSUSE:Factory checked 
in at 2025-10-13 15:35:55
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/alloy (Old)
 and      /work/SRC/openSUSE:Factory/.alloy.new.18484 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "alloy"

Mon Oct 13 15:35:55 2025 rev:22 rq:1311063 version:1.11.2

Changes:
--------
--- /work/SRC/openSUSE:Factory/alloy/alloy.changes      2025-10-11 
22:50:54.054936047 +0200
+++ /work/SRC/openSUSE:Factory/.alloy.new.18484/alloy.changes   2025-10-13 
15:37:20.702282347 +0200
@@ -1,0 +2,20 @@
+Mon Oct 13 06:54:11 UTC 2025 - Johannes Kastl 
<[email protected]>
+
+- update to 1.11.2 (1.11.1 was not released):
+  * Bugfixes
+    - Fix potential deadlock in loki.source.journal when stopping
+      or reloading the component. (@thampiotr)
+    - Honor sync timeout when waiting for network availability for
+      prometheus.operator.* components. (@dehaansa)
+    - Fix prometheus.exporter.cloudwatch to respect the debug
+      property instead of always emitting debug logs. (@kalleep)
+    - Fix an issue where component shutdown could block
+      indefinitely by adding a warning log message and a deadline
+      of 10 minutes. The deadline can be configured with the
+      --feature.component-shutdown-deadline flag if the default is
+      not suitable. (@thampiotr)
+    - Fix potential deadlocks in loki.source.file and
+      loki.source.journal when component is shutting down.
+      (@kalleep, @thampiotr)
+
+-------------------------------------------------------------------

Old:
----
  alloy-1.11.0.obscpio
  ui-1.11.0.tar.gz

New:
----
  alloy-1.11.2.obscpio
  ui-1.11.2.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ alloy.spec ++++++
--- /var/tmp/diff_new_pack.qEmF8j/_old  2025-10-13 15:37:24.394437539 +0200
+++ /var/tmp/diff_new_pack.qEmF8j/_new  2025-10-13 15:37:24.398437706 +0200
@@ -17,7 +17,7 @@
 
 
 Name:           alloy
-Version:        1.11.0
+Version:        1.11.2
 Release:        0
 Summary:        OpenTelemetry Collector distribution with programmable 
pipelines
 License:        Apache-2.0

++++++ _service ++++++
--- /var/tmp/diff_new_pack.qEmF8j/_old  2025-10-13 15:37:24.490441574 +0200
+++ /var/tmp/diff_new_pack.qEmF8j/_new  2025-10-13 15:37:24.498441910 +0200
@@ -3,7 +3,7 @@
     <param name="url">https://github.com/grafana/alloy</param>
     <param name="scm">git</param>
     <param name="exclude">.git</param>
-    <param name="revision">v1.11.0</param>
+    <param name="revision">v1.11.2</param>
     <param name="versionformat">@PARENT_TAG@</param>
     <param name="versionrewrite-pattern">v(.*)</param>
     <param name="changesgenerate">disable</param>

++++++ alloy-1.11.0.obscpio -> alloy-1.11.2.obscpio ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/alloy-1.11.0/CHANGELOG.md 
new/alloy-1.11.2/CHANGELOG.md
--- old/alloy-1.11.0/CHANGELOG.md       2025-09-30 15:10:57.000000000 +0200
+++ new/alloy-1.11.2/CHANGELOG.md       2025-10-10 15:06:43.000000000 +0200
@@ -7,9 +7,21 @@
 changes that impact end-user behavior are listed; changes to documentation or
 internal API changes are not present.
 
-Main (unreleased)
+v1.11.2
 -----------------
 
+### Bugfixes
+
+- Fix potential deadlock in `loki.source.journal` when stopping or reloading 
the component. (@thampiotr)
+
+- Honor sync timeout when waiting for network availability for `prometheus.operator.*` components. (@dehaansa)
+
+- Fix `prometheus.exporter.cloudwatch` to respect the debug property instead of always emitting debug logs. (@kalleep)
+
+- Fix an issue where component shutdown could block indefinitely by adding a 
warning log message and a deadline of 10 minutes. The deadline can be 
configured with the `--feature.component-shutdown-deadline` flag if the default 
is not suitable. (@thampiotr)
+
+- Fix potential deadlocks in `loki.source.file` and `loki.source.journal` when 
component is shutting down. (@kalleep, @thampiotr)
+
 v1.11.0
 -----------------
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/alloy-1.11.0/docs/sources/reference/cli/run.md 
new/alloy-1.11.2/docs/sources/reference/cli/run.md
--- old/alloy-1.11.0/docs/sources/reference/cli/run.md  2025-09-30 
15:10:57.000000000 +0200
+++ new/alloy-1.11.2/docs/sources/reference/cli/run.md  2025-10-10 
15:06:43.000000000 +0200
@@ -65,6 +65,7 @@
 * `--config.extra-args`: Extra arguments from the original format used by the 
converter.
 * `--stability.level`: The minimum permitted stability level of functionality. 
Supported values: `experimental`, `public-preview`, and `generally-available` 
(default `"generally-available"`).
 * `--feature.community-components.enabled`: Enable community components 
(default `false`).
+* `--feature.component-shutdown-deadline`: Maximum duration to wait for a 
component to shut down before giving up and logging an error (default `"10m"`).
 * `--windows.priority`: The priority to set for the {{< param "PRODUCT_NAME" 
>}} process when running on Windows. This is only available on Windows. 
Supported values: `above_normal`, `below_normal`, `normal`, `high`, `idle`, or 
`realtime` (default `"normal"`).
 
 {{< admonition type="note" >}}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/alloy-1.11.0/docs/sources/reference/components/loki/loki.process.md 
new/alloy-1.11.2/docs/sources/reference/components/loki/loki.process.md
--- old/alloy-1.11.0/docs/sources/reference/components/loki/loki.process.md     
2025-09-30 15:10:57.000000000 +0200
+++ new/alloy-1.11.2/docs/sources/reference/components/loki/loki.process.md     
2025-10-10 15:06:43.000000000 +0200
@@ -580,6 +580,8 @@
 
 The `stage.labels` inner block configures a labels processing stage that can 
read data from the extracted values map and set new labels on incoming log 
entries.
 
+For labels that are static, refer to [`stage.static_labels`][stage.static_labels].
+
 The following arguments are supported:
 
 | Name     | Type          | Description                             | Default 
| Required |
@@ -1459,6 +1461,8 @@
 
 The `stage.static_labels` inner block configures a static_labels processing 
stage that adds a static set of labels to incoming log entries.
 
+For labels that are dynamic, refer to [`stage.labels`][stage.labels].
+
 The following arguments are supported:
 
 | Name     | Type          | Description                                    | 
Default | Required |
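The cross-references added in this hunk distinguish static from dynamic labels. A short, hypothetical `loki.process` sketch of the difference (component names and label values are invented):

```alloy
loki.process "example" {
  // Static: always attach the same label to every entry.
  stage.static_labels {
    values = { env = "prod" }
  }

  // Dynamic: promote a value from the extracted map to a label.
  stage.labels {
    values = { level = "" }
  }

  forward_to = [loki.write.default.receiver]
}
```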
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/alloy-1.11.0/docs/sources/reference/components/otelcol/otelcol.processor.batch.md
 
new/alloy-1.11.2/docs/sources/reference/components/otelcol/otelcol.processor.batch.md
--- 
old/alloy-1.11.0/docs/sources/reference/components/otelcol/otelcol.processor.batch.md
       2025-09-30 15:10:57.000000000 +0200
+++ 
new/alloy-1.11.2/docs/sources/reference/components/otelcol/otelcol.processor.batch.md
       2025-10-10 15:06:43.000000000 +0200
@@ -73,7 +73,7 @@
   Every batch contains up to the `send_batch_max_size` number of spans, log 
records, or metric data points.
   The excess spans, log records, or metric data points aren't lost - instead, 
they're added to the next batch.
 
-For example, assume you set `send_batch_size` to the default `8192` and there 
are 8,000 batched spans.
+For example, assume you set `send_batch_size` to `8192` and there are 8,000 
batched spans.
 If the batch processor receives 8,000 more spans at once, its behavior depends 
on how you configure `send_batch_max_size`:
 
 * If you set `send_batch_max_size` to `0`, the total batch size would be 
16,000 spans which are then flushed as a single batch.
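The splitting rule described above can be modeled in a few lines. This is a simplified sketch of the arithmetic only, not the actual otelcol batch processor implementation:

```go
package main

import "fmt"

// batchSizes models how a flush of `pending` items is cut up: with
// maxSize == 0 everything goes out as a single batch; otherwise the
// items are split into chunks of at most maxSize, with the excess
// carried into the next batch rather than dropped.
func batchSizes(pending, maxSize int) []int {
	if maxSize == 0 {
		return []int{pending} // no cap: flush everything at once
	}
	var out []int
	for pending > 0 {
		n := pending
		if n > maxSize {
			n = maxSize
		}
		out = append(out, n)
		pending -= n
	}
	return out
}

func main() {
	// 8,000 batched spans plus 8,000 arriving at once = 16,000 pending.
	fmt.Println(batchSizes(16000, 0))     // [16000]
	fmt.Println(batchSizes(16000, 10000)) // [10000 6000]
}
```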
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/alloy-1.11.0/docs/sources/set-up/install/kubernetes.md 
new/alloy-1.11.2/docs/sources/set-up/install/kubernetes.md
--- old/alloy-1.11.0/docs/sources/set-up/install/kubernetes.md  2025-09-30 
15:10:57.000000000 +0200
+++ new/alloy-1.11.2/docs/sources/set-up/install/kubernetes.md  2025-10-10 
15:06:43.000000000 +0200
@@ -10,6 +10,8 @@
 
 # Deploy {{% param "FULL_PRODUCT_NAME" %}} on Kubernetes
 
+{{< docs/learning-journeys title="Monitor Kubernetes cluster infrastructure in 
Grafana Cloud" url="https://grafana.com/docs/learning-journeys/kubernetes/"; >}}
+
 {{< param "PRODUCT_NAME" >}} can be deployed on Kubernetes by using the Helm 
chart for {{< param "PRODUCT_NAME" >}}.
 
 ## Before you begin
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/alloy-1.11.0/docs/sources/set-up/install/linux.md 
new/alloy-1.11.2/docs/sources/set-up/install/linux.md
--- old/alloy-1.11.0/docs/sources/set-up/install/linux.md       2025-09-30 
15:10:57.000000000 +0200
+++ new/alloy-1.11.2/docs/sources/set-up/install/linux.md       2025-10-10 
15:06:43.000000000 +0200
@@ -10,6 +10,8 @@
 
 # Install {{% param "FULL_PRODUCT_NAME" %}} on Linux
 
+{{< docs/learning-journeys title="Monitor a Linux server in Grafana Cloud" 
url="https://grafana.com/docs/learning-journeys/linux-server-integration/"; >}}
+
 You can install {{< param "PRODUCT_NAME" >}} as a systemd service on Linux.
 
 ## Before you begin
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/alloy-1.11.0/docs/sources/set-up/install/macos.md 
new/alloy-1.11.2/docs/sources/set-up/install/macos.md
--- old/alloy-1.11.0/docs/sources/set-up/install/macos.md       2025-09-30 
15:10:57.000000000 +0200
+++ new/alloy-1.11.2/docs/sources/set-up/install/macos.md       2025-10-10 
15:06:43.000000000 +0200
@@ -10,6 +10,8 @@
 
 # Install {{% param "FULL_PRODUCT_NAME" %}} on macOS
 
+{{< docs/learning-journeys title="Monitor a macOS system in Grafana Cloud" 
url="https://grafana.com/docs/learning-journeys/macos-integration/"; >}}
+
 You can install {{< param "PRODUCT_NAME" >}} on macOS with Homebrew.
 
 {{< admonition type="note" >}}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/alloy-1.11.0/docs/sources/set-up/install/windows.md 
new/alloy-1.11.2/docs/sources/set-up/install/windows.md
--- old/alloy-1.11.0/docs/sources/set-up/install/windows.md     2025-09-30 
15:10:57.000000000 +0200
+++ new/alloy-1.11.2/docs/sources/set-up/install/windows.md     2025-10-10 
15:06:43.000000000 +0200
@@ -10,86 +10,149 @@
 
 # Install {{% param "FULL_PRODUCT_NAME" %}} on Windows
 
-You can install {{< param "PRODUCT_NAME" >}} on Windows as a standard 
graphical install, or as a silent install.
+You can install {{< param "PRODUCT_NAME" >}} on Windows as a standard 
graphical install, a WinGet install, or a silent install.
 
 ## Standard graphical install
 
 To do a standard graphical install of {{< param "PRODUCT_NAME" >}} on Windows, 
perform the following steps.
 
-1. Navigate to the [latest release][latest] on GitHub.
-
+1. Navigate to the [releases page][releases] on GitHub.
 1. Scroll down to the **Assets** section.
+1. Download `alloy-installer-windows-amd64.exe` or download and extract 
`alloy-installer-windows-amd64.exe.zip`.
+1. Double-click on `alloy-installer-windows-amd64.exe` to install {{< param 
"PRODUCT_NAME" >}}.
 
-1. Download the file called `alloy-installer-windows-amd64.exe.zip`.
+The installer places {{< param "PRODUCT_NAME" >}} in the default directory 
`%PROGRAMFILES%\GrafanaLabs\Alloy`.
 
-1. Extract the downloaded file.
+## WinGet install
 
-1. Double-click on `alloy-installer-windows-amd64.exe` to install {{< param 
"PRODUCT_NAME" >}}.
+To install {{< param "PRODUCT_NAME" >}} with WinGet, perform the following 
steps.
 
-{{< param "PRODUCT_NAME" >}} is installed into the default directory 
`%PROGRAMFILES%\GrafanaLabs\Alloy`.
+1. Make sure that the [WinGet package 
manager](https://learn.microsoft.com/windows/package-manager/winget/) is 
installed.
+1. Run the following command.
 
-## Silent install
+   {{< code >}}
 
-To do a silent install of {{< param "PRODUCT_NAME" >}} on Windows, perform the 
following steps.
+   ```cmd
+   winget install GrafanaLabs.Alloy
+   ```
 
-1. Navigate to the [latest release][latest] on GitHub.
+   ```powershell
+   winget install GrafanaLabs.Alloy
+   ```
 
-1. Scroll down to the **Assets** section.
+   {{< /code >}}
 
-1. Download the file called `alloy-installer-windows-amd64.exe.zip`.
+## Silent install
+
+To do a silent install of {{< param "PRODUCT_NAME" >}} on Windows, perform the 
following steps.
 
-1. Extract the downloaded file.
+1. Navigate to the [releases page][releases] on GitHub.
+1. Scroll down to the **Assets** section.
+1. Download `alloy-installer-windows-amd64.exe` or download and extract `alloy-installer-windows-amd64.exe.zip`.
+1. Run the following command as Administrator.
 
-1. Run the following command in PowerShell or Command Prompt:
+   {{< code >}}
 
    ```cmd
-   <PATH_TO_INSTALLER> /S
+   <PATH>\alloy-installer-windows-amd64.exe /S
    ```
 
+   ```powershell
+   & <PATH>\alloy-installer-windows-amd64.exe /S
+   ```
+
+   {{< /code >}}
+
    Replace the following:
 
    - _`<PATH>`_: The path to the uncompressed installer executable.
 
 ### Silent install options
 
-* `/CONFIG=<path>` Path to the configuration file. Default: 
`$INSTDIR\config.alloy`
-* `/DISABLEREPORTING=<yes|no>` Disable [data collection][]. Default: `no`
-* `/DISABLEPROFILING=<yes|no>` Disable profiling endpoint. Default: `no`
-* `/ENVIRONMENT="KEY=VALUE\0KEY2=VALUE2"` Define environment variables for 
Windows Service. Default: ``
-* `/RUNTIMEPRIORITY="normal|below_normal|above_normal|high|idle|realtime"` Set 
the runtime priority of the {{< param "PRODUCT_NAME" >}} process. Default: 
`normal`
-* `/STABILITY="generally-available|public-preview|experimental"` Set the 
stability level of {{< param "PRODUCT_NAME" >}}. Default: `generally-available`
-* `/USERNAME="<username>"` Set the fully qualified user that Windows will use 
to run the service. Default: `NT AUTHORITY\LocalSystem`
-* `/PASSWORD="<password>"` Set the password of the user that Windows will use 
to run the service. This is not required for standard Windows Service Accounts 
like LocalSystem. Default: ``
+- `/CONFIG=<path>` Path to the configuration file. Default: 
`$INSTDIR\config.alloy`
+- `/DISABLEREPORTING=<yes|no>` Disable [data collection][]. Default: `no`
+- `/DISABLEPROFILING=<yes|no>` Disable profiling endpoint. Default: `no`
+- `/ENVIRONMENT="KEY=VALUE\0KEY2=VALUE2"` Define environment variables for 
Windows Service. Default: ``
+- `/RUNTIMEPRIORITY="normal|below_normal|above_normal|high|idle|realtime"` Set 
the runtime priority of the {{< param "PRODUCT_NAME" >}} process. Default: 
`normal`
+- `/STABILITY="generally-available|public-preview|experimental"` Set the 
stability level of {{< param "PRODUCT_NAME" >}}. Default: `generally-available`
+- `/USERNAME="<username>"` Set the fully qualified user that Windows uses to 
run the service. Default: `NT AUTHORITY\LocalSystem`
+- `/PASSWORD="<password>"` Set the password of the user that Windows uses to 
run the service. This isn't required for standard Windows Service Accounts like 
LocalSystem. Default: ``
 
 {{< admonition type="note" >}}
-The `--windows.priority` flag is in [Public preview][stability] and is not 
covered by {{< param "FULL_PRODUCT_NAME" >}} [backward compatibility][] 
guarantees.
-The `/RUNTIMEPRIORITY` installation option sets this flag, and if Alloy is not 
running with an appropriate stability level it will fail to start.
+The `--windows.priority` flag is in [public preview][stability] and isn't 
covered by {{< param "FULL_PRODUCT_NAME" >}} [backward compatibility][] 
guarantees.
+The `/RUNTIMEPRIORITY` installation option sets this flag, and if {{< param 
"PRODUCT_NAME" >}} isn't running with an appropriate stability level, it fails 
to start.
 
 [stability]: https://grafana.com/docs/release-life-cycle/
 [backward compatibility]: ../../../introduction/backward-compatibility/
+
 {{< /admonition >}}
 
-## Service Configuration
+## Service configuration
 
-{{< param "PRODUCT_NAME" >}} uses the Windows Registry 
`HKLM\Software\GrafanaLabs\Alloy` for service configuration.
+{{< param "PRODUCT_NAME" >}} uses the Windows registry key 
`HKLM\Software\GrafanaLabs\Alloy` for service configuration.
 
-* `Arguments` (Type `REG_MULTI_SZ`) Each value represents a binary argument 
for alloy binary.
-* `Environment` (Type `REG_MULTI_SZ`) Each value represents a environment 
value `KEY=VALUE` for alloy binary.
+- `Arguments`: Type `REG_MULTI_SZ`. Each value represents a binary argument 
for the {{< param "PRODUCT_NAME" >}} binary.
+- `Environment`: Type `REG_MULTI_SZ`. Each value represents an environment 
value `KEY=VALUE` for the {{< param "PRODUCT_NAME" >}} binary.
 
 ## Uninstall
 
-You can uninstall {{< param "PRODUCT_NAME" >}} with Windows Add or Remove 
Programs or `%PROGRAMFILES%\GrafanaLabs\Alloy\uninstall.exe`.
 Uninstalling {{< param "PRODUCT_NAME" >}} stops the service and removes it 
from disk.
 This includes any configuration files in the installation directory.
 
-{{< param "PRODUCT_NAME" >}} can also be silently uninstalled by running 
`uninstall.exe /S` as Administrator.
+### Standard graphical uninstall
+
+To uninstall {{< param "PRODUCT_NAME" >}}, use Add or Remove Programs or run 
the following command as Administrator.
+
+{{< code >}}
+
+```cmd
+%PROGRAMFILES%\GrafanaLabs\Alloy\uninstall.exe
+```
+
+```powershell
+& ${env:PROGRAMFILES}\GrafanaLabs\Alloy\uninstall.exe
+```
+
+{{< /code >}}
+
+### Uninstall with WinGet
+
+To uninstall {{< param "PRODUCT_NAME" >}} with WinGet, run the following command.
+
+{{< code >}}
+
+```cmd
+winget uninstall GrafanaLabs.Alloy
+```
+
+```powershell
+winget uninstall GrafanaLabs.Alloy
+```
+
+{{< /code >}}
+
+### Silent uninstall
+
+To silently uninstall {{< param "PRODUCT_NAME" >}}, run the following command 
as Administrator.
+
+{{< code >}}
+
+```cmd
+%PROGRAMFILES%\GrafanaLabs\Alloy\uninstall.exe /S
+```
+
+```powershell
+& ${env:PROGRAMFILES}\GrafanaLabs\Alloy\uninstall.exe /S
+```
+
+{{< /code >}}
 
 ## Next steps
 
-* [Run {{< param "PRODUCT_NAME" >}}][Run]
-* [Configure {{< param "PRODUCT_NAME" >}}][Configure]
+- [Run {{< param "PRODUCT_NAME" >}}][Run]
+- [Configure {{< param "PRODUCT_NAME" >}}][Configure]
 
-[latest]: https://github.com/grafana/alloy/releases/latest
+[releases]: https://github.com/grafana/alloy/releases
 [data collection]: ../../../data-collection/
 [Run]: ../../run/windows/
 [Configure]: ../../../configure/windows/
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/alloy-1.11.0/docs/sources/set-up/migrate/from-promtail.md 
new/alloy-1.11.2/docs/sources/set-up/migrate/from-promtail.md
--- old/alloy-1.11.0/docs/sources/set-up/migrate/from-promtail.md       
2025-09-30 15:10:57.000000000 +0200
+++ new/alloy-1.11.2/docs/sources/set-up/migrate/from-promtail.md       
2025-10-10 15:06:43.000000000 +0200
@@ -179,6 +179,7 @@
   Make sure that you use the new metric names, for example, in your alerts and 
dashboards queries.
 * The logs produced by {{< param "PRODUCT_NAME" >}} differ from those produced 
by Promtail.
 * {{< param "PRODUCT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} 
[UI][], which differs from the Promtail Web UI.
+* If you are converting a Promtail configuration for a Kubernetes DaemonSet, [modify the generated configuration][single-node-discovery] to ensure `discovery.kubernetes` only discovers Pods residing on the same node as the {{< param "PRODUCT_NAME" >}} Pod.
 
 [Promtail]: https://www.grafana.com/docs/loki/latest/clients/promtail/
 [debugging]: #debugging
@@ -192,3 +193,4 @@
 [DebuggingUI]: ../../../troubleshoot/debug/
 [configuration]: ../../../get-started/configuration-syntax/
 [UI]: ../../../troubleshoot/debug/#alloy-ui
+[single-node-discovery]: 
../../../reference/components/discovery/discovery.kubernetes/#limit-to-only-pods-on-the-same-node
\ No newline at end of file
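The DaemonSet caveat added above points to node-local discovery. A hypothetical sketch of the adjustment follows; the `sys.env` call and the `HOSTNAME` variable (conventionally set to the Pod name via the downward API) are assumptions here, so check the linked `discovery.kubernetes` reference for the exact form:

```alloy
discovery.kubernetes "pods" {
  role = "pod"

  // Only discover Pods scheduled on the same node as this Alloy Pod.
  selectors {
    role  = "pod"
    field = "spec.nodeName=" + sys.env("HOSTNAME")
  }
}
```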
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/alloy-1.11.0/docs/sources/tutorials/send-logs-to-loki.md 
new/alloy-1.11.2/docs/sources/tutorials/send-logs-to-loki.md
--- old/alloy-1.11.0/docs/sources/tutorials/send-logs-to-loki.md        
2025-09-30 15:10:57.000000000 +0200
+++ new/alloy-1.11.2/docs/sources/tutorials/send-logs-to-loki.md        
2025-10-10 15:06:43.000000000 +0200
@@ -117,7 +117,7 @@
     > We recommend using the `Editor` tab to copy and paste the Docker Compose 
file. However, you can also use a terminal editor like `nano` or `vim`.
    {{< /docs/ignore >}}
 
-   ```yaml
+    ```yaml
     version: '3'
     services:
       loki:
@@ -167,12 +167,12 @@
         image: grafana/grafana:11.0.0
         ports:
           - "3000:3000"
-    ```
+     ```
 
 1. To start the local Grafana instance, run the following command.
 
    ```bash
-    docker compose up -d
+   docker compose up -d
    ```
     <!-- INTERACTIVE ignore START -->
     {{< admonition type="note" >}}
@@ -206,12 +206,12 @@
 
 Copy and paste the following component configuration at the top of the file.
 
-   ```alloy
-    local.file_match "local_files" {
-        path_targets = [{"__path__" = "/var/log/*.log"}]
-        sync_period = "5s"
-    }
-   ```
+```alloy
+local.file_match "local_files" {
+  path_targets = [{"__path__" = "/var/log/*.log"}]
+  sync_period = "5s"
+}
+```
 
 This configuration creates a [local.file_match][] component named 
`local_files` which does the following:
 
@@ -223,11 +223,11 @@
 Copy and paste the following component configuration below the previous 
component in your `config.alloy` file:
 
 ```alloy
-  loki.source.file "log_scrape" {
-    targets    = local.file_match.local_files.targets
-    forward_to = [loki.process.filter_logs.receiver]
-    tail_from_end = true
-  }
+loki.source.file "log_scrape" {
+  targets    = local.file_match.local_files.targets
+  forward_to = [loki.process.filter_logs.receiver]
+  tail_from_end = true
+}
 ```
 
 This configuration creates a [`loki.source.file`][loki.source.file] component 
named `log_scrape` which does the following:
@@ -245,14 +245,14 @@
 Copy and paste the following component configuration below the previous 
component in your `config.alloy` file:
 
 ```alloy
-  loki.process "filter_logs" {
-    stage.drop {
-        source = ""
-        expression  = ".*Connection closed by authenticating user root"
-        drop_counter_reason = "noisy"
-      }
-    forward_to = [loki.write.grafana_loki.receiver]
-    }
+loki.process "filter_logs" {
+  stage.drop {
+    source = ""
+    expression  = ".*Connection closed by authenticating user root"
+    drop_counter_reason = "noisy"
+  }
+  forward_to = [loki.write.grafana_loki.receiver]
+}
 ```
 
 The `loki.process` component allows you to transform, filter, parse, and 
enrich log data.
@@ -273,16 +273,16 @@
 Copy and paste this component configuration below the previous component in 
your `config.alloy` file.
 
 ```alloy
-  loki.write "grafana_loki" {
-    endpoint {
-      url = "http://localhost:3100/loki/api/v1/push";
-
-      // basic_auth {
-      //  username = "admin"
-      //  password = "admin"
-      // }
-    }
+loki.write "grafana_loki" {
+  endpoint {
+    url = "http://localhost:3100/loki/api/v1/push";
+
+    // basic_auth {
+    //  username = "admin"
+    //  password = "admin"
+    // }
   }
+}
 ```
 
 This final component creates a [`loki.write`][loki.write] component named 
`grafana_loki` that points to `http://localhost:3100/loki/api/v1/push`.
@@ -337,7 +337,7 @@
 1. Call the `/-/reload` endpoint to tell {{< param "PRODUCT_NAME" >}} to 
reload the configuration file without a system service restart.
 
    ```bash
-    curl -X POST http://localhost:12345/-/reload
+   curl -X POST http://localhost:12345/-/reload
    ```
    <!-- INTERACTIVE ignore START -->
    {{< admonition type="tip" >}}
@@ -360,7 +360,7 @@
 {{< docs/ignore >}}
 
    ```bash
-    sudo systemctl reload alloy
+   sudo systemctl reload alloy
    ```
 
 {{< /docs/ignore >}}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/alloy-1.11.0/internal/alloycli/cmd_run.go 
new/alloy-1.11.2/internal/alloycli/cmd_run.go
--- old/alloy-1.11.0/internal/alloycli/cmd_run.go       2025-09-30 
15:10:57.000000000 +0200
+++ new/alloy-1.11.2/internal/alloycli/cmd_run.go       2025-10-10 
15:06:43.000000000 +0200
@@ -68,6 +68,7 @@
                clusterRejoinInterval: 60 * time.Second,
                disableSupportBundle:  false,
                windowsPriority:       windowspriority.PriorityNormal,
+               taskShutdownDeadline:  10 * time.Minute,
        }
 
        cmd := &cobra.Command{
@@ -166,6 +167,7 @@
        if runtime.GOOS == "windows" {
                cmd.Flags().StringVar(&r.windowsPriority, "windows.priority", 
r.windowsPriority, fmt.Sprintf("Process priority to use when running on 
windows. This flag is currently in public preview. Supported values: %s", 
strings.Join(slices.Collect(windowspriority.PriorityValues()), ", ")))
        }
+       cmd.Flags().DurationVar(&r.taskShutdownDeadline, 
"feature.component-shutdown-deadline", r.taskShutdownDeadline, "Maximum 
duration to wait for a component to shut down before giving up and logging an 
error")
 
        addDeprecatedFlags(cmd)
        return cmd
@@ -201,6 +203,7 @@
        enableCommunityComps         bool
        disableSupportBundle         bool
        windowsPriority              string
+       taskShutdownDeadline         time.Duration
 }
 
 func (fr *alloyRun) Run(cmd *cobra.Command, configPath string) error {
@@ -384,6 +387,7 @@
                        remoteCfgService,
                        uiService,
                },
+               TaskShutdownDeadline: fr.taskShutdownDeadline,
        })
 
        ready = f.Ready
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/alloy-1.11.0/internal/component/loki/source/file/file.go 
new/alloy-1.11.2/internal/component/loki/source/file/file.go
--- old/alloy-1.11.0/internal/component/loki/source/file/file.go        
2025-09-30 15:10:57.000000000 +0200
+++ new/alloy-1.11.2/internal/component/loki/source/file/file.go        
2025-10-10 15:06:43.000000000 +0200
@@ -145,6 +145,20 @@
        })
        defer func() {
                level.Info(c.opts.Logger).Log("msg", "loki.source.file 
component shutting down, stopping readers and positions file")
+
+               // Start black hole drain routine to prevent deadlock when we 
call c.t.Stop().
+               drainCtx, cancelDrain := 
context.WithCancel(context.Background())
+               defer cancelDrain()
+               go func() {
+                       for {
+                               select {
+                               case <-drainCtx.Done():
+                                       return
+                               case <-c.handler.Chan(): // Ignore the 
remaining entries
+                               }
+                       }
+               }()
+
                c.tasksMut.RLock()
                c.stopping.Store(true)
                runner.Stop()
@@ -160,7 +174,11 @@
                case entry := <-c.handler.Chan():
                        c.receiversMut.RLock()
                        for _, receiver := range c.receivers {
-                               receiver.Chan() <- entry
+                               select {
+                               case <-ctx.Done():
+                                       return nil
+                               case receiver.Chan() <- entry:
+                               }
                        }
                        c.receiversMut.RUnlock()
                case <-c.updateReaders:
@@ -178,7 +196,11 @@
                                        select {
                                        case entry := <-c.handler.Chan():
                                                for _, receiver := range 
c.receivers {
-                                                       receiver.Chan() <- entry
+                                                       select {
+                                                       case <-readCtx.Done():
+                                                               return
+                                                       case receiver.Chan() <- 
entry:
+                                                       }
                                                }
                                        case <-readCtx.Done():
                                                return
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/alloy-1.11.0/internal/component/loki/source/journal/journal.go 
new/alloy-1.11.2/internal/component/loki/source/journal/journal.go
--- old/alloy-1.11.0/internal/component/loki/source/journal/journal.go  
2025-09-30 15:10:57.000000000 +0200
+++ new/alloy-1.11.2/internal/component/loki/source/journal/journal.go  
2025-10-10 15:06:43.000000000 +0200
@@ -38,16 +38,15 @@
 
 // Component represents reading from a journal
 type Component struct {
-       mut           sync.RWMutex
-       t             *target.JournalTarget
-       metrics       *target.Metrics
-       o             component.Options
-       handler       chan loki.Entry
-       positions     positions.Positions
-       receivers     []loki.LogsReceiver
-       argsUpdated   chan struct{}
-       args          Arguments
-       healthErr     error
+       mut            sync.RWMutex
+       t              *target.JournalTarget
+       metrics        *target.Metrics
+       o              component.Options
+       handler        chan loki.Entry
+       positions      positions.Positions
+       targetsUpdated chan struct{}
+       args           Arguments
+       healthErr      error
 }
 
 // New creates a new  component.
@@ -73,13 +72,12 @@
        }
 
        c := &Component{
-               metrics:       target.NewMetrics(o.Registerer),
-               o:             o,
-               handler:       make(chan loki.Entry),
-               positions:     positionsFile,
-               receivers:     args.Receivers,
-               argsUpdated:   make(chan struct{}, 1),
-               args:          args,
+               metrics:        target.NewMetrics(o.Registerer),
+               o:              o,
+               handler:        make(chan loki.Entry),
+               positions:      positionsFile,
+               targetsUpdated: make(chan struct{}, 1),
+               args:           args,
        }
        err = c.Update(args)
        return c, err
@@ -88,18 +86,34 @@
 // Run starts the component.
 func (c *Component) Run(ctx context.Context) error {
        defer func() {
+               level.Info(c.o.Logger).Log("msg", "loki.source.journal 
component shutting down")
+               // Start black hole drain routine to prevent deadlock when we 
call c.t.Stop().
+               drainCtx, cancelDrain := 
context.WithCancel(context.Background())
+               defer cancelDrain()
+               go func() {
+                       for {
+                               select {
+                               case <-drainCtx.Done():
+                                       return
+                               case _ = <-c.handler: // Ignore the remaining 
entries
+                               }
+                       }
+               }()
+
+               // Stop existing target
                c.mut.RLock()
+               defer c.mut.RUnlock()
                if c.t != nil {
                        err := c.t.Stop()
                        if err != nil {
                                level.Warn(c.o.Logger).Log("msg", "error 
stopping journal target", "err", err)
                        }
                }
-               c.mut.RUnlock()
-
        }()
        for {
                select {
+               case <-c.targetsUpdated:
+                       c.reloadTargets(ctx)
                case <-ctx.Done():
                        return nil
                case entry := <-c.handler:
@@ -108,31 +122,14 @@
                                Labels: entry.Labels,
                                Entry:  entry.Entry,
                        }
-                       for _, r := range c.receivers {
-                               r.Chan() <- lokiEntry
-                       }
-                       c.mut.RUnlock()
-               case <-c.argsUpdated:
-                       c.mut.Lock()
-                       if c.t != nil {
-                               err := c.t.Stop()
-                               if err != nil {
-                                       level.Error(c.o.Logger).Log("msg", 
"error stopping journal target", "err", err)
+                       for _, r := range c.args.Receivers {
+                               select {
+                               case <-ctx.Done():
+                                       return nil
+                               case r.Chan() <- lokiEntry:
                                }
-                               c.t = nil
-                       }
-                       rcs := 
alloy_relabel.ComponentToPromRelabelConfigs(c.args.RelabelRules)
-                       entryHandler := loki.NewEntryHandler(c.handler, func() 
{})
-
-                       newTarget, err := target.NewJournalTarget(c.metrics, 
c.o.Logger, entryHandler, c.positions, c.o.ID, rcs, convertArgs(c.o.ID, c.args))
-                       if err != nil {
-                               level.Error(c.o.Logger).Log("msg", "error 
creating journal target", "err", err, "path", c.args.Path)
-                               c.healthErr = fmt.Errorf("error creating 
journal target: %w", err)
-                       } else {
-                               c.t = newTarget
-                               c.healthErr = nil
                        }
-                       c.mut.Unlock()
+                       c.mut.RUnlock()
                }
        }
 }
@@ -144,7 +141,7 @@
        defer c.mut.Unlock()
        c.args = newArgs
        select {
-       case c.argsUpdated <- struct{}{}:
+       case c.targetsUpdated <- struct{}{}:
        default: // Update notification already sent
        }
        return nil
@@ -169,6 +166,71 @@
        }
 }
 
+func (c *Component) startDrainingRoutine(parentCtx context.Context) func() {
+       readCtx, cancel := context.WithCancel(parentCtx)
+       c.mut.RLock()
+       defer c.mut.RUnlock()
+       receiversCopy := make([]loki.LogsReceiver, len(c.args.Receivers))
+       copy(receiversCopy, c.args.Receivers)
+       go func() {
+               for {
+                       select {
+                       case <-readCtx.Done():
+                               return
+                       case entry := <-c.handler:
+                               lokiEntry := loki.Entry{
+                                       Labels: entry.Labels,
+                                       Entry:  entry.Entry,
+                               }
+                               for _, r := range receiversCopy {
+                                       r.Chan() <- lokiEntry
+                               }
+                       }
+               }
+       }()
+       return cancel
+}
+
+func (c *Component) reloadTargets(parentCtx context.Context) {
+       // Start draining routine to prevent potential deadlock if target 
attempts to send during Stop().
+       cancel := c.startDrainingRoutine(parentCtx)
+
+       // Grab current state
+       c.mut.RLock()
+       var targetToStop *target.JournalTarget
+       if c.t != nil {
+               targetToStop = c.t
+       }
+       rcs := alloy_relabel.ComponentToPromRelabelConfigs(c.args.RelabelRules)
+       c.mut.RUnlock()
+
+       // Stop existing target
+       if targetToStop != nil {
+               err := targetToStop.Stop()
+               if err != nil {
+                       level.Error(c.o.Logger).Log("msg", "error stopping 
journal target", "err", err)
+               }
+       }
+
+       // Stop draining routine
+       cancel()
+
+       // Create new target
+       c.mut.Lock()
+       defer c.mut.Unlock()
+       c.t = nil
+       entryHandler := loki.NewEntryHandler(c.handler, func() {})
+
+       newTarget, err := target.NewJournalTarget(c.metrics, c.o.Logger, 
entryHandler, c.positions, c.o.ID, rcs, convertArgs(c.o.ID, c.args))
+       if err != nil {
+               level.Error(c.o.Logger).Log("msg", "error creating journal 
target", "err", err, "path", c.args.Path)
+               c.healthErr = fmt.Errorf("error creating journal target: %w", 
err)
+       } else {
+               c.t = newTarget
+               c.healthErr = nil
+       }
+}
+
 func convertArgs(job string, a Arguments) *scrapeconfig.JournalTargetConfig {
        labels := model.LabelSet{
                model.LabelName("job"): model.LabelValue(job),
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/alloy-1.11.0/internal/component/loki/source/syslog/syslog.go 
new/alloy-1.11.2/internal/component/loki/source/syslog/syslog.go
--- old/alloy-1.11.0/internal/component/loki/source/syslog/syslog.go    
2025-09-30 15:10:57.000000000 +0200
+++ new/alloy-1.11.2/internal/component/loki/source/syslog/syslog.go    
2025-10-10 15:06:43.000000000 +0200
@@ -137,11 +137,9 @@
                        case <-readCtx.Done():
                                return
                        case entry := <-c.handler.Chan():
-                               c.mut.RLock()
                                for _, receiver := range fanoutCopy {
                                        receiver.Chan() <- entry
                                }
-                               c.mut.RUnlock()
                        }
                }
        }()
@@ -152,20 +150,23 @@
        // Start draining routine to prevent potential deadlock if targets 
attempt to send during Stop().
        cancel := c.startDrainingRoutine()
 
-       // Stop all targets
+       // Grab current state
        c.mut.RLock()
        var rcs []*relabel.Config
        if len(c.args.RelabelRules) > 0 {
                rcs = 
alloy_relabel.ComponentToPromRelabelConfigs(c.args.RelabelRules)
        }
+       targetsToStop := make([]*st.SyslogTarget, len(c.targets))
+       copy(targetsToStop, c.targets)
+       c.mut.RUnlock()
 
-       for _, l := range c.targets {
+       // Stop existing targets
+       for _, l := range targetsToStop {
                err := l.Stop()
                if err != nil {
                        level.Error(c.opts.Logger).Log("msg", "error while 
stopping syslog listener", "err", err)
                }
        }
-       c.mut.RUnlock()
 
        // Stop draining routine
        cancel()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/alloy-1.11.0/internal/component/prometheus/operator/common/crdmanager.go 
new/alloy-1.11.2/internal/component/prometheus/operator/common/crdmanager.go
--- 
old/alloy-1.11.0/internal/component/prometheus/operator/common/crdmanager.go    
    2025-09-30 15:10:57.000000000 +0200
+++ 
new/alloy-1.11.2/internal/component/prometheus/operator/common/crdmanager.go    
    2025-10-10 15:06:43.000000000 +0200
@@ -364,21 +364,26 @@
        var informer cache.Informer
        var err error
 
+       timeoutCtx, cancel := context.WithTimeout(ctx, 
c.args.InformerSyncTimeout)
+       deadline, _ := timeoutCtx.Deadline()
+       defer cancel()
        backoff := backoff.New(
-               ctx,
+               timeoutCtx,
                backoff.Config{
                        MinBackoff: 1 * time.Second,
                        MaxBackoff: 10 * time.Second,
-                       MaxRetries: 3, // retry up to 3 times
+                       MaxRetries: 0, // Will retry until InformerSyncTimeout 
is reached
                },
        )
-       for backoff.Ongoing() {
+       for {
                // Retry to get the informer in case of a timeout.
                informer, err = getInformer(ctx, informers, prototype, 
c.args.InformerSyncTimeout)
-               if err == nil {
+               nextDelay := backoff.NextDelay()
+               // exit loop on success, timeout, max retries reached, or if 
next backoff exceeds timeout
+               if err == nil || !backoff.Ongoing() || 
time.Now().Add(nextDelay).After(deadline) {
                        break
                }
-               level.Warn(c.logger).Log("msg", "failed to get informer, 
retrying", "next backoff", backoff.NextDelay(), "err", err)
+               level.Warn(c.logger).Log("msg", "failed to get informer, 
retrying", "next backoff", nextDelay, "err", err)
                backoff.Wait()
        }
        if err != nil {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/alloy-1.11.0/internal/runtime/alloy.go 
new/alloy-1.11.2/internal/runtime/alloy.go
--- old/alloy-1.11.0/internal/runtime/alloy.go  2025-09-30 15:10:57.000000000 
+0200
+++ new/alloy-1.11.2/internal/runtime/alloy.go  2025-10-10 15:06:43.000000000 
+0200
@@ -114,6 +114,9 @@
 
        // EnableCommunityComps enables the use of community components.
        EnableCommunityComps bool
+
+       // TaskShutdownDeadline is the maximum duration to wait for a component 
to shut down before giving up and logging an error.
+       TaskShutdownDeadline time.Duration
 }
 
 // Runtime is the Alloy system.
@@ -137,10 +140,11 @@
 // New creates a new, unstarted Alloy controller. Call Run to run the 
controller.
 func New(o Options) *Runtime {
        return newController(controllerOptions{
-               Options:        o,
-               ModuleRegistry: newModuleRegistry(),
-               IsModule:       false, // We are creating a new root controller.
-               WorkerPool:     worker.NewDefaultWorkerPool(),
+               Options:              o,
+               ModuleRegistry:       newModuleRegistry(),
+               IsModule:             false, // We are creating a new root 
controller.
+               WorkerPool:           worker.NewDefaultWorkerPool(),
+               TaskShutdownDeadline: o.TaskShutdownDeadline,
        })
 }
 
@@ -154,6 +158,8 @@
        IsModule          bool               // Whether this controller is for 
a module.
        // A worker pool to evaluate components asynchronously. A default one 
will be created if this is nil.
        WorkerPool worker.Pool
+       // TaskShutdownDeadline is the maximum duration to wait for a component 
to shut down before giving up and logging an error.
+       TaskShutdownDeadline time.Duration
 }
 
 // newController creates a new, unstarted Alloy controller with a specific
@@ -186,7 +192,7 @@
                opts:   o,
 
                updateQueue: controller.NewQueue(),
-               sched:       controller.NewScheduler(log),
+               sched:       controller.NewScheduler(log, 
o.TaskShutdownDeadline),
 
                modules: o.ModuleRegistry,
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/alloy-1.11.0/internal/runtime/internal/controller/scheduler.go 
new/alloy-1.11.2/internal/runtime/internal/controller/scheduler.go
--- old/alloy-1.11.0/internal/runtime/internal/controller/scheduler.go  
2025-09-30 15:10:57.000000000 +0200
+++ new/alloy-1.11.2/internal/runtime/internal/controller/scheduler.go  
2025-10-10 15:06:43.000000000 +0200
@@ -4,12 +4,19 @@
        "context"
        "fmt"
        "sync"
+       "time"
 
        "github.com/go-kit/log"
 
        "github.com/grafana/alloy/internal/runtime/logging/level"
 )
 
+var (
+       // TaskShutdownWarningTimeout is the duration after which a warning is 
logged
+       // when a task is taking too long to shut down.
+       TaskShutdownWarningTimeout = time.Minute
+)
+
 // RunnableNode is any BlockNode which can also be run.
 type RunnableNode interface {
        BlockNode
@@ -18,10 +25,11 @@
 
 // Scheduler runs components.
 type Scheduler struct {
-       ctx     context.Context
-       cancel  context.CancelFunc
-       running sync.WaitGroup
-       logger  log.Logger
+       ctx                  context.Context
+       cancel               context.CancelFunc
+       running              sync.WaitGroup
+       logger               log.Logger
+       taskShutdownDeadline time.Duration
 
        tasksMut sync.Mutex
        tasks    map[string]*task
@@ -31,12 +39,13 @@
 // components which are running.
 //
 // Call Close to stop the Scheduler and all running components.
-func NewScheduler(logger log.Logger) *Scheduler {
+func NewScheduler(logger log.Logger, taskShutdownDeadline time.Duration) 
*Scheduler {
        ctx, cancel := context.WithCancel(context.Background())
        return &Scheduler{
-               ctx:    ctx,
-               cancel: cancel,
-               logger: logger,
+               ctx:                  ctx,
+               cancel:               cancel,
+               logger:               logger,
+               taskShutdownDeadline: taskShutdownDeadline,
 
                tasks: make(map[string]*task),
        }
@@ -52,7 +61,6 @@
 // call to Synchronize.
 func (s *Scheduler) Synchronize(rr []RunnableNode) error {
        s.tasksMut.Lock()
-       defer s.tasksMut.Unlock()
 
        if s.ctx.Err() != nil {
                return fmt.Errorf("Scheduler is closed")
@@ -89,9 +97,9 @@
                )
 
                opts := taskOptions{
-                       Context:  s.ctx,
-                       Runnable: newRunnable,
-                       OnDone: func(err error) {
+                       context:  s.ctx,
+                       runnable: newRunnable,
+                       onDone: func(err error) {
                                defer s.running.Done()
 
                                if err != nil {
@@ -104,12 +112,16 @@
                                defer s.tasksMut.Unlock()
                                delete(s.tasks, nodeID)
                        },
+                       logger:               log.With(s.logger, "taskID", 
nodeID),
+                       taskShutdownDeadline: s.taskShutdownDeadline,
                }
 
                s.running.Add(1)
                s.tasks[nodeID] = newTask(opts)
        }
 
+       // Unlock the tasks mutex so that Stop calls can complete.
+       s.tasksMut.Unlock()
        // Wait for all stopping runnables to exit.
        stopping.Wait()
        return nil
@@ -125,36 +137,64 @@
 
 // task is a scheduled runnable.
 type task struct {
-       ctx    context.Context
-       cancel context.CancelFunc
-       exited chan struct{}
+       ctx      context.Context
+       cancel   context.CancelFunc
+       exited   chan struct{}
+       opts     taskOptions
+       doneOnce sync.Once
 }
 
 type taskOptions struct {
-       Context  context.Context
-       Runnable RunnableNode
-       OnDone   func(error)
+       context              context.Context
+       runnable             RunnableNode
+       onDone               func(error)
+       logger               log.Logger
+       taskShutdownDeadline time.Duration
 }
 
 // newTask creates and starts a new task.
 func newTask(opts taskOptions) *task {
-       ctx, cancel := context.WithCancel(opts.Context)
+       ctx, cancel := context.WithCancel(opts.context)
 
        t := &task{
                ctx:    ctx,
                cancel: cancel,
                exited: make(chan struct{}),
+               opts:   opts,
        }
 
        go func() {
-               err := opts.Runnable.Run(t.ctx)
+               err := opts.runnable.Run(t.ctx)
                close(t.exited)
-               opts.OnDone(err)
+               t.doneOnce.Do(func() {
+                       t.opts.onDone(err)
+               })
        }()
        return t
 }
 
 func (t *task) Stop() {
        t.cancel()
-       <-t.exited
+
+       deadlineDuration := t.opts.taskShutdownDeadline
+       if deadlineDuration == 0 {
+               deadlineDuration = time.Hour * 24 * 365 * 100 // infinite 
timeout ~= 100 years
+       }
+
+       deadlineCtx, deadlineCancel := 
context.WithTimeout(context.Background(), deadlineDuration)
+       defer deadlineCancel()
+
+       for {
+               select {
+               case <-t.exited:
+                       return // Task exited normally.
+               case <-time.After(TaskShutdownWarningTimeout):
+                       level.Warn(t.opts.logger).Log("msg", "task shutdown is 
taking longer than expected")
+               case <-deadlineCtx.Done():
+                       t.doneOnce.Do(func() {
+                               t.opts.onDone(fmt.Errorf("task shutdown 
deadline exceeded"))
+                       })
+                       return // Task took too long to exit, don't wait.
+               }
+       }
 }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/alloy-1.11.0/internal/runtime/internal/controller/scheduler_test.go 
new/alloy-1.11.2/internal/runtime/internal/controller/scheduler_test.go
--- old/alloy-1.11.0/internal/runtime/internal/controller/scheduler_test.go     
2025-09-30 15:10:57.000000000 +0200
+++ new/alloy-1.11.2/internal/runtime/internal/controller/scheduler_test.go     
2025-10-10 15:06:43.000000000 +0200
@@ -1,10 +1,12 @@
 package controller_test
 
 import (
+       "bytes"
        "context"
        "os"
        "sync"
        "testing"
+       "time"
 
        "github.com/go-kit/log"
        "github.com/stretchr/testify/require"
@@ -30,7 +32,7 @@
                        return nil
                }
 
-               sched := controller.NewScheduler(logger)
+               sched := controller.NewScheduler(logger, 1*time.Minute)
                sched.Synchronize([]controller.RunnableNode{
                        fakeRunnable{ID: "component-a", Component: 
mockComponent{RunFunc: runFunc}},
                        fakeRunnable{ID: "component-b", Component: 
mockComponent{RunFunc: runFunc}},
@@ -52,7 +54,7 @@
                        return nil
                }
 
-               sched := controller.NewScheduler(logger)
+               sched := controller.NewScheduler(logger, 1*time.Minute)
 
                for i := 0; i < 10; i++ {
                        // If a new runnable is created, runFunc will panic 
since the WaitGroup
@@ -78,7 +80,7 @@
                        return nil
                }
 
-               sched := controller.NewScheduler(logger)
+               sched := controller.NewScheduler(logger, 1*time.Minute)
 
                sched.Synchronize([]controller.RunnableNode{
                        fakeRunnable{ID: "component-a", Component: 
mockComponent{RunFunc: runFunc}},
@@ -114,3 +116,54 @@
 
 func (mc mockComponent) Run(ctx context.Context) error              { return 
mc.RunFunc(ctx) }
 func (mc mockComponent) Update(newConfig component.Arguments) error { return 
mc.UpdateFunc(newConfig) }
+
+func TestScheduler_TaskTimeoutLogging(t *testing.T) {
+       // Temporarily modify timeout values for testing
+       originalWarningTimeout := controller.TaskShutdownWarningTimeout
+       controller.TaskShutdownWarningTimeout = 50 * time.Millisecond
+       defer func() {
+               controller.TaskShutdownWarningTimeout = originalWarningTimeout
+       }()
+
+       // Create a buffer to capture log output
+       var logBuffer bytes.Buffer
+       logger := log.NewLogfmtLogger(&logBuffer)
+
+       var started sync.WaitGroup
+       started.Add(1)
+
+       // Create a component that will block and not respond to context 
cancellation
+       runFunc := func(ctx context.Context) error {
+               started.Done()
+               // Block indefinitely, ignoring context cancellation
+               // Use a long sleep to simulate a component that doesn't 
respond to cancellation
+               time.Sleep(1 * time.Second)
+               return nil
+       }
+
+       sched := controller.NewScheduler(logger, 150*time.Millisecond)
+
+       // Start a component
+       err := sched.Synchronize([]controller.RunnableNode{
+               fakeRunnable{ID: "blocking-component", Component: 
mockComponent{RunFunc: runFunc}},
+       })
+       require.NoError(t, err)
+       started.Wait()
+
+       // Remove the component, which should trigger the timeout behavior. 
This will block until the component exits.
+       err = sched.Synchronize([]controller.RunnableNode{})
+       require.NoError(t, err)
+
+       logOutput := logBuffer.String()
+       t.Logf("actual log output:\n%s", logOutput)
+
+       // Should contain warning message
+       require.Contains(t, logOutput, "task shutdown is taking longer than 
expected")
+       require.Contains(t, logOutput, "level=warn")
+
+       // Should contain error message
+       require.Contains(t, logOutput, "task shutdown deadline exceeded")
+       require.Contains(t, logOutput, "level=error")
+
+       require.NoError(t, sched.Close())
+}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/alloy-1.11.0/internal/static/integrations/cloudwatch_exporter/cloudwatch_exporter.go
 
new/alloy-1.11.2/internal/static/integrations/cloudwatch_exporter/cloudwatch_exporter.go
--- 
old/alloy-1.11.0/internal/static/integrations/cloudwatch_exporter/cloudwatch_exporter.go
    2025-09-30 15:10:57.000000000 +0200
+++ 
new/alloy-1.11.2/internal/static/integrations/cloudwatch_exporter/cloudwatch_exporter.go
    2025-10-10 15:06:43.000000000 +0200
@@ -40,10 +40,12 @@
        var factory cachingFactory
        var err error
 
+       l := slog.New(newSlogHandler(logging.NewSlogGoKitHandler(logger), 
debug))
+
        if useAWSSDKVersionV2 {
-               factory, err = 
yaceClientsV2.NewFactory(slog.New(logging.NewSlogGoKitHandler(logger)), conf, 
fipsEnabled)
+               factory, err = yaceClientsV2.NewFactory(l, conf, fipsEnabled)
        } else {
-               factory = 
yaceClientsV1.NewFactory(slog.New(logging.NewSlogGoKitHandler(logger)), conf, 
fipsEnabled)
+               factory = yaceClientsV1.NewFactory(l, conf, fipsEnabled)
        }
 
        if err != nil {
@@ -52,7 +54,7 @@
 
        return &exporter{
                name:                 name,
-               logger:               
slog.New(logging.NewSlogGoKitHandler(logger)),
+               logger:               l,
                cachingClientFactory: factory,
                scrapeConf:           conf,
        }, nil
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/alloy-1.11.0/internal/static/integrations/cloudwatch_exporter/logger.go 
new/alloy-1.11.2/internal/static/integrations/cloudwatch_exporter/logger.go
--- old/alloy-1.11.0/internal/static/integrations/cloudwatch_exporter/logger.go 
1970-01-01 01:00:00.000000000 +0100
+++ new/alloy-1.11.2/internal/static/integrations/cloudwatch_exporter/logger.go 
2025-10-10 15:06:43.000000000 +0200
@@ -0,0 +1,30 @@
+package cloudwatch_exporter
+
+import (
+       "context"
+       "log/slog"
+
+       "github.com/grafana/alloy/internal/runtime/logging"
+)
+
+// slogHandler wraps our internal logging adapter with support for the cloudwatch debug field.
+// The cloudwatch exporter checks for the debug logging level and passes it to the AWS SDK, which
+// performs its own logging without going through the logger we pass.
+type slogHandler struct {
+       debug bool
+       *logging.SlogGoKitHandler
+}
+
+func newSlogHandler(logger *logging.SlogGoKitHandler, debug bool) *slogHandler 
{
+       return &slogHandler{
+               debug:            debug,
+               SlogGoKitHandler: logger,
+       }
+}
+
+func (s *slogHandler) Enabled(ctx context.Context, level slog.Level) bool {
+       if level == slog.LevelDebug {
+               return s.debug
+       }
+       return s.SlogGoKitHandler.Enabled(ctx, level)
+}

++++++ alloy.obsinfo ++++++
--- /var/tmp/diff_new_pack.qEmF8j/_old  2025-10-13 15:37:30.214682179 +0200
+++ /var/tmp/diff_new_pack.qEmF8j/_new  2025-10-13 15:37:30.234683020 +0200
@@ -1,5 +1,5 @@
 name: alloy
-version: 1.11.0
-mtime: 1759237857
-commit: 7bd7f7f9a3e8262ae54c2bb4ccc491ae1d7f480e
+version: 1.11.2
+mtime: 1760101603
+commit: 4a8f93cd1755dd8205c3a9d6c35b136c476cff92
 

++++++ ui-1.11.0.tar.gz -> ui-1.11.2.tar.gz ++++++
/work/SRC/openSUSE:Factory/alloy/ui-1.11.0.tar.gz 
/work/SRC/openSUSE:Factory/.alloy.new.18484/ui-1.11.2.tar.gz differ: char 5, 
line 1

++++++ vendor.tar.gz ++++++
/work/SRC/openSUSE:Factory/alloy/vendor.tar.gz 
/work/SRC/openSUSE:Factory/.alloy.new.18484/vendor.tar.gz differ: char 13, line 
1
