Script 'mail_helper' called by obssrc

Hello community,

here is the log from the commit of package kubeshark-cli for openSUSE:Factory
checked in at 2026-05-04 12:52:56
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/kubeshark-cli (Old)
 and      /work/SRC/openSUSE:Factory/.kubeshark-cli.new.30200 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "kubeshark-cli"

Mon May 4 12:52:56 2026 rev:33 rq:1350409 version:53.2.5

Changes:
--------
--- /work/SRC/openSUSE:Factory/kubeshark-cli/kubeshark-cli.changes  2026-04-21 12:45:17.303590470 +0200
+++ /work/SRC/openSUSE:Factory/.kubeshark-cli.new.30200/kubeshark-cli.changes  2026-05-04 12:55:26.454230244 +0200
@@ -1,0 +2,94 @@
+Fri May 01 21:27:40 UTC 2026 - Johannes Kastl <[email protected]>
+
+- Update to version 53.2.5 (.4 was not released):
+  Kubeshark 53.2.5 adds native MySQL and PostgreSQL protocol
+  dissection — both enabled out of the box — fixes server-side TLS
+  decryption for pre-fork servers like PostgreSQL, and introduces a
+  configurable dashboard entries limit. The release also includes
+  stream-history playback, a 4× heap memory reduction in the
+  dashboard, and several reliability fixes across the stack.
+  * New Features
+    - MySQL protocol dissector — Full wire-protocol dissection for
+      MySQL traffic on port 3306, enabled by default. Includes
+      worker dissector, hub dependency update, front-end UI
+      support, and e2e tests
+    - PostgreSQL protocol dissector — Full wire-protocol dissection
+      for PostgreSQL traffic on port 5432, enabled by default.
+      Includes worker dissector, hub dependency update, front-end
+      UI support, and e2e tests
+    - Stream history — New API and UI for streaming historical (raw
+      capture) traffic, enabling retrospective browsing of
+      previously captured data with a time-picker range control
+    - Dashboard entries limit — New tap.dashboard.entriesLimit Helm
+      value (default 300000) controls the maximum number of entries
+      the dashboard holds in memory
+  * Improvements
+    - Columnar entry storage — 4× heap memory reduction — Replaces
+      sparse per-entry maps with dictionary-backed columnar
+      storage, dramatically reducing browser memory usage
+    - Decode multi-message gRPC payloads — Request and response
+      views now decode concatenated gRPC frames instead of showing
+      raw bytes
+    - Surface grpc_method / grpc_status as KFL queries in UI —
+      Clicking the Path row on gRPC entries emits grpc_method ==
+      "..." and clicking Grpc-Status emits grpc_status == N
+    - Show both entry namespaces by default — Source and
+      destination namespaces are now always visible in the entry
+      list
+    - Copy button for snapshot failure reason — Adds a clipboard
+      copy button to snapshot error messages for easier debugging
+    - Document gRPC KFL fields in MCP schema — grpc, grpc_method,
+      grpc_status are now documented in the MCP KFL schema
+    - Adjust KFL input boxes — Visual refinements to KFL filter
+      input areas
+    - Release pipeline overhaul — Split the monolithic release-pr
+      Makefile target into three independent, idempotent targets
+      (release-siblings, release-pr-kubeshark, release-pr-helm)
+      that can be rerun individually without side effects
+    - Fix Chart.yaml sync to kubeshark.github.io — The helm PR
+      target was switching back to master before copying the chart,
+      shipping the pre-bump version
+  * Bug Fixes
+    - Fix server-side TLS decryption for pre-fork servers
+      (PostgreSQL) — Adds dual-key connection_context and
+      accept-symbol fallback for servers that fork before accepting
+      connections
+    - Fix TLS stop-capturing — Prevent closing uprobe hooks for TLS
+      workloads that are still being traced
+    - Fix Istio one-leg HTTP — Corrects single-leg HTTP capture in
+      Istio service mesh environments
+    - Fix HTTP api-server one-leg — Resolves single-leg API server
+      traffic capture
+    - Remove sliding-window TLS heuristic from all L7 dissectors —
+      Eliminates false-positive TLS detection that could
+      misclassify plaintext traffic
+    - Fix request/response matcher in MongoDB and Kafka — Corrects
+      pairing logic in the MongoDB and Kafka protocol dissectors
+    - Fix Pebble use-after-close — Resolves a crash from accessing
+      Pebble DB after it has been closed (worker#1114)
+    - Fix re-running dissection on failure — Snapshot dissection
+      can now be retried after a failure without manual
+      intervention
+    - Skip incomplete dissections from cloud upload — Prevents
+      partially dissected snapshots from being uploaded to cloud
+      storage
+    - Ensure auth credentials on all API requests — Mirrors auth
+      headers to cookies for consistent authentication across all
+      request types
+  * Infrastructure & Dependencies
+    - Revert time-boundaries display above snapshots table —
+      Reverted pending a redesign
+    - Update Go to 1.26.2
+    - Update build environment to latest version
+    - Bump deps to close Scout CVEs — Updates moby v2, go-jose,
+      jsonparser, and OpenTelemetry dependencies
+    - Update spdystream — Addresses CVEs
+    - Bump KFL2 and lock in gRPC trailer merge
+    - Drop arm64 race builds due to toolchain limitations
+    - Update registry offsets — Refreshes embedded Go TLS offset
+      bundle
+    - Reduce noisy parse-packet log messages
+    - Update gRPC in e2e tests to address Dependabot issues
+    - Update hub dependencies for Docker Scout compliance
+
+-------------------------------------------------------------------

Old:
----
  kubeshark-cli-53.2.3.obscpio

New:
----
  kubeshark-cli-53.2.5.obscpio

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
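The new tap.dashboard.entriesLimit value mentioned in the changelog is a plain Helm value; a minimal override sketch (values fragment only — the key layout follows the helm-chart/values.yaml hunk further down, and the 100000 figure is purely illustrative):

```yaml
# values.yaml fragment — lower the dashboard cap from the 300000 default
tap:
  dashboard:
    entriesLimit: "100000"
```

The chart templates this into the front deployment as the REACT_APP_ENTRIES_LIMIT environment variable, as the 06-front-deployment.yaml diff below shows.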
------------------
++++++ kubeshark-cli.spec ++++++
--- /var/tmp/diff_new_pack.K3xcRw/_old  2026-05-04 12:55:29.130340386 +0200
+++ /var/tmp/diff_new_pack.K3xcRw/_new  2026-05-04 12:55:29.130340386 +0200
@@ -19,7 +19,7 @@
 %define executable_name kubeshark

 Name:           kubeshark-cli
-Version:        53.2.3
+Version:        53.2.5
 Release:        0
 Summary:        CLI for the API traffic analyzer for Kubernetes
 License:        Apache-2.0

++++++ _service ++++++
--- /var/tmp/diff_new_pack.K3xcRw/_old  2026-05-04 12:55:29.174342197 +0200
+++ /var/tmp/diff_new_pack.K3xcRw/_new  2026-05-04 12:55:29.178342361 +0200
@@ -3,7 +3,7 @@
     <param name="url">https://github.com/kubeshark/kubeshark</param>
     <param name="scm">git</param>
     <param name="exclude">.git</param>
-    <param name="revision">v53.2.3</param>
+    <param name="revision">v53.2.5</param>
     <param name="versionformat">@PARENT_TAG@</param>
     <param name="versionrewrite-pattern">v(.*)</param>
     <param name="changesgenerate">enable</param>

++++++ _servicedata ++++++
--- /var/tmp/diff_new_pack.K3xcRw/_old  2026-05-04 12:55:29.214343843 +0200
+++ /var/tmp/diff_new_pack.K3xcRw/_new  2026-05-04 12:55:29.222344172 +0200
@@ -1,6 +1,6 @@
 <servicedata>
   <service name="tar_scm">
     <param name="url">https://github.com/kubeshark/kubeshark</param>
-    <param name="changesrevision">863be8f47ab4a9f0e20967b87fc7b4dfaf95178d</param></service></servicedata>
+    <param name="changesrevision">ab81b0c3a75bc07c19e5092e94775a633408c170</param></service></servicedata>
(No newline at EOF)

++++++ kubeshark-cli-53.2.3.obscpio -> kubeshark-cli-53.2.5.obscpio ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/kubeshark-cli-53.2.3/Makefile new/kubeshark-cli-53.2.5/Makefile
--- old/kubeshark-cli-53.2.3/Makefile  2026-04-20 15:39:25.000000000 +0200
+++ new/kubeshark-cli-53.2.5/Makefile  2026-05-01 22:36:38.000000000 +0200
@@ -253,52 +253,111 @@
 	kubectl port-forward $$(kubectl get pods | awk '$$1 ~ /^$(POD_PREFIX)/' | awk 'END {print $$1}') $(SRC_PORT):$(DST_PORT)
release: ## Print release workflow instructions. - @echo "Release workflow (2 steps):" + @echo "Release workflow — each step is idempotent and can be rerun on its own:" @echo "" - @echo " 1. make release-pr VERSION=x.y.z" - @echo " Tags sibling repos, bumps version, creates PRs" - @echo " (kubeshark + kubeshark.github.io helm chart)." - @echo " Review and merge both PRs manually." - @echo "" - @echo " 2. (automatic) Tag is created when release PR merges." - @echo " Fallback: make release-tag VERSION=x.y.z" - -release-pr: ## Step 1: Tag sibling repos, bump version, create release PR. - @cd ../worker && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags - @cd ../hub && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags - @cd ../front && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags + @echo " 1. make release-siblings VERSION=x.y.z" + @echo " Tag worker, hub, front with vx.y.z. Also run standalone when" + @echo " rebuilding docker images without cutting a full release." + @echo "" + @echo " 2. make release-pr-kubeshark VERSION=x.y.z" + @echo " Bump Helm Chart.yaml, build, open release PR on kubeshark." + @echo "" + @echo " 3. make release-pr-helm VERSION=x.y.z" + @echo " Sync helm-chart/ into kubeshark.github.io, open helm PR." + @echo " Requires release/vx.y.z branch (created by step 2)." + @echo "" + @echo " Shortcut: make release-pr VERSION=x.y.z runs 1 → 2 → 3." + @echo "" + @echo " After both PRs merge: tag is created automatically," + @echo " or run: make release-tag VERSION=x.y.z" + +# Internal: validate VERSION before any release-* target runs. +_release-check-version: + @if [ -z "$(VERSION)" ]; then echo "ERROR: VERSION is required. Usage: make <target> VERSION=x.y.z"; exit 1; fi + @echo "$(VERSION)" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+' || { echo "ERROR: VERSION must be semver (e.g. 
53.2.4)"; exit 1; } + +release-siblings: _release-check-version ## Tag worker, hub, front with v$(VERSION). Idempotent; standalone for docker-image-only updates. + @for repo in worker hub front; do \ + echo "==> $$repo: ensuring v$(VERSION) tag"; \ + (cd ../$$repo && git checkout master && git pull) || exit 1; \ + if (cd ../$$repo && git ls-remote --tags origin "refs/tags/v$(VERSION)" | grep -q .); then \ + echo " v$(VERSION) already on origin — skipping"; \ + else \ + (cd ../$$repo && git tag -d v$(VERSION) 2>/dev/null; git tag v$(VERSION) && git push origin "refs/tags/v$(VERSION)") || exit 1; \ + fi; \ + done + +release-pr-kubeshark: _release-check-version ## Bump Chart.yaml, build, open release PR on kubeshark. @cd ../kubeshark && git checkout master && git pull - @sed -i '' "s/^version:.*/version: \"$(shell echo $(VERSION) | sed -E 's/^([0-9]+\.[0-9]+\.[0-9]+)\..*/\1/')\"/" helm-chart/Chart.yaml + @NEW=$$(echo $(VERSION) | sed -E 's/^([0-9]+\.[0-9]+\.[0-9]+).*/\1/'); \ + CUR=$$(awk '/^version:/ {gsub(/"/,"",$$2); print $$2; exit}' helm-chart/Chart.yaml); \ + if [ "$$CUR" != "$$NEW" ]; then \ + sed -i '' "s/^version:.*/version: \"$$NEW\"/" helm-chart/Chart.yaml; \ + else \ + echo "Chart.yaml already at $$NEW"; \ + fi @$(MAKE) build VER=$(VERSION) @if [ "$(shell uname)" = "Darwin" ]; then \ codesign --sign - --force --preserve-metadata=entitlements,requirements,flags,runtime ./bin/kubeshark__; \ fi @$(MAKE) generate-helm-values && $(MAKE) generate-manifests + @if git show-ref --verify --quiet refs/heads/release/v$(VERSION); then \ + git branch -D release/v$(VERSION); \ + fi @git checkout -b release/v$(VERSION) @git add -A . - @git commit -m ":bookmark: Bump the Helm chart version to $(VERSION)" - @git push -u origin release/v$(VERSION) - @gh pr create --title ":bookmark: Release v$(VERSION)" \ - --body "Automated release PR for v$(VERSION)." 
\ - --base master \ - --reviewer corest - @git checkout master && git pull - @cd ../kubeshark.github.io \ - && git checkout master && git pull \ - && rm -rf charts/chart \ - && mkdir charts/chart \ - && cp -r ../kubeshark/helm-chart/ charts/chart/ \ - && git checkout -b helm-v$(VERSION) \ - && git add -A . \ - && git commit -m ":sparkles: Update the Helm chart to v$(VERSION)" \ - && git push -u origin helm-v$(VERSION) \ - && gh pr create --title ":sparkles: Helm chart v$(VERSION)" \ + @if ! git diff --cached --quiet; then \ + git commit -m ":bookmark: Bump the Helm chart version to $(VERSION)"; \ + else \ + echo "nothing to commit"; \ + fi + @git push --force-with-lease -u origin release/v$(VERSION) + @if gh pr view release/v$(VERSION) --json number >/dev/null 2>&1; then \ + echo "PR already exists for release/v$(VERSION)"; \ + else \ + gh pr create --title ":bookmark: Release v$(VERSION)" \ + --body "Automated release PR for v$(VERSION)." \ + --base master \ + --reviewer corest; \ + fi + +release-pr-helm: _release-check-version ## Sync helm-chart/ to kubeshark.github.io and open the helm PR. Requires release/v$(VERSION) branch (step 2). + @git fetch origin "refs/heads/release/v$(VERSION):refs/heads/release/v$(VERSION)" 2>/dev/null || true + @if ! git show-ref --verify --quiet refs/heads/release/v$(VERSION); then \ + echo "ERROR: release/v$(VERSION) branch not found locally or on origin."; \ + echo "Run 'make release-pr-kubeshark VERSION=$(VERSION)' first."; \ + exit 1; \ + fi + @git checkout release/v$(VERSION) + @cd ../kubeshark.github.io && git checkout master && git pull \ + && rm -rf charts/chart && mkdir -p charts/chart \ + && cp -r ../kubeshark/helm-chart/ charts/chart/ + @cd ../kubeshark.github.io && \ + if git show-ref --verify --quiet refs/heads/helm-v$(VERSION); then \ + git branch -D helm-v$(VERSION); \ + fi && \ + git checkout -b helm-v$(VERSION) && \ + git add -A . && \ + if ! 
git diff --cached --quiet; then \ + git commit -m ":sparkles: Update the Helm chart to v$(VERSION)"; \ + else \ + echo "nothing to commit"; \ + fi && \ + git push --force-with-lease -u origin helm-v$(VERSION) && \ + if ! gh pr view helm-v$(VERSION) --json number >/dev/null 2>&1; then \ + gh pr create --title ":sparkles: Helm chart v$(VERSION)" \ --body "Update Helm chart for release v$(VERSION)." \ --base master \ - --reviewer corest \ - && git checkout master + --reviewer corest; \ + else \ + echo "PR already exists for helm-v$(VERSION)"; \ + fi && \ + git checkout master + @cd ../kubeshark && git checkout master && git pull + +release-pr: release-siblings release-pr-kubeshark release-pr-helm ## Run release-siblings, release-pr-kubeshark, and release-pr-helm in sequence. @echo "" - @echo "Release PRs created:" + @echo "Release PRs created (or already present):" @echo " - kubeshark: Review and merge the release PR." @echo " - kubeshark.github.io: Review and merge the helm chart PR." @echo "Tag will be created automatically, or run: make release-tag VERSION=$(VERSION)" diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/kubeshark-cli-53.2.3/config/configStruct.go new/kubeshark-cli-53.2.5/config/configStruct.go --- old/kubeshark-cli-53.2.3/config/configStruct.go 2026-04-20 15:39:25.000000000 +0200 +++ new/kubeshark-cli-53.2.5/config/configStruct.go 2026-05-01 22:36:38.000000000 +0200 @@ -129,6 +129,8 @@ "icmp", "kafka", "mongodb", + "mysql", + "postgresql", "redis", // "sctp", // "syscall", @@ -149,7 +151,9 @@ AMQP: []uint16{5671, 5672}, KAFKA: []uint16{9092}, MONGODB: []uint16{27017}, - REDIS: []uint16{6379}, + MYSQL: []uint16{3306}, + POSTGRESQL: []uint16{5432}, + REDIS: []uint16{6379}, LDAP: []uint16{389}, DIAMETER: []uint16{3868}, }, diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/kubeshark-cli-53.2.3/config/configStructs/tapConfig.go 
new/kubeshark-cli-53.2.5/config/configStructs/tapConfig.go --- old/kubeshark-cli-53.2.3/config/configStructs/tapConfig.go 2026-04-20 15:39:25.000000000 +0200 +++ new/kubeshark-cli-53.2.5/config/configStructs/tapConfig.go 2026-05-01 22:36:38.000000000 +0200 @@ -203,6 +203,7 @@ StreamingType string `yaml:"streamingType" json:"streamingType" default:"connect-rpc"` CompleteStreamingEnabled bool `yaml:"completeStreamingEnabled" json:"completeStreamingEnabled" default:"true"` ClusterWideMapEnabled bool `yaml:"clusterWideMapEnabled" json:"clusterWideMapEnabled" default:"false"` + EntriesLimit string `yaml:"entriesLimit" json:"entriesLimit" default:"300000"` } type FrontRoutingConfig struct { @@ -283,7 +284,9 @@ AMQP []uint16 `yaml:"amqp" json:"amqp"` KAFKA []uint16 `yaml:"kafka" json:"kafka"` MONGODB []uint16 `yaml:"mongodb" json:"mongodb"` - REDIS []uint16 `yaml:"redis" json:"redis"` + MYSQL []uint16 `yaml:"mysql" json:"mysql"` + POSTGRESQL []uint16 `yaml:"postgresql" json:"postgresql"` + REDIS []uint16 `yaml:"redis" json:"redis"` LDAP []uint16 `yaml:"ldap" json:"ldap"` DIAMETER []uint16 `yaml:"diameter" json:"diameter"` } diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/kubeshark-cli-53.2.3/helm-chart/Chart.yaml new/kubeshark-cli-53.2.5/helm-chart/Chart.yaml --- old/kubeshark-cli-53.2.3/helm-chart/Chart.yaml 2026-04-20 15:39:25.000000000 +0200 +++ new/kubeshark-cli-53.2.5/helm-chart/Chart.yaml 2026-05-01 22:36:38.000000000 +0200 @@ -1,6 +1,6 @@ apiVersion: v2 name: kubeshark -version: "53.2.3" +version: "53.2.5" description: The API Traffic Analyzer for Kubernetes home: https://kubeshark.com keywords: diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/kubeshark-cli-53.2.3/helm-chart/templates/06-front-deployment.yaml new/kubeshark-cli-53.2.5/helm-chart/templates/06-front-deployment.yaml --- old/kubeshark-cli-53.2.3/helm-chart/templates/06-front-deployment.yaml 2026-04-20 
15:39:25.000000000 +0200 +++ new/kubeshark-cli-53.2.5/helm-chart/templates/06-front-deployment.yaml 2026-05-01 22:36:38.000000000 +0200 @@ -92,6 +92,8 @@ value: '{{ default false (((.Values).tap).dashboard).clusterWideMapEnabled }}' - name: REACT_APP_RAW_CAPTURE_ENABLED value: '{{ .Values.tap.capture.raw.enabled | ternary "true" "false" }}' + - name: REACT_APP_ENTRIES_LIMIT + value: '{{ default 300000 (((.Values).tap).dashboard).entriesLimit }}' - name: REACT_APP_SENTRY_ENABLED value: '{{ (include "sentry.enabled" .) }}' - name: REACT_APP_SENTRY_ENVIRONMENT diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/kubeshark-cli-53.2.3/helm-chart/values.yaml new/kubeshark-cli-53.2.5/helm-chart/values.yaml --- old/kubeshark-cli-53.2.3/helm-chart/values.yaml 2026-04-20 15:39:25.000000000 +0200 +++ new/kubeshark-cli-53.2.5/helm-chart/values.yaml 2026-05-01 22:36:38.000000000 +0200 @@ -188,6 +188,7 @@ streamingType: connect-rpc completeStreamingEnabled: true clusterWideMapEnabled: false + entriesLimit: "300000" telemetry: enabled: true resourceGuard: @@ -208,6 +209,8 @@ - icmp - kafka - mongodb + - mysql + - postgresql - redis - ws - ldap @@ -229,6 +232,10 @@ - 9092 mongodb: - 27017 + mysql: + - 3306 + postgresql: + - 5432 redis: - 6379 ldap: diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/kubeshark-cli-53.2.3/manifests/complete.yaml new/kubeshark-cli-53.2.5/manifests/complete.yaml --- old/kubeshark-cli-53.2.3/manifests/complete.yaml 2026-04-20 15:39:25.000000000 +0200 +++ new/kubeshark-cli-53.2.5/manifests/complete.yaml 2026-05-01 22:36:38.000000000 +0200 @@ -4,10 +4,10 @@ kind: NetworkPolicy metadata: labels: - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm name: kubeshark-hub-network-policy 
namespace: default @@ -33,10 +33,10 @@ kind: NetworkPolicy metadata: labels: - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm annotations: name: kubeshark-front-network-policy @@ -60,10 +60,10 @@ kind: NetworkPolicy metadata: labels: - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm annotations: name: kubeshark-dex-network-policy @@ -87,10 +87,10 @@ kind: NetworkPolicy metadata: labels: - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm annotations: name: kubeshark-worker-network-policy @@ -116,10 +116,10 @@ kind: ServiceAccount metadata: labels: - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm name: kubeshark-service-account namespace: default @@ -132,10 +132,10 @@ namespace: default labels: app.kubeshark.com/app: hub - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm stringData: LICENSE: '' @@ -151,10 +151,10 @@ namespace: default labels: app.kubeshark.com/app: hub - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark 
app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm stringData: AUTH_SAML_X509_CRT: | @@ -167,10 +167,10 @@ namespace: default labels: app.kubeshark.com/app: hub - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm stringData: AUTH_SAML_X509_KEY: | @@ -182,10 +182,10 @@ name: kubeshark-nginx-config-map namespace: default labels: - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm data: default.conf: | @@ -252,10 +252,10 @@ namespace: default labels: app.kubeshark.com/app: hub - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm data: POD_REGEX: '.*' @@ -293,7 +293,7 @@ TIMEZONE: ' ' CLOUD_LICENSE_ENABLED: 'true' DUPLICATE_TIMEFRAME: '200ms' - ENABLED_DISSECTORS: 'amqp,dns,http,icmp,kafka,mongodb,redis,ws,ldap,radius,diameter,udp-flow,tcp-flow,udp-conn,tcp-conn' + ENABLED_DISSECTORS: 'amqp,dns,http,icmp,kafka,mongodb,mysql,postgresql,redis,ws,ldap,radius,diameter,udp-flow,tcp-flow,udp-conn,tcp-conn' CUSTOM_MACROS: '{"https":"tls and (http or http2)"}' DISSECTORS_UPDATING_ENABLED: 'true' SNAPSHOTS_UPDATING_ENABLED: 'true' @@ -303,7 +303,7 @@ PCAP_TIME_INTERVAL: '1m' PCAP_MAX_TIME: '1h' PCAP_MAX_SIZE: '500MB' - PORT_MAPPING: '{"amqp":[5671,5672],"diameter":[3868],"http":[80,443,8080],"kafka":[9092],"ldap":[389],"mongodb":[27017],"redis":[6379]}' + PORT_MAPPING: 
'{"amqp":[5671,5672],"diameter":[3868],"http":[80,443,8080],"kafka":[9092],"ldap":[389],"mongodb":[27017],"mysql":[3306],"postgresql":[5432],"redis":[6379]}' RAW_CAPTURE_ENABLED: 'true' RAW_CAPTURE_STORAGE_SIZE: '1Gi' --- @@ -312,10 +312,10 @@ kind: ClusterRole metadata: labels: - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm name: kubeshark-cluster-role-default namespace: default @@ -359,10 +359,10 @@ kind: ClusterRoleBinding metadata: labels: - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm name: kubeshark-cluster-role-binding-default namespace: default @@ -380,10 +380,10 @@ kind: Role metadata: labels: - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm annotations: name: kubeshark-self-config-role @@ -439,10 +439,10 @@ kind: RoleBinding metadata: labels: - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm annotations: name: kubeshark-self-config-role-binding @@ -462,10 +462,10 @@ metadata: labels: app.kubeshark.com/app: hub - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm 
name: kubeshark-hub namespace: default @@ -483,10 +483,10 @@ kind: Service metadata: labels: - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm name: kubeshark-front namespace: default @@ -504,10 +504,10 @@ apiVersion: v1 metadata: labels: - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm annotations: prometheus.io/scrape: 'true' @@ -517,10 +517,10 @@ spec: selector: app.kubeshark.com/app: worker - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm ports: - name: metrics @@ -533,10 +533,10 @@ apiVersion: v1 metadata: labels: - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm annotations: prometheus.io/scrape: 'true' @@ -546,10 +546,10 @@ spec: selector: app.kubeshark.com/app: hub - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm ports: - name: metrics @@ -564,10 +564,10 @@ labels: app.kubeshark.com/app: worker sidecar.istio.io/inject: "false" - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - 
app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm name: kubeshark-worker-daemon-set namespace: default @@ -581,10 +581,10 @@ metadata: labels: app.kubeshark.com/app: worker - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm name: kubeshark-worker-daemon-set namespace: kubeshark @@ -805,10 +805,10 @@ metadata: labels: app.kubeshark.com/app: hub - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm name: kubeshark-hub namespace: default @@ -823,10 +823,10 @@ metadata: labels: app.kubeshark.com/app: hub - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm spec: dnsPolicy: ClusterFirstWithHostNet @@ -936,10 +936,10 @@ metadata: labels: app.kubeshark.com/app: front - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm name: kubeshark-front namespace: default @@ -954,10 +954,10 @@ metadata: labels: app.kubeshark.com/app: front - helm.sh/chart: kubeshark-53.2.3 + helm.sh/chart: kubeshark-53.2.5 app.kubernetes.io/name: kubeshark app.kubernetes.io/instance: kubeshark - app.kubernetes.io/version: "53.2.3" + app.kubernetes.io/version: "53.2.5" app.kubernetes.io/managed-by: Helm spec: containers: @@ -1006,6 +1006,8 @@ value: 'false' - name: 
REACT_APP_RAW_CAPTURE_ENABLED value: 'true' + - name: REACT_APP_ENTRIES_LIMIT + value: '300000' - name: REACT_APP_SENTRY_ENABLED value: 'false' - name: REACT_APP_SENTRY_ENVIRONMENT diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/kubeshark-cli-53.2.3/skills/kfl/SKILL.md new/kubeshark-cli-53.2.5/skills/kfl/SKILL.md --- old/kubeshark-cli-53.2.3/skills/kfl/SKILL.md 2026-04-20 15:39:25.000000000 +0200 +++ new/kubeshark-cli-53.2.5/skills/kfl/SKILL.md 2026-05-01 22:36:38.000000000 +0200 @@ -88,13 +88,15 @@ |------|----------|------|----------| | `http` | HTTP/1.1, HTTP/2 | `redis` | Redis | | `dns` | DNS | `kafka` | Kafka | -| `tls` | TLS/SSL | `amqp` | AMQP | +| `tls` | eBPF TLS interception | `amqp` | AMQP | | `tcp` | TCP | `ldap` | LDAP | | `udp` | UDP | `ws` | WebSocket | | `sctp` | SCTP | `gql` | GraphQL (v1+v2) | | `icmp` | ICMP | `gqlv1` / `gqlv2` | GraphQL version-specific | -| `radius` | RADIUS | `conn` / `flow` | L4 connection/flow tracking | -| `diameter` | Diameter | `tcp_conn` / `udp_conn` | Transport-specific connections | +| `grpc` | gRPC (HTTP/2 sub-protocol) | `mongodb` | MongoDB | +| `mysql` | MySQL | `radius` | RADIUS | +| `diameter` | Diameter | `conn` / `flow` | L4 connection/flow tracking | +| | | `tcp_conn` / `udp_conn` | Transport-specific connections | ## Kubernetes Context @@ -112,6 +114,17 @@ Pod fields fall back to service data when pod info is unavailable, so `dst.pod.namespace` works even for service-level entries. 
+### Summary Name and Namespace + +Convenience variables that pick the best available identity for a peer: + +``` +src.name == "api-gateway" // pod > service > dns > process +dst.name.contains("payment") // works across identity types +src.namespace == "production" // pod namespace, falls back to service +dst.namespace != "kube-system" // exclude system namespace +``` + ### Aggregate Collections Match against any direction (src or dst): @@ -192,8 +205,14 @@ // GraphQL (subset of HTTP) gql && method == "POST" && status_code >= 400 + +// Only eBPF-intercepted TLS traffic (decrypted HTTPS) +tls && http && status_code >= 500 ``` +> **Note on `tls`**: The `tls` flag is an alias for `capture_source == "ebpf_tls"`. +> It indicates traffic captured via eBPF TLS interception, not TLS protocol dissection. + ## DNS Filtering DNS issues are often the hidden root cause of outages. @@ -235,6 +254,40 @@ kafka && kafka_size > 10000 // Large messages ``` +### MongoDB + +``` +mongodb && mongodb_command == "find" // Find operations +mongodb && mongodb_collection == "users" // Collection filtering +mongodb && mongodb_database == "mydb" // Database filtering +mongodb && !mongodb_success // Failed operations +mongodb && mongodb_error_code != 0 // Error code filtering +mongodb && mongodb_total_size > 10000 // Large operations +``` + +### MySQL + +``` +mysql && mysql_command == "COM_QUERY" // SQL queries +mysql && mysql_query.contains("SELECT") // SELECT statements +mysql && mysql_database == "orders_db" // Database filtering +mysql && !mysql_success // Failed queries +mysql && mysql_error_code != 0 // Error code filtering +mysql && mysql_total_size > 10000 // Large queries +``` + +### gRPC + +gRPC is a sub-protocol of HTTP/2. All HTTP variables are also available on gRPC entries. 
+ +``` +grpc && grpc_method == "SayHello" // Method filtering +grpc && grpc_status != 0 // Non-OK status codes +grpc && grpc_status == 14 // UNAVAILABLE +grpc && grpc_method.contains("Create") // Method pattern +grpc && elapsed_time > 1000000 // Slow gRPC calls (>1s) +``` + ### AMQP, LDAP, RADIUS, Diameter ``` @@ -288,7 +341,7 @@ timestamp > timestamp("2026-03-14T22:00:00Z") timestamp >= timestamp("2026-03-14T22:00:00Z") && timestamp <= timestamp("2026-03-14T23:00:00Z") timestamp > now() - duration("5m") // Last 5 minutes -elapsed_time > 2000000 // Older than 2 seconds +elapsed_time > 2000000 // Latency > 2 seconds ``` ## Building Filters: Progressive Narrowing diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/kubeshark-cli-53.2.3/skills/kfl/references/kfl2-reference.md new/kubeshark-cli-53.2.5/skills/kfl/references/kfl2-reference.md --- old/kubeshark-cli-53.2.3/skills/kfl/references/kfl2-reference.md 2026-04-20 15:39:25.000000000 +0200 +++ new/kubeshark-cli-53.2.5/skills/kfl/references/kfl2-reference.md 2026-05-01 22:36:38.000000000 +0200 @@ -39,7 +39,7 @@ | `index` | int | Entry index for stream uniqueness | | `stream` | string | Stream identifier (hex string) | | `timestamp` | timestamp | Event time (UTC), use with `timestamp()` function | -| `elapsed_time` | int | Age since timestamp in microseconds | +| `elapsed_time` | int | Response-request latency in microseconds | | `worker` | string | Worker identifier | ## Cross-Reference Variables @@ -67,13 +67,15 @@ |----------|----------|----------|----------| | `http` | HTTP/1.1, HTTP/2 | `redis` | Redis | | `dns` | DNS | `kafka` | Kafka | -| `tls` | TLS/SSL handshake | `amqp` | AMQP messaging | +| `tls` | eBPF TLS interception | `amqp` | AMQP messaging | | `tcp` | TCP transport | `ldap` | LDAP directory | | `udp` | UDP transport | `ws` | WebSocket | | `sctp` | SCTP streaming | `gql` | GraphQL (v1 or v2) | | `icmp` | ICMP | `gqlv1` | GraphQL v1 only | -| `radius` | RADIUS 
auth | `gqlv2` | GraphQL v2 only | -| `diameter` | Diameter | `conn` | L4 connection tracking | +| `grpc` | gRPC (HTTP/2 sub-protocol) | `gqlv2` | GraphQL v2 only | +| `mongodb` | MongoDB | `mysql` | MySQL | +| `radius` | RADIUS auth | `diameter` | Diameter | +| | | `conn` | L4 connection tracking | | `flow` | L4 flow tracking | `tcp_conn` | TCP connection tracking | | `tcp_flow` | TCP flow tracking | `udp_conn` | UDP connection tracking | | `udp_flow` | UDP flow tracking | | | @@ -123,7 +125,7 @@ | Variable | Type | Description | Example | |----------|------|-------------|---------| -| `tls` | bool | TLS payload detected | | +| `tls` | bool | eBPF TLS interception (alias for `capture_source == "ebpf_tls"`) | | | `tls_summary` | string | TLS handshake summary | `"ClientHello"`, `"ServerHello"` | | `tls_info` | string | TLS connection details | `"TLS 1.3, AES-256-GCM"` | | `tls_request_size` | int | TLS request size in bytes | | @@ -263,6 +265,55 @@ | `diameter_response_length` | int | Response size (0 if absent) | | `diameter_total_size` | int | Sum of request + response | +## MongoDB Variables + +| Variable | Type | Description | Example | +|----------|------|-------------|---------| +| `mongodb` | bool | MongoDB payload detected | | +| `mongodb_command` | string | Operation type | `"find"`, `"insert"`, `"update"`, `"delete"` | +| `mongodb_database` | string | Database name | `"mydb"` | +| `mongodb_collection` | string | Collection name | `"users"` | +| `mongodb_opcode` | string | Operation opcode name | | +| `mongodb_request_size` | int | Request size in bytes | | +| `mongodb_response_size` | int | Response size in bytes | | +| `mongodb_total_size` | int | Combined request + response size | | +| `mongodb_success` | bool | Operation success status | | +| `mongodb_error_code` | int | Error code | | +| `mongodb_error_message` | string | Error description | | +| `mongodb_error_code_name` | string | Named error code | | + +**Example**: `mongodb && mongodb_command == 
"find" && mongodb_collection == "users"` + +## MySQL Variables + +| Variable | Type | Description | Example | +|----------|------|-------------|---------| +| `mysql` | bool | MySQL payload detected | | +| `mysql_command` | string | SQL command name | `"COM_QUERY"`, `"COM_STMT_PREPARE"` | +| `mysql_query` | string | Full SQL query text | `"SELECT * FROM users"` | +| `mysql_database` | string | Active database name | `"orders_db"` | +| `mysql_statement_id` | int | Prepared statement identifier | | +| `mysql_request_size` | int | Request payload size in bytes | | +| `mysql_response_size` | int | Response payload size in bytes | | +| `mysql_total_size` | int | Combined request + response size | | +| `mysql_success` | bool | Response OK status | | +| `mysql_error_code` | int | MySQL error code | | +| `mysql_error_message` | string | Error description | | + +**Example**: `mysql && mysql_query.contains("SELECT") && !mysql_success` + +## gRPC Variables + +gRPC is a sub-protocol of HTTP/2. When `grpc` is true, all HTTP variables are also available. 
+
+| Variable | Type | Description | Example |
+|----------|------|-------------|---------|
+| `grpc` | bool | gRPC payload detected | |
+| `grpc_method` | string | Trailing method name from gRPC :path | `"SayHello"` (from `/helloworld.Greeter/SayHello`) |
+| `grpc_status` | int | gRPC status code from Grpc-Status trailer | `0`=OK, `5`=NOT_FOUND, `14`=UNAVAILABLE; `-1` on non-gRPC |
+
+**Example**: `grpc && grpc_status != 0 && grpc_method.contains("Create")`
+
 ## L4 Connection Tracking Variables
 
 | Variable | Type | Description | Example |
 
@@ -320,6 +371,15 @@
 
 **Example**: `src.service.name == "api-gateway" && dst.pod.namespace == "production"`
 
+### Summary Name and Namespace
+
+| Variable | Type | Description |
+|----------|------|-------------|
+| `src.name` | string | Worker-enriched summary name of source (pod > service > dns > process) |
+| `dst.name` | string | Worker-enriched summary name of destination |
+| `src.namespace` | string | Source namespace with service fallback |
+| `dst.namespace` | string | Destination namespace with service fallback |
+
 ### Aggregate Collections (Non-Directional)
 
 | Variable | Type | Description |

++++++ kubeshark-cli.obsinfo ++++++
--- /var/tmp/diff_new_pack.K3xcRw/_old 2026-05-04 12:55:29.534357014 +0200
+++ /var/tmp/diff_new_pack.K3xcRw/_new 2026-05-04 12:55:29.542357343 +0200
@@ -1,5 +1,5 @@
 name: kubeshark-cli
-version: 53.2.3
-mtime: 1776692365
-commit: 863be8f47ab4a9f0e20967b87fc7b4dfaf95178d
+version: 53.2.5
+mtime: 1777667798
+commit: ab81b0c3a75bc07c19e5092e94775a633408c170

++++++ vendor.tar.gz ++++++
/work/SRC/openSUSE:Factory/kubeshark-cli/vendor.tar.gz /work/SRC/openSUSE:Factory/.kubeshark-cli.new.30200/vendor.tar.gz differ: char 150, line 1
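Editor's note on the gRPC fields added above: `grpc_method` is the trailing segment of the HTTP/2 `:path` pseudo-header (gRPC requests use `/<package>.<Service>/<Method>`), and `grpc_status` comes from the standard `Grpc-Status` trailer. A minimal illustrative sketch of that derivation — not Kubeshark's actual hub code — assuming only the documented conventions:

```python
def grpc_method_from_path(path: str) -> str:
    """Extract the trailing method name from a gRPC :path,
    e.g. "/helloworld.Greeter/SayHello" -> "SayHello"."""
    return path.rsplit("/", 1)[-1]

# Standard gRPC status codes cited in the reference table.
GRPC_STATUS_NAMES = {0: "OK", 5: "NOT_FOUND", 14: "UNAVAILABLE"}

def grpc_status_name(code: int) -> str:
    """Map a Grpc-Status trailer value to its name; -1 marks a
    non-gRPC entry per the reference table above."""
    if code == -1:
        return "NOT_GRPC"
    return GRPC_STATUS_NAMES.get(code, f"CODE_{code}")

print(grpc_method_from_path("/helloworld.Greeter/SayHello"))  # SayHello
print(grpc_status_name(14))                                   # UNAVAILABLE
```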

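Editor's note on the new MySQL dissector (port 3306): the MySQL wire protocol frames every packet with a 3-byte little-endian payload length and a 1-byte sequence id, and client command packets lead with a command byte (`0x03` for `COM_QUERY`, whose remaining payload is the SQL text). A minimal parsing sketch of just that framing — illustrative only, not Kubeshark's worker implementation:

```python
COM_QUERY = 0x03  # command byte for a plain-text SQL query

def parse_mysql_packet(buf: bytes):
    """Split one MySQL packet: 3-byte little-endian payload length,
    1-byte sequence id, then the payload itself."""
    if len(buf) < 4:
        raise ValueError("incomplete packet header")
    length = buf[0] | (buf[1] << 8) | (buf[2] << 16)
    seq = buf[3]
    return length, seq, buf[4:4 + length]

def parse_command(payload: bytes):
    """Return (command_name, sql) for COM_QUERY payloads."""
    if payload and payload[0] == COM_QUERY:
        return "COM_QUERY", payload[1:].decode("utf-8", "replace")
    return None, None

# 20-byte payload (0x14): command byte + "SELECT * FROM users"
pkt = bytes([0x14, 0x00, 0x00, 0x00]) + b"\x03SELECT * FROM users"
length, seq, payload = parse_mysql_packet(pkt)
print(parse_command(payload))  # ('COM_QUERY', 'SELECT * FROM users')
```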