Messages by Thread
- Re: [PR] [SPARK-46036][SQL] Removing error-class from raise_error function [spark] (via GitHub)
- [PR] [SPARK-56308] Remove invalid `log4j2.contextSelector` from test `log4j2.properties` [spark-kubernetes-operator] (via GitHub)
- [PR] [SPARK-56307] Upgrade `log4j` to 2.25.4 [spark] (via GitHub)
- [I] Spark Declarative Pipelines and Unity Catalog [spark] (via GitHub)
- Re: [PR] Change volumeMounts and volumes types to array [spark-kubernetes-operator] (via GitHub)
- Re: [PR] [SPARK-55441][SQL] Types Framework - Phase 1c - Client Integration [spark] (via GitHub)
- [I] Cannot use deploy mode: cluster from python code. [spark-kubernetes-operator] (via GitHub)
- [PR] [SPARK-56303] Use `SparkLauncher.NO_RESOURCE` instead of `Option.empty()` in `SparkAppSubmissionWorker` [spark-kubernetes-operator] (via GitHub)
- [PR] [SPARK-56137][UI] Add regression tests for SQL tab DataTables migration [spark] (via GitHub)
- [PR] [SPARK-56303][K8S] Add Java-friendly factory methods to `JavaMainAppResource` [spark] (via GitHub)
- Re: [PR] [SPARK-55450][SS][PYTHON][DOCS] Document admission control in PySpark streaming data sources [spark] (via GitHub)
- [PR] [WIP][SPARK-56302] Free task result memory eagerly during serialization/deserialization [spark] (via GitHub)
- Re: [PR] [SPARK-56155][SQL] Collect_list/collect_set sql() function includes "RESPECT NULLS" [spark] (via GitHub)
- [PR] Collation-aware PIVOT [spark] (via GitHub)
- [PR] [SPARK-56176][SPARK-56232][SQL] V2-native ANALYZE TABLE/COLUMN with stats propagation to FileScan [spark] (via GitHub)
- [PR] [SPARK-56279][CORE][4.1] Enable zero-copy sendfile for FileRegion in native Netty transports [spark] (via GitHub)
- [PR] [SPARK-56300] Add Java-friendly factory method to `KubernetesDriverSpec` [spark] (via GitHub)
- [PR] [SPARK-56299] Fix `Dockerfile` for `jdeps` to use 21 [spark-kubernetes-operator] (via GitHub)
- [PR] [MINOR][PYTHON] Fix typos in `error-conditions.json` [spark] (via GitHub)
- [PR] [SPARK-49793][PYTHON][TESTS] Reenable test_cachine for predict_batch_udf [spark] (via GitHub)
- [PR] claude learnings [spark] (via GitHub)
- [PR] [SPARK-56298] Enable `spark.master.rest.virtualThread.enabled` by default [spark] (via GitHub)
- [PR] [SPARK-56297] Use Virtual Threads (JEP 444) for unbounded reconciliation thread pool [spark-kubernetes-operator] (via GitHub)
- [PR] [SPARK-56296][SQL] Pivot createTableLike to pass full TableInfo including schema, partitioning, constraints, and owner [spark] (via GitHub)
- Re: [PR] [SPARK-56296][SQL] Pivot createTableLike to pass full TableInfo including schema, partitioning, constraints, and owner [spark] (via GitHub)
- Re: [PR] [SPARK-56296][SQL] Pivot createTableLike to pass full TableInfo including schema, partitioning, constraints, and owner [spark] (via GitHub)
- Re: [PR] [SPARK-56296][SQL] Pivot createTableLike to pass full TableInfo including schema, partitioning, constraints, and owner [spark] (via GitHub)
- Re: [PR] [SPARK-56296][SQL] Pivot createTableLike to pass full TableInfo including schema, partitioning, constraints, and owner [spark] (via GitHub)
- [PR] [SPARK-56295][PYTHON] Add Java error classes to Python side [spark] (via GitHub)
- [PR] [SPARK-49543][SQL] Add SHOW COLLATIONS command [spark] (via GitHub)
- [PR] [SPARK-56294] Apply exhaustive `switch` for `ResourceRetainPolicy` and `ClusterStateSummary` [spark-kubernetes-operator] (via GitHub)
- [PR] [SPARK-56293] Make `BaseAppDriverObserver` abstract class `sealed` [spark-kubernetes-operator] (via GitHub)
- [PR] [SPARK-56292] Make `(App|Cluster)ReconcileStep` abstract classes `sealed` [spark-kubernetes-operator] (via GitHub)
- [PR] [SPARK-56291] Make `BaseStateSummary` a `sealed` interface [spark-kubernetes-operator] (via GitHub)
- [PR] [SPARK-56290] Rename `build-macos-26-swift(62 -> 63)` in CI [spark-connect-swift] (via GitHub)
- [PR] [SPARK-56289] Upgrade minimum Java runtime version to 21 [spark-kubernetes-operator] (via GitHub)
- [PR] [SPARK-56288] Upgrade `gRPC Swift NIO Transport` to 2.6.2 [spark-connect-swift] (via GitHub)
- [PR] [SPARK-56250][SQL][FOLLOW][4.1] Remove confusing defensive code in SortExec.rowSorter and add warning comment [spark] (via GitHub)
- [PR] [SPARK-56253][PYTHON][CONNECT] Make spark.read.json accept DataFrame input [spark] (via GitHub)