GitHub user NiharS opened a pull request:
https://github.com/apache/spark/pull/23026
[SPARK-25960][k8s] Support subpath mounting with Kubernetes
## What changes were proposed in this pull request?
This PR adds configurations to use subpaths with Spark on k8s. Subpaths
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22504#discussion_r223512824
--- Diff: core/src/main/scala/org/apache/spark/internal/Logging.scala ---
@@ -192,7 +211,15 @@ private[spark] object Logging
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22539
Could you edit your title to include the jira number and component?
e.g. [SPARK-25517][Core] Detect ...
Helps with bookkeeping, plus it'll add a link to the jira so people can see
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
Hi, just want to follow up on this and see if anyone has additional
comments/questions/issues. Please feel free to ping me and I'll respond as soon
as I can
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
I feel like it should be unrelated as well. It's strange that I failed
tests in the same suite twice in a row, and that no other recent build has
failed that suite, but I tried running locally
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
Logs suddenly cut off again without any exceptions; I don't think this one is
a code error either. Could someone retest this please?
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r216210046
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -136,6 +136,26 @@ private[spark] class Executor(
// for fetching remote
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r216209462
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -240,6 +240,19 @@ private[spark] object Utils extends Logging
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r216112535
--- Diff: core/src/test/java/org/apache/spark/ExecutorPluginSuite.java ---
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
If I want to add more testing plugins (i.e. to test that a plugin failing
to shut down doesn't affect other plugins), is it stylistically ok for me to
keep throwing them at the bottom
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r216053464
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -219,6 +236,13 @@ private[spark] class Executor(
heartbeater.shutdown
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
^This commit is to show what that looks like. I'm still looking into how to
properly barrier the plugins after calling `init` from a new thread (since
`join`ing the thread requires the thread to die
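One way to barrier on init without needing the thread to die is a `CountDownLatch` instead of `join`. A minimal hypothetical sketch (the `Plugin` interface and `initAll` helper here are stand-ins for the PR's actual code, not copied from it):

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;

// Stand-in for the plugin interface under discussion.
interface Plugin {
  void init();
}

class PluginRunner {
  // Runs each plugin's init() on its own thread, then blocks until every
  // init() has returned -- without requiring the threads themselves to exit,
  // which is what join() would demand.
  static void initAll(List<Plugin> plugins) throws InterruptedException {
    CountDownLatch ready = new CountDownLatch(plugins.size());
    for (Plugin p : plugins) {
      Thread t = new Thread(() -> {
        try {
          p.init();
        } finally {
          ready.countDown(); // signal completion even if init() throws
        }
      });
      t.setDaemon(true);
      t.start();
    }
    ready.await(); // barrier: all init() calls have finished
  }
}
```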
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
I tested @vanzin's suggestion of storing the classloader, changing it to
replClassLoader, and changing back after init (and then changing again for
shutdown). It's working well (and this time I made
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r215760787
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -218,6 +244,8 @@ private[spark] class Executor
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
@mridulm I'm inclined to agree. I was doing some extra testing on separate
thread and it seems to be performing well. I'll push this shortly. I'm going to
also test setting the class loader
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
I'll change the config documentation to specify YARN only, hopefully it's
not a huge issue.
It seems like the line
`Thread.currentThread().setContextClassLoader(replClassLoader)` is causing
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
You're right, I ran in local-cluster and it exited very quickly citing
executors shutting down after not being able to find my test plugin. Although,
the logs say that it does use
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
I believe this is another glitch, not from my changes. Could someone retest
this please?
---
-
To unsubscribe, e-mail: reviews
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
Tried on a different machine and those tests pass, here's hoping
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
These tests are failing locally for me, both with my code and without on a
clean pull of master
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
Can't be a coincidence... it's failing two pyspark streaming tests,
test_binary_records_stream and test_text_file_stream (same two as earlier; I
misanalyzed it when I said it was ML stuff). I'm running
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r214418189
--- Diff: core/src/main/java/org/apache/spark/ExecutorPlugin.java ---
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
I'm going to commit the fixes recommended by @vanzin soon. Then, I think
the biggest remaining concern is the threads. I'm okay with adding a `start`
and `stop`, but I'm curious as to whether those
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r213837582
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -130,6 +130,14 @@ private[spark] class Executor(
private val
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r213831980
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -130,6 +130,14 @@ private[spark] class Executor(
private val
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r213828422
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -130,6 +130,14 @@ private[spark] class Executor(
private val
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r213820297
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -130,6 +130,14 @@ private[spark] class Executor(
private val
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
I don't think this test failure was caused by my changes, it failed a ML
test in python due to a worker crashing. Couldn't find anything in the logs
indicating that my changes led to this issue
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r213150133
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -130,6 +130,16 @@ private[spark] class Executor(
private val
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r213140764
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -130,6 +130,16 @@ private[spark] class Executor(
private val
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r212781171
--- Diff: core/src/test/java/org/apache/spark/ExecutorPluginSuite.java ---
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r212780252
--- Diff: core/src/test/java/org/apache/spark/ExecutorPluginSuite.java ---
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r212778867
--- Diff: core/src/test/java/org/apache/spark/ExecutorPluginSuite.java ---
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r212777617
--- Diff: core/src/main/java/org/apache/spark/ExecutorPlugin.java ---
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user NiharS commented on a diff in the pull request:
https://github.com/apache/spark/pull/22192#discussion_r212777267
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -130,6 +130,16 @@ private[spark] class Executor(
private val
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
(to my understanding this issue isn't part of my change, I checked other
pulls that had this error around the same time and those also had similar
outputs and lack of specific errors
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22192
retest this please
GitHub user NiharS opened a pull request:
https://github.com/apache/spark/pull/22192
[SPARK-24918] Executor Plugin API
A continuation of @squito's executor plugin task. At his request I took his
code, added testing, and moved the plugin initialization to a separate thread
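The plugin surface discussed in this thread is small. A rough approximation of the shape (the exact signatures in the PR may differ; `MetricsPlugin` is an invented example, not part of the patch):

```java
// Approximation of the SPARK-24918 executor plugin interface: users ship a
// class on the executor classpath and Spark calls these hooks.
interface ExecutorPlugin {
  // Called once when the executor starts (per this thread, on its own thread).
  default void init() {}

  // Called on executor shutdown; one plugin failing here should not prevent
  // other plugins from shutting down.
  default void shutdown() {}
}

// A trivial plugin a user might implement, e.g. to start a metrics agent.
class MetricsPlugin implements ExecutorPlugin {
  private volatile boolean started = false;

  @Override public void init() { started = true; }
  @Override public void shutdown() { started = false; }

  boolean isStarted() { return started; }
}
```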
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22114
They pass on my machine :(
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22114
Tried with a significantly larger input, both with and without the change.
They ran in just about the same time
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/22114
Working on a better test now! With my small program there was definitely no
noticeable change in runtime. The memory monitor was running too, though, so I
did have some external factors. I'll try
GitHub user NiharS opened a pull request:
https://github.com/apache/spark/pull/22114
[SPARK-24938][Core] Prevent Netty from using onheap memory for headers
without regard for configuration
…ffer type instead of immediately opening a pool of onheap memory for
headers
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/21885
Chatted with @squito about this. From what I understood from that
discussion, ExternalShuffleService shouldn't be controlled by configurations
passed into a spark application as it is its own
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/21885
Thanks for the review and feedback! I made the changes, except for moving
the if clause to the same line as "yarn"; unfortunately, that does make the
line 104 characters
GitHub user NiharS opened a pull request:
https://github.com/apache/spark/pull/21885
[SPARK-24926] Ensure numCores is used consistently in all netty
configurations
## What changes were proposed in this pull request?
Netty could just ignore user-provided configurations
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/21807
Hey @mauropalsgraaf just wanted to check in on this. Have you run into any
additional issues or have any questions for this fix
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/21807
New to SQL, but it seems like the query
`SELECT 1 LIMIT CAST('1' AS INT)`
should work, right? I tried both on Spark without your change and the
W3Schools SQL tester and it's
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/21765
Rebased onto SPARK-24813 so hopefully tests will work now
GitHub user NiharS opened a pull request:
https://github.com/apache/spark/pull/21765
[MINOR][CORE] Add test cases for RDD.cartesian
## What changes were proposed in this pull request?
While looking through the codebase, it appeared that the scala code for
RDD.cartesian