[ https://issues.apache.org/jira/browse/SPARK-24918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16961672#comment-16961672 ]
Brandon edited comment on SPARK-24918 at 10/29/19 6:09 AM:
-----------------------------------------------------------

[~nsheth] Placing the plugin class inside a jar and passing it via `--jars` to spark-submit should be sufficient, right? It seems this is not enough to make the class visible to the executor. I have had to explicitly add this jar to `spark.executor.extraClassPath` for plugins to load correctly. (Using Spark 2.4.4)

> Executor Plugin API
> -------------------
>
>                 Key: SPARK-24918
>                 URL: https://issues.apache.org/jira/browse/SPARK-24918
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: Imran Rashid
>            Assignee: Nihar Sheth
>            Priority: Major
>              Labels: SPIP, memory-analysis
>             Fix For: 2.4.0
>
> It would be nice if we could specify an arbitrary class to run within each executor for debugging and instrumentation. It's hard to do this currently because:
>
> a) you have no idea when executors will come and go with DynamicAllocation, so you don't have a chance to run custom code before the first task
>
> b) even with static allocation, you'd have to change the code of your Spark app itself to run a special task to "install" the plugin, which is often tough in production cases, where those maintaining regularly running applications might not even know how to make changes to the application.
>
> For example, https://github.com/squito/spark-memory could be used in a debugging context to understand memory use, just by re-running an application with extra command line arguments (as opposed to rebuilding Spark).
>
> I think one tricky part here is just deciding the API, and how it's versioned.
> Does it just get created when the executor starts, and that's it? Or does it get more specific events, like task start, task end, etc.? Would we ever add more events? It should definitely be a {{DeveloperApi}}, so breaking compatibility would be allowed ... but still should be avoided. We could create a base class that has no-op implementations, or explicitly version everything.
>
> Note that this is not needed in the driver, as we already have SparkListeners (even if you don't care about the SparkListenerEvents and just want to inspect objects in the JVM, it's still good enough).

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
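As context for the API shape debated in the description (lifecycle-only callbacks vs. richer events, and a no-op base to preserve compatibility), here is a minimal sketch. The interface that shipped with this issue is `org.apache.spark.ExecutorPlugin` in spark-core; it is redeclared locally below only so the sketch compiles on its own, and `HeapSamplerPlugin` (its name, fields, and output) is purely illustrative, not part of Spark.

```java
// Stand-in for the interface discussed in this issue. The real one shipped
// as org.apache.spark.ExecutorPlugin in spark-core (Spark 2.4); it is
// redeclared here only so this sketch compiles on its own.
interface ExecutorPlugin {
    // Default no-op implementations mean new callbacks could be added later
    // without breaking existing plugins -- the "base class that has no-op
    // implementations" option from the description.
    default void init() {}
    default void shutdown() {}
}

// Illustrative plugin (hypothetical name): samples JVM heap usage when the
// executor starts, without touching the Spark application's own code.
public class HeapSamplerPlugin implements ExecutorPlugin {
    volatile boolean started = false;

    @Override
    public void init() {
        started = true;
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        System.out.println("HeapSamplerPlugin: executor up, ~" + usedMb + " MB heap in use");
    }

    @Override
    public void shutdown() {
        started = false;
        System.out.println("HeapSamplerPlugin: executor shutting down");
    }

    public static void main(String[] args) {
        // Simulate the executor lifecycle for a quick local check.
        HeapSamplerPlugin plugin = new HeapSamplerPlugin();
        plugin.init();
        plugin.shutdown();
    }
}
```

In the 2.4 release the plugin class is activated by listing it in the `spark.executor.plugins` configuration; as the comment above reports, shipping the jar with `--jars` alone may not be enough, and adding it to `spark.executor.extraClassPath` can be needed for the class to load on executors.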