[jira] [Created] (SPARK-21070) Pick up cloudpickle upgrades from cloudpickle python module
Kyle Kelley created SPARK-21070:
-----------------------------------

Summary: Pick up cloudpickle upgrades from cloudpickle python module
Key: SPARK-21070
URL: https://issues.apache.org/jira/browse/SPARK-21070
Project: Spark
Issue Type: Improvement
Components: PySpark
Affects Versions: 2.1.1, 2.1.0, 2.0.0
Reporter: Kyle Kelley
Priority: Minor

--
This message was sent by Atlassian JIRA (v6.4.14#64029)

To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Created] (SPARK-20360) Create repr functions for interpreters to use
Kyle Kelley created SPARK-20360:
-----------------------------------

Summary: Create repr functions for interpreters to use
Key: SPARK-20360
URL: https://issues.apache.org/jira/browse/SPARK-20360
Project: Spark
Issue Type: Improvement
Components: PySpark
Affects Versions: 2.1.0, 2.0.0
Reporter: Kyle Kelley
Priority: Minor

Create `_repr_html_` for SparkContext, DataFrames, and other objects to target rich display in IPython. This will improve the user experience in Jupyter, Hydrogen, nteract, and any other frontends that use this namespace.

http://ipython.readthedocs.io/en/stable/config/integrating.html

I made this issue only target 2.x since it's an enhancement on the current experience.
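The rich-display hook proposed in SPARK-20360 can be sketched without Spark at all: IPython-compatible frontends simply call `_repr_html_()` on a result object when the method exists, and render the returned HTML instead of the plain repr. The `FakeDataFrame` class below is purely illustrative, not Spark's actual API; it only shows the protocol an implementation would follow.

```python
# Sketch of the IPython rich-display protocol this issue proposes.
# Frontends such as Jupyter, Hydrogen, and nteract call _repr_html_
# on a displayed object if the method exists; the returned HTML
# replaces the plain-text repr. FakeDataFrame is a hypothetical stand-in.

class FakeDataFrame:
    def __init__(self, columns, rows):
        self.columns = columns
        self.rows = rows

    def __repr__(self):
        # Plain-text fallback used by frontends without rich display.
        return f"FakeDataFrame({self.columns!r}, {len(self.rows)} rows)"

    def _repr_html_(self):
        # Rich HTML rendering picked up automatically by IPython.
        header = "".join(f"<th>{c}</th>" for c in self.columns)
        body = "".join(
            "<tr>" + "".join(f"<td>{v}</td>" for v in row) + "</tr>"
            for row in self.rows
        )
        return f"<table><tr>{header}</tr>{body}</table>"

df = FakeDataFrame(["name", "age"], [("alice", 1), ("bob", 2)])
html = df._repr_html_()
```

Because the protocol is duck-typed, Spark objects would gain rich display in every one of the mentioned frontends just by defining this one method, with `__repr__` still serving plain terminals.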
[jira] [Commented] (SPARK-19094) Plumb through logging/error messages from the JVM to Jupyter PySpark
[ https://issues.apache.org/jira/browse/SPARK-19094?focusedCommentId=15898114&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15898114 ]

Kyle Kelley commented on SPARK-19094:
-------------------------------------

Super interested in this, as it's been confusing for our users. I've thought about making an alternate endpoint for a kernel to get logs out of, but it would be much better to re-route these logs so that the Python kernel can handle them directly.

> Plumb through logging/error messages from the JVM to Jupyter PySpark
>
> Key: SPARK-19094
> URL: https://issues.apache.org/jira/browse/SPARK-19094
> Project: Spark
> Issue Type: Improvement
> Components: PySpark
> Reporter: holdenk
> Priority: Trivial
>
> Jupyter/IPython notebooks work by overriding sys.stdout & sys.stderr; as a result, the error messages that show up in Jupyter/IPython are often missing the related logs - which are often more useful than the exception itself. This could make it easier for Python developers getting started with Spark on their local laptops to debug their applications, since otherwise they need to remember to keep going back to the terminal where they launched the notebook. One counterpoint to this is that Spark's logging is fairly verbose, but since we provide the ability for the user to tune the log messages from within the notebook, that should be OK.
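The mechanism the issue describes is easy to reproduce without Spark: the notebook kernel replaces `sys.stdout` with its own object, so Python-level `print()` is captured, but anything the JVM subprocess writes straight to file descriptor 1 bypasses the replacement and lands in the terminal instead. A minimal sketch of that split, with `io.StringIO` standing in for the kernel's capturing stream:

```python
# Why JVM logs go missing in notebooks: the kernel swaps sys.stdout for
# its own object, so Python-level print() is captured, while writes made
# directly to file descriptor 1 (as a JVM subprocess does) bypass it.
import io
import os
import sys

captured = io.StringIO()
real_stdout = sys.stdout
sys.stdout = captured  # roughly what a notebook kernel does
try:
    print("python-level message")       # goes through sys.stdout: captured
    os.write(1, b"fd-level message\n")  # raw fd write: bypasses the capture
finally:
    sys.stdout = real_stdout

seen = captured.getvalue()
```

Plumbing JVM logs through, as the issue proposes, means getting them into the Python process (e.g. over the existing Py4J bridge) so they pass through the overridden `sys.stdout`/`sys.stderr` rather than the raw file descriptors.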
[jira] [Created] (SPARK-6883) Fork pyspark's cloudpickle as a separate dependency
Kyle Kelley created SPARK-6883:
-----------------------------------

Summary: Fork pyspark's cloudpickle as a separate dependency
Key: SPARK-6883
URL: https://issues.apache.org/jira/browse/SPARK-6883
Project: Spark
Issue Type: Improvement
Components: PySpark
Reporter: Kyle Kelley

IPython, pyspark, and picloud/multyvac/cloudpipe all rely on cloudpickle from various sources (cloud, pyspark, and multyvac, respectively). It would be great to have this as a separately maintained project that can:

* Work with Python 3
* Add tests!
* Use higher-order pickling (when on Python 3)
* Be installed with pip

We're starting this off at the PyCon sprints under https://github.com/cloudpipe/cloudpickle. We'd like to coordinate with PySpark to make it work across all of the above-mentioned projects.
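The reason all of these projects carry a cloudpickle variant in the first place: the stdlib `pickle` serializes functions by reference (module plus qualified name), so lambdas and interactively defined closures cannot be pickled at all, yet shipping exactly those closures to remote workers is PySpark's bread and butter. A small demonstration of the stdlib limitation:

```python
# The stdlib pickle stores functions by reference (module + qualname),
# so a lambda -- whose qualname is "<lambda>" and which can't be looked
# up by name -- fails to pickle. cloudpickle serializes the underlying
# code object instead, which is why PySpark bundles its own copy to ship
# user closures to executors. (cloudpickle itself is not imported here,
# since it may not be installed.)
import pickle

square = lambda x: x * x

try:
    pickle.dumps(square)
    stdlib_pickle_ok = True
except (pickle.PicklingError, AttributeError):
    stdlib_pickle_ok = False

# Ordinary data still pickles fine with the stdlib.
roundtripped = pickle.loads(pickle.dumps([1, 2, 3]))
```

Factoring this by-value pickling logic out into one shared, pip-installable project is exactly what the issue proposes, so that Python 3 support and tests land once instead of in three forks.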