Github user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/19630
  
    OK, mine was done with this diff:
    
    ```diff
    --- a/core/src/main/scala/org/apache/spark/api/python/PythonWorkerFactory.scala
    +++ b/core/src/main/scala/org/apache/spark/api/python/PythonWorkerFactory.scala
    @@ -38,7 +38,7 @@ private[spark] class PythonWorkerFactory(pythonExec: String, envVars: Map[String
       // (pyspark/daemon.py) and tell it to fork new workers for our tasks. This daemon currently
       // only works on UNIX-based systems now because it uses signals for child management, so we can
       // also fall back to launching workers (pyspark/worker.py) directly.
    -  val useDaemon = !System.getProperty("os.name").startsWith("Windows")
    +  val useDaemon = false
     
       var daemon: Process = null
       val daemonHost = InetAddress.getByAddress(Array(127, 0, 0, 1))
    ```
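
    The daemon (pyspark/daemon.py) forks its workers with `os.fork()`, so a wrapper set through `PYSPARK_PYTHON` runs only for the daemon itself, never for the workers it forks; that is presumably why `useDaemon` has to be turned off to get per-worker coverage. A minimal sketch of the mechanism (illustrative only, not Spark's code):
    
    ```bash
    # The forked child keeps the parent's process image; no wrapper script
    # (and no new `coverage run`) is executed on its behalf.
    python -c 'import os; pid = os.fork(); print("forked worker, pid", os.getpid()) if pid == 0 else os.waitpid(pid, 0)'
    ```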
    
    ```bash
    pip install coverage
    # Build Spark (http://spark.apache.org/docs/latest/building-spark.html)
    rm python/lib/pyspark.zip
    rm -fr .coverage
    rm -fr coverage_html
    # Wrapper that runs any Python invocation under coverage; the shebang must
    # be the very first line, and "$@" preserves argument quoting.
    printf '#!/usr/bin/env bash\ncoverage run -p "$@"\n' > coverage_python
    chmod 755 coverage_python
    # Run the tests with PYSPARK_PYTHON pointing at the wrapper; -p writes one
    # data file per worker process, combined below.
    PATH=`pwd`:$PATH PYSPARK_PYTHON=coverage_python SPARK_TESTING=1 bin/pyspark pyspark.sql.tests VectorizedUDFTests
    coverage combine
    coverage html -d coverage_html -i
    # Open the report (macOS `open`; otherwise open index.html in your browser).
    open coverage_html/index.html
    ```
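
    For what it's worth, coverage.py also has documented subprocess support that could replace the wrapper script: a `sitecustomize.py` on the workers' `PYTHONPATH` that calls `coverage.process_startup()`, with `COVERAGE_PROCESS_START` pointing at a config that sets `parallel = true`. A sketch, with the file locations being my assumptions and untested against this setup:
    
    ```bash
    # sitecustomize is imported automatically at interpreter startup, so every
    # worker starts coverage on its own, without any wrapper script.
    printf 'import coverage\ncoverage.process_startup()\n' > sitecustomize.py
    printf '[run]\nparallel = true\n' > .coveragerc
    PYTHONPATH=`pwd`:$PYTHONPATH COVERAGE_PROCESS_START=`pwd`/.coveragerc \
      SPARK_TESTING=1 bin/pyspark pyspark.sql.tests VectorizedUDFTests
    coverage combine
    ```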

