Github user falaki commented on the issue:

    https://github.com/apache/spark/pull/17941
  
    @felixcheung we all know that the SparkR (and, more generally, R) API is not 
ideal when it comes to ETLing unstructured data. For example, we don't have a 
great story for nested data. To work around these limitations, many users ETL 
their data in Python or Scala and then analyze it in R.
    
    With the introduction of sessions, that workflow is partially broken: you 
can still do it, but you need to persist the table. The global temp view solves 
that problem. It exists in PySpark, so I think it deserves to exist in SparkR 
as well.
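
    For reference, the workflow described above can be sketched in PySpark 
(table and session names here are illustrative, not from the PR): one session 
registers a global temp view, and any other session of the same Spark 
application can query it through the `global_temp` database, which is the 
capability the proposed SparkR API would expose.

    ```python
    from pyspark.sql import SparkSession

    # Session 1: ETL the data and register it as a global temp view.
    spark = SparkSession.builder.master("local[1]").appName("etl").getOrCreate()
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])
    df.createGlobalTempView("etl_result")  # "etl_result" is an illustrative name

    # Session 2 (e.g. where the analysis happens): unlike an ordinary temp view,
    # the global temp view is visible here, qualified by the global_temp database.
    other = spark.newSession()
    result = other.sql("SELECT * FROM global_temp.etl_result")
    print(result.count())
    ```

    Without the global view, the second session would have to read a persisted 
table instead, which is the extra step the comment refers to.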

