[ https://issues.apache.org/jira/browse/SPARK-27124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789388#comment-16789388 ]

Hyukjin Kwon commented on SPARK-27124:
--------------------------------------

The way of reaching it would be the same as the Python implementation: Py4J 
allows full access to the JVM. Of course, it's hacky - I wasn't trying to say 
this is an official way of using it.

{code}
>>> spark._jvm.org.apache.spark.sql.avro.SchemaConverters.toSqlType(spark._jvm.org.apache.avro.Schema.Parser().parse("""{"type": "int", "name": "fieldA"}""")).toString()
u'SchemaType(IntegerType,false)'
{code}

Usually the signatures are matched between the Scala and Python sides. I 
suspect you'd want to expose a function on the PySpark side that takes a 
JSON-formatted Avro schema, right?
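
For reference, such a PySpark-side wrapper could be a thin Py4J shim over the 
Scala API. Below is a minimal sketch, not a definitive implementation: the 
function name {{avro_to_sql_type}} is made up, and it assumes an active 
SparkSession named {{spark}} with the spark-avro module on the classpath.

{code}
from pyspark.sql.types import _parse_datatype_json_string

def avro_to_sql_type(spark, avro_schema_json):
    """Hypothetical helper: convert an Avro schema (JSON string) into a
    Spark SQL DataType by delegating to the JVM-side SchemaConverters."""
    # Parse the Avro JSON schema on the JVM via Py4J
    parser = spark._jvm.org.apache.avro.Schema.Parser()
    avro_schema = parser.parse(avro_schema_json)
    # SchemaConverters.toSqlType returns a SchemaType(dataType, nullable)
    schema_type = spark._jvm.org.apache.spark.sql.avro.SchemaConverters.toSqlType(avro_schema)
    # Round-trip the JVM DataType back to a Python DataType via its JSON form
    return _parse_datatype_json_string(schema_type.dataType().json())
{code}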


> Expose org.apache.spark.sql.avro.SchemaConverters as developer API
> ------------------------------------------------------------------
>
>                 Key: SPARK-27124
>                 URL: https://issues.apache.org/jira/browse/SPARK-27124
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark, SQL
>    Affects Versions: 3.0.0
>            Reporter: Gabor Somogyi
>            Priority: Minor
>
> org.apache.spark.sql.avro.SchemaConverters provides extremely useful APIs to 
> convert schemas between Spark SQL and Avro. This is reachable from the Scala 
> side but not from PySpark. I suggest adding this as a developer API to ease 
> development for PySpark users.


