[jira] [Commented] (SPARK-26494) [Spark SQL] Using Spark to read the Oracle TIMESTAMP(6) WITH LOCAL TIME ZONE type fails: type can't be found

2020-01-10 Thread Jeff Evans (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-26494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013093#comment-17013093
 ] 

Jeff Evans commented on SPARK-26494:


To be clear, this type represents an instant in time.  From [the 
docs|https://docs.oracle.com/database/121/SUTIL/GUID-CB5D2124-D9AE-4C71-A83D-DFE071FE3542.htm]:

{quote}The TIMESTAMP WITH LOCAL TIME ZONE data type is another variant of 
TIMESTAMP that includes a time zone offset in its value. Data stored in the 
database is normalized to the database time zone, and time zone displacement is 
not stored as part of the column data. When the data is retrieved, it is 
returned in the user's local session time zone. It is specified as 
follows:{quote}

So it's really almost the same as a {{TIMESTAMP}}; the only difference is the 
automatic time zone conversion (from the client session's zone to the database 
server's zone on write, and back again on read).  But that conversion is 
entirely orthogonal to Spark; the type should simply be treated like a 
{{TIMESTAMP}}.
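The point that the stored value is an instant can be illustrated with a small, purely illustrative {{java.time}} sketch (JVM standard library, not Oracle code; the two zones are hypothetical stand-ins for a database zone and a session zone): the rendering changes with the zone, but the underlying value never does.

```java
import java.time.Instant;
import java.time.ZoneId;

public class LocalTzDemo {
    public static void main(String[] args) {
        // The stored value is a fixed instant; only its rendering varies
        // with the zone (e.g. the database zone vs. a user's session zone).
        Instant stored = Instant.parse("2020-01-10T12:00:00Z");

        // Rendered in two different (illustrative) zones:
        System.out.println(stored.atZone(ZoneId.of("America/Chicago")));
        System.out.println(stored.atZone(ZoneId.of("Asia/Shanghai")));

        // Both renderings still denote the same instant:
        System.out.println(stored.atZone(ZoneId.of("America/Chicago")).toInstant()
                .equals(stored.atZone(ZoneId.of("Asia/Shanghai")).toInstant())); // true
    }
}
```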

> [Spark SQL] Using Spark to read the Oracle TIMESTAMP(6) WITH LOCAL TIME ZONE 
> type fails: type can't be found
> --
>
> Key: SPARK-26494
> URL: https://issues.apache.org/jira/browse/SPARK-26494
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: kun'qin 
>Priority: Minor
>
> Using Spark to read the Oracle TIMESTAMP(6) WITH LOCAL TIME ZONE type fails: 
> the type can't be found.
> When a column's data type is TIMESTAMP(6) WITH LOCAL TIME ZONE, the sqlType 
> value passed to the getCatalystType function in the JdbcUtils class is -102.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-26494) [Spark SQL] Using Spark to read the Oracle TIMESTAMP(6) WITH LOCAL TIME ZONE type fails: type can't be found

2020-01-01 Thread Jarek Jarcec Cecho (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-26494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006564#comment-17006564
 ] 

Jarek Jarcec Cecho commented on SPARK-26494:


I believe I can explain this JIRA a bit further, as I've recently hit it 
myself. If one reads from Oracle over JDBC and the source table has a column of 
type {{TIMESTAMP WITH LOCAL TIME ZONE}}, Spark ends up with this exception:

{code}
Unrecognized SQL type -102
java.sql.SQLException: Unrecognized SQL type -102
at 
org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$getCatalystType(JdbcUtils.scala:246)
at 
org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$8.apply(JdbcUtils.scala:316)
at 
org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$8.apply(JdbcUtils.scala:316)
at scala.Option.getOrElse(Option.scala:121)
at 
org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.getSchema(JdbcUtils.scala:315)
at 
org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:63)
at 
org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:210)
at 
org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.apply(JDBCRelation.scala:225)
at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:312)
{code}
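For context on the {{-102}} in the trace above: it is Oracle's vendor-specific type code for {{TIMESTAMP WITH LOCAL TIME ZONE}} ({{oracle.jdbc.OracleTypes.TIMESTAMPLTZ}}), and it does not correspond to any constant in the standard {{java.sql.Types}}, which is why {{JdbcUtils.getCatalystType}} falls through to the "Unrecognized SQL type" error. A quick self-contained check (reflection over the standard constants; no Oracle driver needed):

```java
import java.lang.reflect.Field;

public class VendorCodeCheck {
    // True if `code` equals one of the public java.sql.Types constants.
    static boolean isStandardSqlType(int code) {
        for (Field f : java.sql.Types.class.getFields()) {
            try {
                if (f.getInt(null) == code) return true;
            } catch (IllegalAccessException e) {
                // java.sql.Types fields are public static ints; not expected
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isStandardSqlType(java.sql.Types.TIMESTAMP)); // true
        System.out.println(isStandardSqlType(-102)); // false: Oracle vendor code
    }
}
```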

The use case is that I'm a user who does not own the source table (and thus 
have no control over its schema), yet I need to load it into the Spark 
environment. I wonder what your thoughts are on why this type makes no sense 
in Spark, [~srowen]?
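One possible workaround for that use case (a sketch of an approach, not an official fix) would be a custom {{org.apache.spark.sql.jdbc.JdbcDialect}} whose {{getCatalystType}} override maps {{-102}} to {{TimestampType}}, registered via {{JdbcDialects.registerDialect}} before calling {{spark.read.jdbc}}. The core mapping decision, stripped of the Spark dependencies so it stands alone (the string stands in for {{org.apache.spark.sql.types.TimestampType}}):

```java
import java.util.Optional;

public class OracleLtzMapping {
    // Oracle's vendor code for TIMESTAMP WITH LOCAL TIME ZONE
    // (oracle.jdbc.OracleTypes.TIMESTAMPLTZ); absent from java.sql.Types.
    static final int TIMESTAMP_LTZ = -102;

    // Mirrors what a JdbcDialect.getCatalystType override would return:
    // Some(TimestampType) for the vendor code, None (empty) otherwise so
    // the default mappings still apply.
    static Optional<String> getCatalystType(int sqlType) {
        return sqlType == TIMESTAMP_LTZ
                ? Optional.of("TimestampType")
                : Optional.empty();
    }
}
```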







[jira] [Commented] (SPARK-26494) [Spark SQL] Using Spark to read the Oracle TIMESTAMP(6) WITH LOCAL TIME ZONE type fails: type can't be found

2019-03-01 Thread Sean Owen (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782193#comment-16782193
 ] 

Sean Owen commented on SPARK-26494:
---

Correct me if I'm wrong, but does a timestamp with a local time zone make 
sense in Spark?



