[ https://issues.apache.org/jira/browse/SPARK-10162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707526#comment-14707526 ]
Kevin Cox edited comment on SPARK-10162 at 8/21/15 10:05 PM:
-------------------------------------------------------------

This is probably because the filter argument is never passed through {{TimestampType.toInternal()}} (which I have monkey patched to handle timezones properly).


was (Author: kevincox):
This is probably because the filter argument is never passed through {{TimestampType.toInternal()}}, which I have monkey patched to handle timezones properly.

> PySpark filters with datetimes mess up when datetimes have timezones.
> ----------------------------------------------------------------------
>
>                 Key: SPARK-10162
>                 URL: https://issues.apache.org/jira/browse/SPARK-10162
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>            Reporter: Kevin Cox
>
> PySpark appears to ignore timezone information when filtering on (and
> working in general with) datetimes.
>
> Please see the example below. The generated filter in the query plan is 5
> hours off (my computer is EST).
> {code}
> In [1]: df = sqlContext.createDataFrame([], StructType([StructField("dt", TimestampType())]))
>
> In [2]: df.filter(df.dt > datetime(2000, 01, 01, tzinfo=UTC)).explain()
> Filter (dt#9 > 946702800000000)
>  Scan PhysicalRDD[dt#9]
> {code}
> Note that 946702800000000 == Sat 1 Jan 2000 05:00:00 UTC
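
For reference, the internal value in the filter is microseconds since the Unix epoch, so the off-by-five-hours claim is easy to check with plain Python (no Spark needed):

{code}
# Decode Spark SQL's internal timestamp value (microseconds since the
# Unix epoch) back into a wall-clock time.
from datetime import datetime, timedelta

print(datetime(1970, 1, 1) + timedelta(microseconds=946702800000000))
# 2000-01-01 05:00:00 -- midnight EST rendered as UTC, i.e. 5 hours off
{code}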
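The comment above mentions monkey patching {{TimestampType.toInternal()}}. Below is a minimal sketch of that kind of patch, assuming the Spark 1.5 {{pyspark.sql.types.TimestampType}} API; it is an illustration, not the author's actual code, and the helper name {{to_internal_tz_aware}} is hypothetical:

{code}
# A sketch of the kind of monkey patch described in the comment above --
# not the author's actual code. Assumes Spark 1.5's
# pyspark.sql.types.TimestampType; to_internal_tz_aware is a made-up name.
import calendar
import time

from pyspark.sql.types import TimestampType


def to_internal_tz_aware(self, dt):
    """Convert a datetime to microseconds since the epoch, honouring tzinfo."""
    if dt is None:
        return None
    if dt.tzinfo is not None:
        # utctimetuple() normalises an aware datetime to UTC, and
        # calendar.timegm() interprets its argument as UTC.
        seconds = calendar.timegm(dt.utctimetuple())
    else:
        # Naive datetimes keep the existing local-time behaviour.
        seconds = time.mktime(dt.timetuple())
    return int(seconds) * 1000000 + dt.microsecond


TimestampType.toInternal = to_internal_tz_aware
{code}

Note that, as the comment itself points out, the filter code path apparently never calls {{toInternal()}} at all, so a patch like this would only fix conversions that go through it (e.g. {{createDataFrame}}), not the literal embedded in the query plan.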