westonpace commented on a change in pull request #11358:
URL: https://github.com/apache/arrow/pull/11358#discussion_r733175188
##########
File path: docs/source/cpp/csv.rst
##########
@@ -190,6 +190,70 @@ dictionary-encoded string-like array. It switches to a
plain string-like
array when the threshold in :member:`ConvertOptions::auto_dict_max_cardinality`
is reached.
+Timestamp inference/parsing
+---------------------------
+
+If type inference is enabled, the CSV reader first tries to interpret
+string-like columns as timestamps. If all rows have some zone offset
+(e.g. ``Z`` or ``+0100``), even if the offsets are inconsistent, then the
+inferred type will be UTC timestamp. If no rows have a zone offset, then the
+inferred type will be timestamp without timezone. A mix of rows with/without
+offsets will result in a string column.
+
+If the type is explicitly specified as a timestamp without timezone ("naive"),
+then the reader will error on values with zone offsets in that column. Else, if
+the type is timestamp with timezone, the column values must either all have
+zone offsets or all lack zone offsets. In the former case, values are
+unambiguous, since each row specifies a precise time in UTC, but in the latter
+case, Arrow will currently interpret the timestamps as specifying values in UTC
+(i.e. as if they had the zone offset "Z" or "+0000"), *not* as values in the
+local time of the timezone.
Review comment:
I think my problem is that, in most places, the timestamp's timezone
parameter can be interpreted as "The time zone to use when displaying these
timestamps" and it isn't an input parameter. Here we are saying it is
sometimes an input parameter and sometimes not. It's ok, I'm not -1, just more
in the +0 range.
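As a small illustration of that "display parameter" reading, here is a minimal sketch using pyarrow's Python API (the values are made up): re-tagging an already-zoned column with a different timezone changes how the instants are rendered, not the instants themselves.
```python
import pyarrow as pa

# Epoch second 0, stored as a zoned timestamp (1970-01-01 00:00:00 UTC).
utc = pa.array([0], type=pa.timestamp("s", "UTC"))

# Casting to another timezone only swaps the display/interpretation metadata;
# the underlying instant is unchanged.
brussels = utc.cast(pa.timestamp("s", "Europe/Brussels"))

print(utc.to_pylist())       # 1970-01-01 00:00:00+00:00
print(brussels.to_pylist())  # the same instant, shown as 01:00 Brussels time
```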
What happens if a column of naive timestamps is inferred because no type is
given? I think, if I am reading this correctly, we convert it to a naive
timestamp type. So this means that
`read_csv(...).column("start_time").cast(pa.timestamp('s', 'Europe/Brussels'))`
would give a different answer than `read_csv(..., types={'start_time':
pa.timestamp('s', 'Europe/Brussels')}).column("start_time")`. Again, I don't
know if that is necessarily a bad thing or not.
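For concreteness, the two paths being compared look roughly like this in pyarrow's Python API (a sketch, not the C++ code under review; whether the two results hold the same instants is exactly the question above):
```python
import io
import pyarrow as pa
import pyarrow.csv as csv

data = b"start_time\n2021-01-01 12:00:00\n"
target = pa.timestamp("s", "Europe/Brussels")

# Path 1: let the reader infer a naive timestamp, then cast afterwards.
inferred = csv.read_csv(io.BytesIO(data))
cast_after = inferred.column("start_time").cast(target)

# Path 2: declare the zoned type up front via ConvertOptions.
opts = csv.ConvertOptions(column_types={"start_time": target})
declared = csv.read_csv(io.BytesIO(data), convert_options=opts)
declared_col = declared.column("start_time")
```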
Another (admittedly not great) option is to have a separate
"default_input_timezone" parameter as part of the conversion / parsing logic.
So then we would do:
```
naive timestamp -> assume_timezone(default_input_timezone) -> cast(target_type)
```
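A rough sketch of what that could look like, using `pyarrow.compute.assume_timezone` for the first step; note that `default_input_timezone` is a hypothetical option here, not an existing reader parameter:
```python
import pyarrow as pa
import pyarrow.compute as pc

def convert_timestamp_column(naive_column, default_input_timezone, target_type):
    # Step 1: interpret the zone-less values as local times in the
    # (hypothetical) default input timezone, producing zoned timestamps.
    aware = pc.assume_timezone(naive_column, timezone=default_input_timezone)
    # Step 2: cast to whatever type the user actually requested.
    return aware.cast(target_type)

# e.g. convert_timestamp_column(col, "Europe/Brussels",
#                               pa.timestamp("s", "Europe/Brussels"))
```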