[jira] [Commented] (HIVE-24693) Parquet Timestamp Values Read/Write Very Slow

2021-02-16 Thread David Mollitor (Jira)


[ https://issues.apache.org/jira/browse/HIVE-24693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285449#comment-17285449 ]

David Mollitor commented on HIVE-24693:
---

In working on this ticket, I learned something interesting:

{code:java|title=Timestamp.java}
private static final DateTimeFormatter PARSE_FORMATTER = new DateTimeFormatterBuilder()
  // Date
  .appendValue(YEAR, 1, 10, SignStyle.NORMAL).appendLiteral('-').appendValue(MONTH_OF_YEAR, 1, 2, SignStyle.NORMAL) ...

private static final DateTimeFormatter PRINT_FORMATTER = new DateTimeFormatterBuilder()
  // Date and Time Parts
  .append(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")) ...
{code}

When the *PARSE* formatter is built, it uses *YEAR*.  However, the *PRINT* formatter uses the pattern *yyyy*, which maps to *YEAR_OF_ERA*.  The equivalence is:

{{ChronoField.YEAR}} = "uuuu"
{{ChronoField.YEAR_OF_ERA}} = "yyyy"

So, while working on skipping the timestamp parsing, I stumbled on the fact that Hive is reading YEAR but displaying YEAR_OF_ERA, and those are not the same thing.  YEAR allows negative values, while YEAR_OF_ERA does not: a year that is negative in YEAR shows up in YEAR_OF_ERA as a positive year in the BCE era (for example, proleptic year -2000 corresponds to year-of-era 2001 BCE).  So, Hive is currently kind of whacky and out of sync for negative years.
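
To make the difference concrete, here is a small standalone snippet (not from the patch, just an illustration of the java.time behavior) showing how the same BCE date looks through the two fields and the two pattern letters:

{code:java}
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoField;

public class YearVsYearOfEra {
  public static void main(String[] args) {
    // Proleptic ISO year -2000 (a BCE date)
    LocalDate bce = LocalDate.of(-2000, 1, 1);
    System.out.println(bce.get(ChronoField.YEAR));        // -2000
    System.out.println(bce.get(ChronoField.YEAR_OF_ERA)); // 2001 (in the BCE era)
    // 'u' prints YEAR, 'y' prints YEAR_OF_ERA
    System.out.println(DateTimeFormatter.ofPattern("uuuu-MM-dd").format(bce)); // -2000-01-01
    System.out.println(DateTimeFormatter.ofPattern("yyyy-MM-dd").format(bce)); // 2001-01-01
  }
}
{code}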

> Parquet Timestamp Values Read/Write Very Slow
> -
>
> Key: HIVE-24693
> URL: https://issues.apache.org/jira/browse/HIVE-24693
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Parquet {{DataWritableWriter}} relies on {{NanoTimeUtils}} to convert a timestamp object into a binary value.  The way it does this is to call {{toString()}} on the timestamp object and then parse the String.  This particular timestamp does not carry a timezone, so the string is something like:
> {{2021-21-03 12:32:23....}}
> The parse code tries to parse the string assuming there is a time zone and, if there is not, falls back and applies the provided "default time zone".  As was noted in [HIVE-24353], if something fails to parse, it is very expensive to try to parse it again.  So, for each timestamp in the Parquet file, it:
> * Builds a string from the timestamp
> * Parses it (throws an exception, parses again)
> There is no need for this kind of string manipulation/parsing; it should just use the epoch millis/seconds/time stored internally in the Timestamp object.
> {code:java}
>   // Converts Timestamp to TimestampTZ.
>   public static TimestampTZ convert(Timestamp ts, ZoneId defaultTimeZone) {
> return parse(ts.toString(), defaultTimeZone);
>   }
> {code}
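
For illustration, the direct route could look roughly like the sketch below. This is only a sketch of the idea, not the actual patch: it assumes the Timestamp exposes its epoch seconds and nanoseconds (the accessor names here are placeholders) and that a TimestampTZ can be built from a ZonedDateTime.

{code:java}
// Sketch only: build the TimestampTZ straight from the epoch value instead of
// round-tripping through toString()/parse(). toEpochSecond()/getNanos() are
// illustrative accessor names for however the Timestamp stores its value.
public static TimestampTZ convert(Timestamp ts, ZoneId defaultTimeZone) {
  Instant instant = Instant.ofEpochSecond(ts.toEpochSecond(), ts.getNanos());
  return new TimestampTZ(instant.atZone(defaultTimeZone));
}
{code}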





[jira] [Commented] (HIVE-24693) Parquet Timestamp Values Read/Write Very Slow

2021-02-02 Thread David Mollitor (Jira)


[ https://issues.apache.org/jira/browse/HIVE-24693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17277390#comment-17277390 ]

David Mollitor commented on HIVE-24693:
---

https://github.com/apache/hive/pull/1938






[jira] [Commented] (HIVE-24693) Parquet Timestamp Values Read/Write Very Slow

2021-02-01 Thread David Mollitor (Jira)


[ https://issues.apache.org/jira/browse/HIVE-24693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17276595#comment-17276595 ]

David Mollitor commented on HIVE-24693:
---

[~klcopp] That may be the case, but there was a unit test that was generating 
negative dates.  That's what broke my work. Ugh.






[jira] [Commented] (HIVE-24693) Parquet Timestamp Values Read/Write Very Slow

2021-02-01 Thread Karen Coppage (Jira)


[ https://issues.apache.org/jira/browse/HIVE-24693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17276566#comment-17276566 ]

Karen Coppage commented on HIVE-24693:
--

[~belugabehr] Per the Wiki, Hive can handle years 0001-9999. However it doesn't really complain about years outside of that range. I once tried to get Hive to enforce this range but didn't get very far. FYI :)

BTW let me know if/when you want a review!






[jira] [Commented] (HIVE-24693) Parquet Timestamp Values Read/Write Very Slow

2021-01-29 Thread David Mollitor (Jira)


[ https://issues.apache.org/jira/browse/HIVE-24693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17275257#comment-17275257 ]

David Mollitor commented on HIVE-24693:
---

OK, the date formatter does not handle negative years.  It needs to be {{uuuu}} instead of {{yyyy}}.  Ouch.  This never worked.
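
A tiny standalone illustration of the difference (not Hive code), formatting a negative year with the two pattern letters:

{code:java}
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class NegativeYearFormatting {
  public static void main(String[] args) {
    LocalDateTime bce = LocalDateTime.of(-2000, 1, 1, 0, 0);
    // 'y' is year-of-era: the sign is dropped and the year shifts into the BCE era
    System.out.println(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss").format(bce)); // 2001-01-01 00:00:00
    // 'u' is the proleptic year: negative years keep their sign
    System.out.println(DateTimeFormatter.ofPattern("uuuu-MM-dd HH:mm:ss").format(bce)); // -2000-01-01 00:00:00
  }
}
{code}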






[jira] [Commented] (HIVE-24693) Parquet Timestamp Values Read/Write Very Slow

2021-01-29 Thread David Mollitor (Jira)


[ https://issues.apache.org/jira/browse/HIVE-24693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17275251#comment-17275251 ]

David Mollitor commented on HIVE-24693:
---

Latest:

{code:java}
  public static void main(String[] args) throws IOException, URISyntaxException
  {
    DateTimeFormatterBuilder builder = new DateTimeFormatterBuilder();
    // Date and time parts
    builder.append(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
    // Fractional part
    builder.optionalStart().appendFraction(ChronoField.NANO_OF_SECOND, 0, 9, true).optionalEnd();
    builder.appendZoneOrOffsetId();
    DateTimeFormatter PRINT_FORMATTER = builder.toFormatter();

    int daysSinceEpoch = -1133938638;

    LocalDate localDate = LocalDate.ofEpochDay(daysSinceEpoch);
    long epochMillis = localDate.atStartOfDay(ZoneOffset.UTC).toInstant().toEpochMilli();
    ZonedDateTime localDateTime = ZonedDateTime.ofInstant(Instant.ofEpochMilli(epochMillis), ZoneOffset.UTC);

    System.out.println(epochMillis);
    System.out.println(localDate);
    System.out.println(localDateTime);
    System.out.println(localDateTime.format(PRINT_FORMATTER));
  }
{code}

{code}
-97972298323200000
-3102649-06-17
-3102649-06-17T00:00Z
+3102650-06-17 00:00:00Z
{code}
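
For comparison, here is a small standalone variant of the snippet above (not from the patch) with only the pattern letter changed from 'yyyy' to 'uuuu'; the sign of the year should then survive formatting:

{code:java}
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.temporal.ChronoField;

public class NegativeYearPrint {
  public static void main(String[] args) {
    // Same steps as above, but with 'uuuu' (proleptic year) instead of 'yyyy' (year-of-era)
    DateTimeFormatter printFormatter = new DateTimeFormatterBuilder()
        .append(DateTimeFormatter.ofPattern("uuuu-MM-dd HH:mm:ss"))
        .optionalStart().appendFraction(ChronoField.NANO_OF_SECOND, 0, 9, true).optionalEnd()
        .appendZoneOrOffsetId()
        .toFormatter();

    LocalDate localDate = LocalDate.ofEpochDay(-1133938638);
    ZonedDateTime zdt = ZonedDateTime.ofInstant(
        localDate.atStartOfDay(ZoneOffset.UTC).toInstant(), ZoneOffset.UTC);
    System.out.println(zdt.format(printFormatter)); // -3102649-06-17 00:00:00Z
  }
}
{code}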






[jira] [Commented] (HIVE-24693) Parquet Timestamp Values Read/Write Very Slow

2021-01-29 Thread David Mollitor (Jira)


[ https://issues.apache.org/jira/browse/HIVE-24693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17275222#comment-17275222 ]

David Mollitor commented on HIVE-24693:
---

Hitting a weird issue.  There's a unit test that goes from Date to Timestamp 
that is failing.  It seems that the Timestamp value and the toString do not 
agree with each other:

 
{code:java}
  public static void main(String[] args) throws IOException, URISyntaxException
  {
    DateTimeFormatterBuilder builder = new DateTimeFormatterBuilder();
    // Date and time parts
    builder.append(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
    // Fractional part
    builder.optionalStart().appendFraction(ChronoField.NANO_OF_SECOND, 0, 9, true).optionalEnd();
    DateTimeFormatter PRINT_FORMATTER = builder.toFormatter();

    int daysSinceEpoch = -1133938638;

    LocalDate localDate = LocalDate.ofEpochDay(daysSinceEpoch);
    long epochMillis = localDate.atStartOfDay().toInstant(ZoneOffset.UTC).toEpochMilli();

    LocalDateTime localDateTime = LocalDateTime.ofInstant(Instant.ofEpochMilli(epochMillis), ZoneOffset.UTC);

    System.out.println(epochMillis);
    System.out.println(localDate);
    System.out.println(localDateTime.format(PRINT_FORMATTER));
  }
{code}

The epoch value is a big negative number, so it should come out as a negative date; however:

{code:none}
-97972298323200000
-3102649-06-17
+3102650-06-17 00:00:00
{code}

For some reason, the print-formatted date is different from the toString value.  Hmmm.



