[ 
https://issues.apache.org/jira/browse/BEAM-4862?focusedWorklogId=128806&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-128806
 ]

ASF GitHub Bot logged work on BEAM-4862:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 30/Jul/18 17:18
            Start Date: 30/Jul/18 17:18
    Worklog Time Spent: 10m 
      Work Description: chamikaramj closed pull request #6077: [BEAM-4862] 
Fixes bug in Spanner's MutationGroupEncoder by converting timestamps into Long 
and not Int.
URL: https://github.com/apache/beam/pull/6077
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/spanner/MutationGroupEncoder.java b/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/spanner/MutationGroupEncoder.java
index 77ede3ea058..4c97fac5074 100644
--- a/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/spanner/MutationGroupEncoder.java
+++ b/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/spanner/MutationGroupEncoder.java
@@ -478,7 +478,7 @@ private void decodePrimitive(
           if (isNull) {
             m.set(fieldName).to((Timestamp) null);
           } else {
-            int seconds = VarInt.decodeInt(bis);
+            long seconds = VarInt.decodeLong(bis);
             int nanoseconds = VarInt.decodeInt(bis);
             m.set(fieldName).to(Timestamp.ofTimeSecondsAndNanos(seconds, nanoseconds));
           }
diff --git a/sdks/java/io/google-cloud-platform/src/test/java/org/apache/beam/sdk/io/gcp/spanner/MutationGroupEncoderTest.java b/sdks/java/io/google-cloud-platform/src/test/java/org/apache/beam/sdk/io/gcp/spanner/MutationGroupEncoderTest.java
index 2509f4d4c35..a600551ed76 100644
--- a/sdks/java/io/google-cloud-platform/src/test/java/org/apache/beam/sdk/io/gcp/spanner/MutationGroupEncoderTest.java
+++ b/sdks/java/io/google-cloud-platform/src/test/java/org/apache/beam/sdk/io/gcp/spanner/MutationGroupEncoderTest.java
@@ -528,6 +528,36 @@ public void dateKeys() throws Exception {
     verifyEncodedOrdering(schema, "test", keys);
   }
 
+  @Test
+  public void decodeBasicTimestampMutationGroup() {
+    SpannerSchema spannerSchemaTimestamp =
+        SpannerSchema.builder().addColumn("timestampTest", "timestamp", "TIMESTAMP").build();
+    Timestamp timestamp1 = Timestamp.now();
+    Mutation mutation1 =
+        Mutation.newInsertOrUpdateBuilder("timestampTest").set("timestamp").to(timestamp1).build();
+    encodeAndVerify(g(mutation1), spannerSchemaTimestamp);
+
+    Timestamp timestamp2 = Timestamp.parseTimestamp("2001-01-01T00:00:00Z");
+    Mutation mutation2 =
+        Mutation.newInsertOrUpdateBuilder("timestampTest").set("timestamp").to(timestamp2).build();
+    encodeAndVerify(g(mutation2), spannerSchemaTimestamp);
+  }
+
+  @Test
+  public void decodeMinAndMaxTimestampMutationGroup() {
+    SpannerSchema spannerSchemaTimestamp =
+        SpannerSchema.builder().addColumn("timestampTest", "timestamp", "TIMESTAMP").build();
+    Timestamp timestamp1 = Timestamp.MIN_VALUE;
+    Mutation mutation1 =
+        Mutation.newInsertOrUpdateBuilder("timestampTest").set("timestamp").to(timestamp1).build();
+    encodeAndVerify(g(mutation1), spannerSchemaTimestamp);
+
+    Timestamp timestamp2 = Timestamp.MAX_VALUE;
+    Mutation mutation2 =
+        Mutation.newInsertOrUpdateBuilder("timestampTest").set("timestamp").to(timestamp2).build();
+    encodeAndVerify(g(mutation2), spannerSchemaTimestamp);
+  }
+
   @Test
   public void timestampKeys() throws Exception {
     SpannerSchema.Builder builder = SpannerSchema.builder();
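
The one-character fix above changes only the decode path: the seconds component of a Spanner TIMESTAMP is varint-encoded as a full long, but was read back with VarInt.decodeInt, which rejects any decoded value outside the 32-bit range. Since the epoch-seconds value of 0001-01-01T00:00:00Z is -62135596800, decoding threw the "varint overflow" IOException seen in the report. The sketch below reproduces that behaviour; it assumes Beam's org.apache.beam.sdk.util.VarInt utility is on the classpath, and the class and method names are illustrative only, not part of the PR:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.beam.sdk.util.VarInt;

public class VarIntTimestampSketch {
  public static void main(String[] args) throws IOException {
    // Epoch seconds of 0001-01-01T00:00:00Z, the minimum Cloud Spanner timestamp.
    long seconds = -62135596800L;

    // Varint-encode the value as a long (only the decode side changes in the diff above).
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    VarInt.encode(seconds, out);
    byte[] encoded = out.toByteArray();

    // Post-fix read: decodeLong round-trips the value.
    System.out.println(VarInt.decodeLong(new ByteArrayInputStream(encoded)));  // -62135596800

    // Pre-fix read: decodeInt rejects anything outside the 32-bit range.
    try {
      VarInt.decodeInt(new ByteArrayInputStream(encoded));
    } catch (IOException e) {
      System.out.println(e.getMessage());  // varint overflow -62135596800
    }
  }
}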


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

            Worklog Id:     (was: 128806)
            Time Spent: 2h 10m  (was: 2h)
    Remaining Estimate: 21h 50m  (was: 22h)

> varint overflow -62135596800 exception with Cloud Spanner Timestamp 
> 0001-01-01T00:00:00Z
> ----------------------------------------------------------------------------------------
>
>                 Key: BEAM-4862
>                 URL: https://issues.apache.org/jira/browse/BEAM-4862
>             Project: Beam
>          Issue Type: Bug
>          Components: io-java-gcp
>    Affects Versions: 2.5.0
>            Reporter: Eric Beach
>            Assignee: Chamikara Jayalath
>            Priority: Minor
>   Original Estimate: 24h
>          Time Spent: 2h 10m
>  Remaining Estimate: 21h 50m
>
> tl;dr - If you try to write a Timestamp of value "0001-01-01T00:00:00Z" as a 
> Spanner Mutation, you get an overflow error.
>  
> The crux of the issue appears to be that 0001-01-01T00:00:00Z, which is a
> valid Timestamp per
> [https://cloud.google.com/spanner/docs/data-types#timestamp-type], has an
> epoch-seconds value (-62135596800) that does not fit in a 32-bit integer. See
> the two lines of code below.
> [https://github.com/apache/beam/blob/release-2.5.0/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/spanner/MutationGroupEncoder.java#L453]
> [https://github.com/apache/beam/blob/279a05604b83a54e8e5a79e13d8761f94841f326/sdks/java/core/src/main/java/org/apache/beam/sdk/util/VarInt.java#L58]
>  
>  
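> As a quick illustration (plain java.time, not Beam-specific), the epoch-seconds value can be checked directly:
>
> {{long seconds = java.time.Instant.parse("0001-01-01T00:00:00Z").getEpochSecond();
> System.out.println(seconds);                      // -62135596800
> System.out.println(seconds < Integer.MIN_VALUE);  // true, outside the int range}}
>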
> Stack Trace
> {{Caused by: java.io.IOException: varint overflow -62135596800
>   at org.apache.beam.sdk.util.VarInt.decodeInt(VarInt.java:65)
>   at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.decodePrimitive(MutationGroupEncoder.java:453)
>   at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.decodeModification(MutationGroupEncoder.java:326)
>   at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.decodeMutation(MutationGroupEncoder.java:280)
>   at org.apache.beam.sdk.io.gcp.spanner.MutationGroupEncoder.decode(MutationGroupEncoder.java:264)
>   at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$BatchFn.processElement(SpannerIO.java:1030)
>   at org.apache.beam.sdk.io.gcp.spanner.SpannerIO$BatchFn$DoFnInvoker.invokeProcessElement(Unknown Source)
>   at org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:185)
>   at org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:146)
>   at com.google.cloud.dataflow.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:323)
>   at com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
>   at com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
>   at com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn$1.output(GroupAlsoByWindowsParDoFn.java:181)
>   at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner$1.outputWindowedValue(GroupAlsoByWindowFnRunner.java:102)
>   at com.google.cloud.dataflow.worker.util.BatchGroupAlsoByWindowViaIteratorsFn.processElement(BatchGroupAlsoByWindowViaIteratorsFn.java:124)
>   at com.google.cloud.dataflow.worker.util.BatchGroupAlsoByWindowViaIteratorsFn.processElement(BatchGroupAlsoByWindowViaIteratorsFn.java:53)
>   at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.invokeProcessElement(GroupAlsoByWindowFnRunner.java:115)
>   at com.google.cloud.dataflow.worker.GroupAlsoByWindowFnRunner.processElement(GroupAlsoByWindowFnRunner.java:73)
>   at com.google.cloud.dataflow.worker.GroupAlsoByWindowsParDoFn.processElement(GroupAlsoByWindowsParDoFn.java:113)
>   at com.google.cloud.dataflow.worker.util.common.worker.ParDoOperation.process(ParDoOperation.java:43)
>   at com.google.cloud.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:48)
>   at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:200)
>   at com.google.cloud.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:158)
>   at com.google.cloud.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:75)
>   at com.google.cloud.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:391)
>   at com.google.cloud.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:360)
>   at com.google.cloud.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:288)
>   at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:134)
>   at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:114)
>   at com.google.cloud.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:101)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)}}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
