[jira] [Commented] (BEAM-1946) Add IO module for restful api

2017-04-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/BEAM-1946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15980734#comment-15980734
 ] 

Jean-Baptiste Onofré commented on BEAM-1946:


FYI, Sergey and I work on the same team, and we have discussed CXF-RS 
with Spark ;)

I will push my branch to GitHub so you can see what I did.

> Add IO module for restful api
> -
>
> Key: BEAM-1946
> URL: https://issues.apache.org/jira/browse/BEAM-1946
> Project: Beam
>  Issue Type: New Feature
>  Components: sdk-java-extensions
>Reporter: JiJun Tang
>Assignee: Jean-Baptiste Onofré
>  Labels: IO, features, restful
>
> Create a RestIO to read or write data via a RESTful API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (BEAM-1998) Update json_values_test.py for ValueProvider

2017-04-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-1998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15980723#comment-15980723
 ] 

ASF GitHub Bot commented on BEAM-1998:
--

GitHub user mariapython opened a pull request:

https://github.com/apache/beam/pull/2658

[BEAM-1998] Add ValueProvider tests to json_value_test.py




You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mariapython/incubator-beam ppp_after2545

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2658.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2658


commit aa9a25fd57fd6f0b6ce97af21170e436453c2609
Author: Maria Garcia Herrero 
Date:   2017-04-24T04:22:35Z

Add ValueProvider tests to json_value_test.py




> Update json_values_test.py for ValueProvider
> 
>
> Key: BEAM-1998
> URL: https://issues.apache.org/jira/browse/BEAM-1998
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-py
>Reporter: Ahmet Altay
>Assignee: María GH
>






[GitHub] beam pull request #2658: [BEAM-1998] Add ValueProvider tests to json_value_t...

2017-04-23 Thread mariapython
GitHub user mariapython opened a pull request:

https://github.com/apache/beam/pull/2658

[BEAM-1998] Add ValueProvider tests to json_value_test.py




You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mariapython/incubator-beam ppp_after2545

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2658.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2658


commit aa9a25fd57fd6f0b6ce97af21170e436453c2609
Author: Maria Garcia Herrero 
Date:   2017-04-24T04:22:35Z

Add ValueProvider tests to json_value_test.py




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : beam_PostCommit_Python_Verify #1966

2017-04-23 Thread Apache Jenkins Server
See 




[jira] [Assigned] (BEAM-1758) Option to disable metrics reporting (Metrics API)

2017-04-23 Thread Aviem Zur (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-1758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aviem Zur reassigned BEAM-1758:
---

Assignee: Ismaël Mejía  (was: Aviem Zur)

> Option to disable metrics reporting (Metrics API)
> -
>
> Key: BEAM-1758
> URL: https://issues.apache.org/jira/browse/BEAM-1758
> Project: Beam
>  Issue Type: New Feature
>  Components: sdk-java-core
>Reporter: Ismaël Mejía
>Assignee: Ismaël Mejía
>Priority: Minor
>
> It could be useful to disable metrics reporting to reduce overhead in running 
> pipelines.





[jira] [Assigned] (BEAM-1919) Standard IO Metrics

2017-04-23 Thread Aviem Zur (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aviem Zur reassigned BEAM-1919:
---

Assignee: (was: Aviem Zur)

> Standard IO Metrics
> ---
>
> Key: BEAM-1919
> URL: https://issues.apache.org/jira/browse/BEAM-1919
> Project: Beam
>  Issue Type: New Feature
>  Components: sdk-ideas
>Reporter: Aviem Zur
>
> A foundation for standard IO metrics, which could be used by all IOs.
> A standard IO metric is a namespace + name combination to which all IOs that 
> report a metric in the same vein will adhere.
> Also, supply factories and members for these metrics, accessible to IOs via 
> the SDK of the language they were written in.
> [Proposal document|https://s.apache.org/standard-io-metrics]
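
The namespace + name idea above can be sketched without the Beam SDK. The metric 
names, the registry class, and the "IOs" below are all illustrative, not Beam's 
actual Metrics API:

```python
from collections import defaultdict

# A "standard" IO metric is a fixed namespace + name pair that every IO
# reporting the same kind of value agrees to use (names here are illustrative).
STANDARD_IO_METRICS = {
    "elements_read": ("io", "elements_read"),
    "bytes_read": ("io", "bytes_read"),
}


class MetricsRegistry:
    """Toy stand-in for a runner's metrics backend, not Beam's Metrics API."""

    def __init__(self):
        self._counters = defaultdict(int)

    def counter(self, namespace, name):
        key = (namespace, name)

        def inc(n=1):
            self._counters[key] += n

        return inc

    def value(self, namespace, name):
        return self._counters[(namespace, name)]


# Two different IOs reporting the same standard metric aggregate together.
registry = MetricsRegistry()
ns, name = STANDARD_IO_METRICS["elements_read"]
kafka_counter = registry.counter(ns, name)  # hypothetical Kafka-style IO
file_counter = registry.counter(ns, name)   # hypothetical file-based IO
kafka_counter(10)
file_counter(5)
print(registry.value("io", "elements_read"))  # -> 15
```

Because both IOs use the same namespace + name, a monitoring system can query 
one well-known metric instead of one ad-hoc name per IO.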





[jira] [Commented] (BEAM-1575) Add ValidatesRunner test to PipelineTest.test_metrics_in_source

2017-04-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-1575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15980683#comment-15980683
 ] 

ASF GitHub Bot commented on BEAM-1575:
--

Github user asfgit closed the pull request at:

https://github.com/apache/beam/pull/2593


> Add ValidatesRunner test to PipelineTest.test_metrics_in_source
> ---
>
> Key: BEAM-1575
> URL: https://issues.apache.org/jira/browse/BEAM-1575
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-py
>Reporter: Pablo Estrada
>Assignee: Pablo Estrada
>
> Currently, the source works only in unit tests. We need a source that can be 
> used in all runners.





[GitHub] beam pull request #2593: [BEAM-1575] Adding validatesrunner test for metrics...

2017-04-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/beam/pull/2593




[1/2] beam git commit: Adding validatesrunner test for sources

2017-04-23 Thread chamikara
Repository: beam
Updated Branches:
  refs/heads/master a67019739 -> e5507d827


Adding validatesrunner test for sources


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/9b6e74e8
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/9b6e74e8
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/9b6e74e8

Branch: refs/heads/master
Commit: 9b6e74e8bd0ab5c357e09c0b8ea245ba8dc7ad5c
Parents: a670197
Author: Pablo 
Authored: Wed Apr 19 09:44:54 2017 -0700
Committer: chamik...@google.com 
Committed: Sun Apr 23 20:23:20 2017 -0700

--
 .../apache_beam/examples/snippets/snippets.py   | 98 ++--
 .../apache_beam/transforms/ptransform_test.py   | 27 ++
 2 files changed, 78 insertions(+), 47 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/beam/blob/9b6e74e8/sdks/python/apache_beam/examples/snippets/snippets.py
--
diff --git a/sdks/python/apache_beam/examples/snippets/snippets.py 
b/sdks/python/apache_beam/examples/snippets/snippets.py
index 85ab864..c566914 100644
--- a/sdks/python/apache_beam/examples/snippets/snippets.py
+++ b/sdks/python/apache_beam/examples/snippets/snippets.py
@@ -570,6 +570,57 @@ def examples_wordcount_debugging(renames):
   p.run()
 
 
+import apache_beam as beam
+from apache_beam.io import iobase
+from apache_beam.io.range_trackers import OffsetRangeTracker
+from apache_beam.transforms.core import PTransform
+from apache_beam.utils.pipeline_options import PipelineOptions
+
+
+# Defining a new source.
+# [START model_custom_source_new_source]
+class CountingSource(iobase.BoundedSource):
+
+  def __init__(self, count):
+    self.records_read = Metrics.counter(self.__class__, 'recordsRead')
+    self._count = count
+
+  def estimate_size(self):
+    return self._count
+
+  def get_range_tracker(self, start_position, stop_position):
+    if start_position is None:
+      start_position = 0
+    if stop_position is None:
+      stop_position = self._count
+
+    return OffsetRangeTracker(start_position, stop_position)
+
+  def read(self, range_tracker):
+    for i in range(self._count):
+      if not range_tracker.try_claim(i):
+        return
+      self.records_read.inc()
+      yield i
+
+  def split(self, desired_bundle_size, start_position=None,
+            stop_position=None):
+    if start_position is None:
+      start_position = 0
+    if stop_position is None:
+      stop_position = self._count
+
+    bundle_start = start_position
+    while bundle_start < self._count:
+      bundle_stop = max(self._count, bundle_start + desired_bundle_size)
+      yield iobase.SourceBundle(weight=(bundle_stop - bundle_start),
+                                source=self,
+                                start_position=bundle_start,
+                                stop_position=bundle_stop)
+      bundle_start = bundle_stop
+# [END model_custom_source_new_source]
+
+
 def model_custom_source(count):
   """Demonstrates creating a new custom source and using it in a pipeline.
 
@@ -595,53 +646,6 @@ def model_custom_source(count):
 
   """
 
-  import apache_beam as beam
-  from apache_beam.io import iobase
-  from apache_beam.io.range_trackers import OffsetRangeTracker
-  from apache_beam.transforms.core import PTransform
-  from apache_beam.utils.pipeline_options import PipelineOptions
-
-  # Defining a new source.
-  # [START model_custom_source_new_source]
-  class CountingSource(iobase.BoundedSource):
-
-    def __init__(self, count):
-      self._count = count
-
-    def estimate_size(self):
-      return self._count
-
-    def get_range_tracker(self, start_position, stop_position):
-      if start_position is None:
-        start_position = 0
-      if stop_position is None:
-        stop_position = self._count
-
-      return OffsetRangeTracker(start_position, stop_position)
-
-    def read(self, range_tracker):
-      for i in range(self._count):
-        if not range_tracker.try_claim(i):
-          return
-        yield i
-
-    def split(self, desired_bundle_size, start_position=None,
-              stop_position=None):
-      if start_position is None:
-        start_position = 0
-      if stop_position is None:
-        stop_position = self._count
-
-      bundle_start = start_position
-      while bundle_start < self._count:
-        bundle_stop = max(self._count, bundle_start + desired_bundle_size)
-        yield iobase.SourceBundle(weight=(bundle_stop - bundle_start),
-                                  source=self,
-                                  start_position=bundle_start,
-                                  stop_position=bundle_stop)
-        bundle_start = bundle_stop
-  # [END model_custom_source_new_source]
-
   # Using the source in an 
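
The archived diff ends mid-line here. For readers who want to see the 
read/split protocol end to end, below is a self-contained toy sketch with 
`OffsetRangeTracker` replaced by a minimal stand-in (all names are 
illustrative, not Beam's real classes). Note that the snippet above computes 
`bundle_stop` with `max()`, which appears to always produce a single bundle 
covering the whole range; the sketch uses `min()`, which is what actually 
bounds bundle size.

```python
class ToyOffsetRangeTracker:
    """Minimal stand-in for apache_beam's OffsetRangeTracker, just enough to
    drive the read() loop; a real tracker also enforces monotonic claims and
    thread-safe dynamic splitting."""

    def __init__(self, start, stop):
        self._start = start
        self._stop = stop

    def try_claim(self, position):
        # Claim succeeds while the position is inside the tracked range.
        return position < self._stop


def read(count, tracker):
    # Same loop shape as CountingSource.read, minus the Metrics counter.
    for i in range(count):
        if not tracker.try_claim(i):
            return
        yield i


def split_offsets(count, desired_bundle_size):
    # Same loop shape as CountingSource.split, but with min() so that
    # desired_bundle_size actually bounds each bundle.
    bundle_start = 0
    while bundle_start < count:
        bundle_stop = min(count, bundle_start + desired_bundle_size)
        yield (bundle_start, bundle_stop)
        bundle_start = bundle_stop


print(list(read(5, ToyOffsetRangeTracker(0, 5))))  # -> [0, 1, 2, 3, 4]
print(list(split_offsets(10, 4)))  # -> [(0, 4), (4, 8), (8, 10)]
```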

[2/2] beam git commit: This closes #2593

2017-04-23 Thread chamikara
This closes #2593


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/e5507d82
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/e5507d82
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/e5507d82

Branch: refs/heads/master
Commit: e5507d82772c645fcf9024067089a4fe5c67e1f2
Parents: a670197 9b6e74e
Author: chamik...@google.com 
Authored: Sun Apr 23 20:25:03 2017 -0700
Committer: chamik...@google.com 
Committed: Sun Apr 23 20:25:03 2017 -0700

--
 .../apache_beam/examples/snippets/snippets.py   | 98 ++--
 .../apache_beam/transforms/ptransform_test.py   | 27 ++
 2 files changed, 78 insertions(+), 47 deletions(-)
--




[jira] [Assigned] (BEAM-1944) Add Source Watermark Metrics in Spark runner

2017-04-23 Thread Jingsong Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned BEAM-1944:
--

Assignee: Jingsong Lee

> Add Source Watermark Metrics in Spark runner
> 
>
> Key: BEAM-1944
> URL: https://issues.apache.org/jira/browse/BEAM-1944
> Project: Beam
>  Issue Type: Sub-task
>  Components: runner-spark
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>






[jira] [Assigned] (BEAM-1942) Add Source Watermark Metrics in Flink Runner

2017-04-23 Thread Jingsong Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned BEAM-1942:
--

Assignee: Jingsong Lee

> Add Source Watermark Metrics in Flink Runner
> 
>
> Key: BEAM-1942
> URL: https://issues.apache.org/jira/browse/BEAM-1942
> Project: Beam
>  Issue Type: Sub-task
>  Components: runner-flink
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>






[jira] [Assigned] (BEAM-1941) Add Source Watermark Metrics in Runners

2017-04-23 Thread Jingsong Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-1941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned BEAM-1941:
--

Assignee: Jingsong Lee

> Add Source Watermark Metrics in Runners
> ---
>
> Key: BEAM-1941
> URL: https://issues.apache.org/jira/browse/BEAM-1941
> Project: Beam
>  Issue Type: New Feature
>  Components: runner-ideas
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>
> Source watermark metrics show the consumer latency of a source. 
> They let the user gauge the health of the job and can be used for 
> monitoring and alerting.
> Since each runner is likely already tracking a watermark, another option here 
> is to just have the runner report it appropriately, rather than having the 
> source report it using metrics. This also addresses the fact that even if the 
> source has advanced to 8:00, the runner may still know about buffered 
> elements at 7:00, and so not advance the watermark all the way to 8:00. 
> [~bchambers]
> Includes:
> 1. Source watermark (`min` amongst all splits):
>    type = Gauge, namespace = io, name = source_watermark
> 2. Source watermark per split:
>    type = Gauge, namespace = io.splits, name = .source_watermark
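
The two proposed gauges can be modeled in a few lines of plain Python. The 
split ids, watermark values (integer minutes), and the per-split gauge naming 
below are illustrative:

```python
# split id -> event-time watermark, in minutes (illustrative values)
split_watermarks = {0: 800, 1: 750, 2: 790}


def per_split_gauges(watermarks):
    # 2. Source watermark per split: one Gauge in the io.splits namespace
    # per split (the exact per-split name format here is an assumption).
    return {f"io.splits/{s}.source_watermark": w for s, w in watermarks.items()}


def source_watermark(watermarks):
    # 1. Overall source watermark: Gauge io/source_watermark, the min
    # amongst all splits, since the slowest split holds the source back.
    return min(watermarks.values())


print(source_watermark(split_watermarks))  # -> 750
```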





[jira] [Commented] (BEAM-644) Primitive to shift the watermark while assigning timestamps

2017-04-23 Thread Pei He (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15980645#comment-15980645
 ] 

Pei He commented on BEAM-644:
-

I think this also applies to a more general issue: data unorderedness/delay at 
the user-application level.
For example, I have an upstream system that collects data and injects it into 
Kafka; it is then processed by a Beam pipeline.

User-application-level unorderedness depends on the SLA of the upstream 
system. I think this issue is similar to the one discussed here.

(Beam-system-level unorderedness could be solved by a Kafka source that 
estimates watermarks.)

> Primitive to shift the watermark while assigning timestamps
> ---
>
> Key: BEAM-644
> URL: https://issues.apache.org/jira/browse/BEAM-644
> Project: Beam
>  Issue Type: New Feature
>  Components: beam-model, beam-model-runner-api
>Reporter: Kenneth Knowles
>
> There is a general need, especially important in the presence of 
> SplittableDoFn, to be able to assign new timestamps to elements without 
> making them late or droppable.
>  - DoFn.withAllowedTimestampSkew is inadequate, because it simply allows one 
> to produce late data, but does not allow one to shift the watermark so the 
> new data is on-time.
>  - For a SplittableDoFn, one may receive an element such as the name of a log 
> file that contains elements for the day preceding the log file. The timestamp 
> on the filename must currently be the beginning of the log. If such elements 
> are constantly flowing, it may be OK, but since we don't know that the element 
> is coming, in the absence of data the watermark may advance. We need a way to 
> keep it far enough back even in the absence of data holding it back.
> One idea is a new primitive ShiftWatermark / AdjustTimestamps with the 
> following pieces:
>  - A constant duration (positive or negative) D by which to shift the 
> watermark.
>  - A function from TimestampedElement to new timestamp that is >= t + D
> So, for example, AdjustTimestamps(<-60 minutes>, f) would allow f to make 
> timestamps up to 60 minutes earlier.
> With this primitive added, outputWithTimestamp and withAllowedTimestampSkew 
> could be removed, simplifying DoFn.
> Alternatively, all of this functionality could be bolted on to DoFn.
> This ticket is not a proposal, but a record of the issue and ideas that were 
> mentioned.
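
The AdjustTimestamps(D, f) sketch above can be modeled in plain Python. 
Integer minutes stand in for timestamps; the function and its contract follow 
the ticket's idea and are not an existing Beam API:

```python
def adjust_timestamps(timestamped, shift, fn):
    # Sketch of the proposed AdjustTimestamps(D, f) contract: the runner would
    # hold the watermark back by `shift` (D, which may be negative), and f may
    # move each timestamp to any value >= t + shift, so the shifted data stays
    # on-time rather than becoming late.
    out = []
    for t, value in timestamped:
        new_t = fn(t, value)
        if new_t < t + shift:
            raise ValueError("f(t) must be >= t + D")
        out.append((new_t, value))
    return out


# AdjustTimestamps(<-60 minutes>, f): f may move timestamps up to 60 minutes
# earlier. 480 = 8:00 expressed as minutes since midnight.
events = [(480, "log-file")]
print(adjust_timestamps(events, -60, lambda t, v: t - 30))  # -> [(450, 'log-file')]
```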





Build failed in Jenkins: beam_PerformanceTests_JDBC #144

2017-04-23 Thread Apache Jenkins Server
See 


Changes:

[altay] [BEAM-1988] Migrate from utils.path to BFS

--
[...truncated 843.58 KB...]
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:261)
at 
com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:55)
at 
com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:43)
at 
com.google.cloud.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:78)
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.create(MapTaskExecutorFactory.java:152)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:272)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:127)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:94)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
(a398c18872b50437): java.lang.RuntimeException: 
org.apache.beam.sdk.util.UserCodeException: org.postgresql.util.PSQLException: 
The connection attempt failed.
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:289)
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:261)
at 
com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:55)
at 
com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:43)
at 
com.google.cloud.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:78)
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.create(MapTaskExecutorFactory.java:152)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:272)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:127)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:94)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.beam.sdk.util.UserCodeException: 
org.postgresql.util.PSQLException: The connection attempt failed.
at 
org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:36)
at 
org.apache.beam.sdk.io.jdbc.JdbcIO$Read$ReadFn$auxiliary$322nJwqQ.invokeSetup(Unknown
 Source)
at 
com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.deserializeCopy(DoFnInstanceManagers.java:66)
at 
com.google.cloud.dataflow.worker.runners.worker.DoFnInstanceManagers$ConcurrentQueueInstanceManager.peek(DoFnInstanceManagers.java:48)
at 
com.google.cloud.dataflow.worker.runners.worker.UserParDoFnFactory.create(UserParDoFnFactory.java:99)
at 
com.google.cloud.dataflow.worker.runners.worker.DefaultParDoFnFactory.create(DefaultParDoFnFactory.java:70)
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.createParDoOperation(MapTaskExecutorFactory.java:363)
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:278)
... 14 more
Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
at 
org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:272)
at 
org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at 

Build failed in Jenkins: beam_PerformanceTests_Dataflow #338

2017-04-23 Thread Apache Jenkins Server
See 


Changes:

[altay] [BEAM-1988] Migrate from utils.path to BFS

--
[...truncated 270.92 KB...]
error: unable to resolve reference refs/remotes/origin/pr/2505/merge: No such 
file or directory
 ! 0f11265...780e3cd refs/pull/2505/merge -> origin/pr/2505/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2507/merge: No such 
file or directory
 ! b448bef...16e9c65 refs/pull/2507/merge -> origin/pr/2507/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2508/merge: No such 
file or directory
 ! a9fedd2...10b9545 refs/pull/2508/merge -> origin/pr/2508/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2511/merge: No such 
file or directory
 ! afe991b...c82ff73 refs/pull/2511/merge -> origin/pr/2511/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2514/merge: No such 
file or directory
 ! 0ae3202...ec1b6e0 refs/pull/2514/merge -> origin/pr/2514/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2517/merge: No such 
file or directory
 ! fa0c57f...f442410 refs/pull/2517/merge -> origin/pr/2517/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2518/merge: No such 
file or directory
 ! 10a8893...54a9356 refs/pull/2518/merge -> origin/pr/2518/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2519/merge: No such 
file or directory
 ! 98ea9f6...aad7043 refs/pull/2519/merge -> origin/pr/2519/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2520/merge: No such 
file or directory
 ! fc04459...07f8717 refs/pull/2520/merge -> origin/pr/2520/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2522/merge: No such 
file or directory
 ! c99fa26...c740b2d refs/pull/2522/merge -> origin/pr/2522/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2525/merge: No such 
file or directory
 ! 441d1b2...7e01908 refs/pull/2525/merge -> origin/pr/2525/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2527/merge: No such 
file or directory
 ! 216da5f...feee33b refs/pull/2527/merge -> origin/pr/2527/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2528/merge: No such 
file or directory
 ! 01b3137...6ee7dcc refs/pull/2528/merge -> origin/pr/2528/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2531/merge: No such 
file or directory
 ! 49d5bde...e435821 refs/pull/2531/merge -> origin/pr/2531/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2534/merge: No such 
file or directory
 ! 081af1f...ef9e0fc refs/pull/2534/merge -> origin/pr/2534/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2535/merge: No such 
file or directory
 ! 7d4bdcb...4ad387e refs/pull/2535/merge -> origin/pr/2535/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2538/head: No such 
file or directory
 ! e1f3b03...142e469 refs/pull/2538/head -> origin/pr/2538/head  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2538/merge: No such 
file or directory
 ! 8c865f5...f824c27 refs/pull/2538/merge -> origin/pr/2538/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2541/merge: No such 
file or directory
 ! a7fdea3...8b9aa21 refs/pull/2541/merge -> origin/pr/2541/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2544/merge: No such 
file or directory
 ! e52973a...4c36b29 refs/pull/2544/merge -> origin/pr/2544/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2555/merge: No such 
file or directory
 ! 219db6f...04900c3 refs/pull/2555/merge -> origin/pr/2555/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2573/merge: No such 
file or directory
 ! ff7c24f...c92f372 refs/pull/2573/merge -> origin/pr/2573/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2578/merge: No such 
file or directory
 ! 9b3d521...2a90ded refs/pull/2578/merge -> origin/pr/2578/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2581/merge: No such 
file or directory
 ! b905438...b6f4753 refs/pull/2581/merge -> origin/pr/2581/merge  (unable to 
update local ref)
error: unable to resolve reference refs/remotes/origin/pr/2591/merge: No such 
file or directory
 ! 

[jira] [Commented] (BEAM-1946) Add IO module for restful api

2017-04-23 Thread JiJun Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-1946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15980604#comment-15980604
 ] 

JiJun Tang commented on BEAM-1946:
--

Another example:
http://sberyozkin.blogspot.com/2016/09/cxf-jax-rs-20-perfect-http-spark.html

> Add IO module for restful api
> -
>
> Key: BEAM-1946
> URL: https://issues.apache.org/jira/browse/BEAM-1946
> Project: Beam
>  Issue Type: New Feature
>  Components: sdk-java-extensions
>Reporter: JiJun Tang
>Assignee: Jean-Baptiste Onofré
>  Labels: IO, features, restful
>
> Create a RestIO to read or write data via a RESTful API.





[jira] [Updated] (BEAM-2025) WordCount Example lacks a link to Python full source code

2017-04-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/BEAM-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismaël Mejía updated BEAM-2025:
---
Summary: WordCount Example lacks a link to Python full source code  (was: 
WordCount Example lacks a link to Pythn full source code)

> WordCount Example lacks a link to Python full source code
> -
>
> Key: BEAM-2025
> URL: https://issues.apache.org/jira/browse/BEAM-2025
> Project: Beam
>  Issue Type: Task
>  Components: website
>Reporter: Mitar
>Assignee: Davor Bonaci
>
> I am reading the WordCount Example 
> (https://beam.apache.org/get-started/wordcount-example/), but despite 
> switching to the Python SDK, the "To run this example, follow the instructions in 
> the Beam Examples README. To view the full code, see MinimalWordCount." text 
> points only to the Java example. I could not find the full code in Python for 
> this example.





Jenkins build is back to stable : beam_PostCommit_Java_ValidatesRunner_Spark #1755

2017-04-23 Thread Apache Jenkins Server
See 




[jira] [Resolved] (BEAM-715) Migrate AvroHDFSFileSource/HDFSFileSource/HDFSFileSink to use AutoValue to reduce boilerplate

2017-04-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/BEAM-715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismaël Mejía resolved BEAM-715.
---
   Resolution: Implemented
Fix Version/s: Not applicable

I checked the implementation and somebody else already fixed this.

> Migrate AvroHDFSFileSource/HDFSFileSource/HDFSFileSink to use AutoValue to 
> reduce boilerplate
> -
>
> Key: BEAM-715
> URL: https://issues.apache.org/jira/browse/BEAM-715
> Project: Beam
>  Issue Type: Improvement
>  Components: sdk-java-extensions
>Reporter: Luke Cwik
>Assignee: James Malone
>Priority: Minor
>  Labels: io, simple, starter
> Fix For: Not applicable
>
>
> Use the AutoValue functionality to reduce boilerplate.
> See this PR for an example:
> https://github.com/apache/incubator-beam/pull/1054





[jira] [Commented] (BEAM-2058) BigQuery load job id should be generated at run time, not submission time

2017-04-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-2058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15980519#comment-15980519
 ] 

ASF GitHub Bot commented on BEAM-2058:
--

GitHub user reuvenlax opened a pull request:

https://github.com/apache/beam/pull/2657

[BEAM-2058] Generate BigQuery load job at runtime

Generate the load job at run time instead of submission time.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/reuvenlax/incubator-beam repeatable_bq_sink

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2657.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2657


commit 01b3a0f89c7e81a655afe375e7450cf094cd139a
Author: Reuven Lax 
Date:   2017-04-23T19:53:00Z

Make batch loads repeatable across different invocations of the same 
template job.




> BigQuery load job id should be generated at run time, not submission time
> -
>
> Key: BEAM-2058
> URL: https://issues.apache.org/jira/browse/BEAM-2058
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-gcp
>Reporter: Reuven Lax
>Assignee: Daniel Halperin
>
> Currently the job id is generated at submission time, which means that 
> rerunning template jobs will produce the same job id. Generate at run time 
> instead, so a different job id is generated on each execution.
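
The fix can be illustrated with a plain-Python sketch (names below are 
hypothetical, not the actual BigQueryIO implementation): the id must come from 
something evaluated at execution time, such as a UUID, rather than a constant 
baked in when the template is constructed.

```python
import uuid

# Submission-time id: computed once when the template is built, so every
# re-run of the template would reuse it (illustrative value).
SUBMISSION_TIME_JOB_ID = "beam_load_mytemplate_001"


def runtime_job_id(prefix="beam_load"):
    # Evaluated when the pipeline actually runs, so each execution of the
    # same template gets a fresh load job id and BigQuery does not treat the
    # new load as a duplicate of an earlier one.
    return f"{prefix}_{uuid.uuid4().hex}"


print(runtime_job_id() == runtime_job_id())  # -> False
```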





[GitHub] beam pull request #2657: [BEAM-2058] Generate BigQuery load job at runtime

2017-04-23 Thread reuvenlax
GitHub user reuvenlax opened a pull request:

https://github.com/apache/beam/pull/2657

[BEAM-2058] Generate BigQuery load job at runtime

Generate the load job at run time instead of submission time.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/reuvenlax/incubator-beam repeatable_bq_sink

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/beam/pull/2657.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2657


commit 01b3a0f89c7e81a655afe375e7450cf094cd139a
Author: Reuven Lax 
Date:   2017-04-23T19:53:00Z

Make batch loads repeatable across different invocations of the same 
template job.






[jira] [Created] (BEAM-2058) BigQuery load job id should be generated at run time, not submission time

2017-04-23 Thread Reuven Lax (JIRA)
Reuven Lax created BEAM-2058:


 Summary: BigQuery load job id should be generated at run time, not 
submission time
 Key: BEAM-2058
 URL: https://issues.apache.org/jira/browse/BEAM-2058
 Project: Beam
  Issue Type: Bug
  Components: sdk-java-gcp
Reporter: Reuven Lax
Assignee: Daniel Halperin


Currently the job id is generated at submission time, which means that 
rerunning template jobs will produce the same job id. Generate at run time 
instead, so a different job id is generated on each execution.
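The intended fix can be sketched in plain Python (a hypothetical illustration, not the actual BigQueryIO code; the function name and id format are made up): mint the id inside a callable that runs at execution time, so each run of the same template produces a fresh id.

```python
import time
import uuid

def make_job_id_factory(job_name):
    """Return a callable that mints a fresh load-job id at run time.

    Because the id is produced when the callable is invoked (execution
    time), not when the factory is created (submission time), re-running
    the same template produces a different id on each execution.
    """
    def job_id():
        return '%s_%s_%d' % (job_name, uuid.uuid4().hex, int(time.time() * 1000))
    return job_id

factory = make_job_id_factory('beam_bq_load')   # created once, at "submission"
first, second = factory(), factory()            # two "executions", two distinct ids
```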



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: beam_PostCommit_Python_Verify #1965

2017-04-23 Thread Apache Jenkins Server
See 


--
[...truncated 247.33 KB...]
 x [deleted] (none) -> origin/pr/942/head
 x [deleted] (none) -> origin/pr/942/merge
[... further deleted origin/pr/*/head and origin/pr/*/merge refs (pr/943
through pr/989) elided ...]
 

[jira] [Commented] (BEAM-1988) utils.path.join does not correctly handle GCS bucket roots

2017-04-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/BEAM-1988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15980517#comment-15980517
 ] 

ASF GitHub Bot commented on BEAM-1988:
--

Github user asfgit closed the pull request at:

https://github.com/apache/beam/pull/2643


> utils.path.join does not correctly handle GCS bucket roots
> --
>
> Key: BEAM-1988
> URL: https://issues.apache.org/jira/browse/BEAM-1988
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-py
>Reporter: Ahmet Altay
>Assignee: Sourabh Bajaj
> Fix For: First stable release
>
>
> Here:
> https://github.com/apache/beam/blob/master/sdks/python/apache_beam/utils/path.py#L22
> Joining a bucket root with a filename, e.g. (gs://mybucket/ , myfile), results 
> in the invalid 'gs://mybucket//myfile'; note the double // between mybucket and 
> myfile. (It actually does not handle anything that already ends with {{/}} 
> correctly.)
> [~sb2nov], could you take this one? Also, should the {{join}} operation move 
> to BeamFileSystem-level code?
> (cc: [~chamikara])



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] beam pull request #2643: [BEAM-1988] Migrate from utils.path to BFS

2017-04-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/beam/pull/2643


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[1/2] beam git commit: [BEAM-1988] Migrate from utils.path to BFS

2017-04-23 Thread altay
Repository: beam
Updated Branches:
  refs/heads/master 1dce98f07 -> a67019739


[BEAM-1988] Migrate from utils.path to BFS


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/9b7bbc2d
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/9b7bbc2d
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/9b7bbc2d

Branch: refs/heads/master
Commit: 9b7bbc2dbce01605fa4a4e8ddebaf2bc648d6f9b
Parents: 1dce98f
Author: Sourabh Bajaj 
Authored: Fri Apr 21 16:39:34 2017 -0700
Committer: Ahmet Altay 
Committed: Sun Apr 23 12:54:46 2017 -0700

--
 sdks/python/apache_beam/io/gcp/gcsfilesystem.py |  5 +-
 .../apache_beam/io/gcp/gcsfilesystem_test.py| 10 ++-
 .../runners/dataflow/internal/apiclient.py  |  8 ++-
 .../runners/dataflow/internal/dependency.py | 23 ---
 .../dataflow/internal/dependency_test.py|  5 +-
 sdks/python/apache_beam/utils/__init__.py   |  4 --
 sdks/python/apache_beam/utils/path.py   | 46 -
 sdks/python/apache_beam/utils/path_test.py  | 70 
 8 files changed, 31 insertions(+), 140 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/beam/blob/9b7bbc2d/sdks/python/apache_beam/io/gcp/gcsfilesystem.py
--
diff --git a/sdks/python/apache_beam/io/gcp/gcsfilesystem.py 
b/sdks/python/apache_beam/io/gcp/gcsfilesystem.py
index 99f27f8..fdc4757 100644
--- a/sdks/python/apache_beam/io/gcp/gcsfilesystem.py
+++ b/sdks/python/apache_beam/io/gcp/gcsfilesystem.py
@@ -46,10 +46,7 @@ class GCSFileSystem(FileSystem):
   raise ValueError('Basepath %r must be GCS path.', basepath)
 path = basepath
 for p in paths:
-  if path == '' or path.endswith('/'):
-path += p
-  else:
-path += '/' + p
+  path = path.rstrip('/') + '/' + p.lstrip('/')
 return path
 
   def mkdirs(self, path):

http://git-wip-us.apache.org/repos/asf/beam/blob/9b7bbc2d/sdks/python/apache_beam/io/gcp/gcsfilesystem_test.py
--
diff --git a/sdks/python/apache_beam/io/gcp/gcsfilesystem_test.py 
b/sdks/python/apache_beam/io/gcp/gcsfilesystem_test.py
index d6a8fd7..0669bf2 100644
--- a/sdks/python/apache_beam/io/gcp/gcsfilesystem_test.py
+++ b/sdks/python/apache_beam/io/gcp/gcsfilesystem_test.py
@@ -42,8 +42,16 @@ class GCSFileSystemTest(unittest.TestCase):
  file_system.join('gs://bucket/path', 'to', 'file'))
 self.assertEqual('gs://bucket/path/to/file',
  file_system.join('gs://bucket/path', 'to/file'))
-self.assertEqual('gs://bucket/path//to/file',
+self.assertEqual('gs://bucket/path/to/file',
  file_system.join('gs://bucket/path', '/to/file'))
+self.assertEqual('gs://bucket/path/to/file',
+ file_system.join('gs://bucket/path/', 'to', 'file'))
+self.assertEqual('gs://bucket/path/to/file',
+ file_system.join('gs://bucket/path/', 'to/file'))
+self.assertEqual('gs://bucket/path/to/file',
+ file_system.join('gs://bucket/path/', '/to/file'))
+with self.assertRaises(ValueError):
+  file_system.join('/bucket/path/', '/to/file')
 
   @mock.patch('apache_beam.io.gcp.gcsfilesystem.gcsio')
   def test_match_multiples(self, mock_gcsio):
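The corrected join semantics exercised by the tests above can be reproduced standalone (a sketch mirroring the one-line fix in the patch; the function name is hypothetical):

```python
def gcs_join(basepath, *paths):
    """Join components onto a gs:// base path without doubling slashes.

    Mirrors the fix above: strip a trailing '/' from the accumulated path
    and a leading '/' from each component, so that joining 'gs://mybucket/'
    with 'myfile' yields 'gs://mybucket/myfile', not 'gs://mybucket//myfile'.
    """
    if not basepath.startswith('gs://'):
        raise ValueError('Basepath %r must be a GCS path.' % basepath)
    path = basepath
    for p in paths:
        path = path.rstrip('/') + '/' + p.lstrip('/')
    return path
```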

http://git-wip-us.apache.org/repos/asf/beam/blob/9b7bbc2d/sdks/python/apache_beam/runners/dataflow/internal/apiclient.py
--
diff --git a/sdks/python/apache_beam/runners/dataflow/internal/apiclient.py 
b/sdks/python/apache_beam/runners/dataflow/internal/apiclient.py
index 24e1129..d95b33f 100644
--- a/sdks/python/apache_beam/runners/dataflow/internal/apiclient.py
+++ b/sdks/python/apache_beam/runners/dataflow/internal/apiclient.py
@@ -30,9 +30,9 @@ from datetime import datetime
 from apitools.base.py import encoding
 from apitools.base.py import exceptions
 
-from apache_beam import utils
 from apache_beam.internal.gcp.auth import get_service_credentials
 from apache_beam.internal.gcp.json_value import to_json_value
+from apache_beam.io.filesystems_util import get_filesystem
 from apache_beam.io.gcp.internal.clients import storage
 from apache_beam.runners.dataflow.internal import dependency
 from apache_beam.runners.dataflow.internal.clients import dataflow
@@ -336,10 +336,12 @@ class Job(object):
 # for GCS staging locations where the potential for such clashes is high.
 if self.google_cloud_options.staging_location.startswith('gs://'):
   path_suffix = '%s.%f' % (self.google_cloud_options.job_name, time.time())
-  self.google_cloud_options.staging_location = utils.path.join(
+  filesystem = 

[2/2] beam git commit: This closes #2643

2017-04-23 Thread altay
This closes #2643


Project: http://git-wip-us.apache.org/repos/asf/beam/repo
Commit: http://git-wip-us.apache.org/repos/asf/beam/commit/a6701973
Tree: http://git-wip-us.apache.org/repos/asf/beam/tree/a6701973
Diff: http://git-wip-us.apache.org/repos/asf/beam/diff/a6701973

Branch: refs/heads/master
Commit: a67019739dc7f09a8336b9606c3726ad5d546f51
Parents: 1dce98f 9b7bbc2
Author: Ahmet Altay 
Authored: Sun Apr 23 12:54:48 2017 -0700
Committer: Ahmet Altay 
Committed: Sun Apr 23 12:54:48 2017 -0700

--
 sdks/python/apache_beam/io/gcp/gcsfilesystem.py |  5 +-
 .../apache_beam/io/gcp/gcsfilesystem_test.py| 10 ++-
 .../runners/dataflow/internal/apiclient.py  |  8 ++-
 .../runners/dataflow/internal/dependency.py | 23 ---
 .../dataflow/internal/dependency_test.py|  5 +-
 sdks/python/apache_beam/utils/__init__.py   |  4 --
 sdks/python/apache_beam/utils/path.py   | 46 -
 sdks/python/apache_beam/utils/path_test.py  | 70 
 8 files changed, 31 insertions(+), 140 deletions(-)
--




[jira] [Resolved] (BEAM-1836) Reopen BigQuery utils after #2271

2017-04-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/BEAM-1836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismaël Mejía resolved BEAM-1836.

   Resolution: Fixed
Fix Version/s: Not applicable

Resolving since the fix is already merged.

> Reopen BigQuery utils after #2271
> -
>
> Key: BEAM-1836
> URL: https://issues.apache.org/jira/browse/BEAM-1836
> Project: Beam
>  Issue Type: Bug
>  Components: sdk-java-core
>Reporter: Rafal Wojdyla
>Assignee: Rafal Wojdyla
> Fix For: Not applicable
>
>
> https://github.com/apache/beam/pull/2271 splits BigQueryIO; it should not change 
> any functionality, but it closes off some of the useful BigQuery utils like:
>  * {{toTableSpec}}
>  * {{parseTableSpec}}
> We in Scio use these and would prefer not to reimplement them.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (BEAM-1520) Implement TFRecordIO (Reading/writing Tensorflow Standard format)

2017-04-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/BEAM-1520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismaël Mejía resolved BEAM-1520.

   Resolution: Fixed
Fix Version/s: First stable release

Marking as fixed since this seems to have already been merged.

> Implement TFRecordIO (Reading/writing Tensorflow Standard format)
> -
>
> Key: BEAM-1520
> URL: https://issues.apache.org/jira/browse/BEAM-1520
> Project: Beam
>  Issue Type: Improvement
>  Components: sdk-java-core
>Affects Versions: 0.5.0
>Reporter: Neville Li
>Assignee: Neville Li
>Priority: Minor
> Fix For: First stable release
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] beam pull request #2645: Fix dataflow staging path to be unique

2017-04-23 Thread sb2nov
Github user sb2nov closed the pull request at:

https://github.com/apache/beam/pull/2645


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Jenkins build is back to normal : beam_PostCommit_Python_Verify #1964

2017-04-23 Thread Apache Jenkins Server
See 




Jenkins build became unstable: beam_PostCommit_Java_ValidatesRunner_Spark #1754

2017-04-23 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PerformanceTests_JDBC #143

2017-04-23 Thread Apache Jenkins Server
See 


Changes:

[lcwik] Validates that input and output GCS paths specify a bucket

[kirpichov] Makes cachedSplitResult transient in BigQuerySourceBase

[lcwik] Update Dataflow Worker Version

[tgroh] Coder.structuralValue(T) should never throw

[chamikara] [BEAM-1988] Add join operation to the filesystem

[chamikara] Change dataflow Job log from info to debug

[tgroh] Remove DeterministicStandardCoder

[lcwik] [BEAM-1786] Post Dataflow worker CoderRegistry clean-up

[lcwik] [BEAM-1871] Remove unnecessary runtime dependencies for Google Cloud

[lcwik] [BEAM-1871] Move Xml IO and related classes to new sdks/java/io/xml

[kirpichov] Introduces GenerateSequence transform

[kirpichov] Replaces all usages of CountingInput with GenerateSequence

[kirpichov] Deletes CountingInput

[kirpichov] Replaces fromTo() with from().to()

[tgroh] Make SimplePCollectionView Visible

[jbonofre] [BEAM-2044] Upgrade HBaseIO to use HBase client version 1.3.1

[kirpichov] [BEAM-1428] KinesisIO should comply with PTransform style guide

[jbonofre] [BEAM-2044] Downgrade HBaseIO to use the stable HBase client version

--
[...truncated 1.86 MB...]
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory$3.typedApply(MapTaskExecutorFactory.java:261)
at 
com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:55)
at 
com.google.cloud.dataflow.worker.graph.Networks$TypeSafeNodeFunction.apply(Networks.java:43)
at 
com.google.cloud.dataflow.worker.graph.Networks.replaceDirectedNetworkNodes(Networks.java:78)
at 
com.google.cloud.dataflow.worker.runners.worker.MapTaskExecutorFactory.create(MapTaskExecutorFactory.java:152)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.doWork(DataflowWorker.java:272)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:244)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:127)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
at 
com.google.cloud.dataflow.worker.runners.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:94)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
(dd4f0246ea2fbe13): java.lang.RuntimeException: 
org.apache.beam.sdk.util.UserCodeException: org.postgresql.util.PSQLException: 
The connection attempt failed.
[... same com.google.cloud.dataflow.worker / java.util.concurrent stack 
frames as in the trace above ...]
Caused by: org.apache.beam.sdk.util.UserCodeException: 
org.postgresql.util.PSQLException: The connection attempt failed.
at 
org.apache.beam.sdk.util.UserCodeException.wrap(UserCodeException.java:36)
at 
org.apache.beam.sdk.io.jdbc.JdbcIO$Read$ReadFn$auxiliary$PJhwRObb.invokeSetup(Unknown
 Source)
at 

Build failed in Jenkins: beam_PerformanceTests_Dataflow #337

2017-04-23 Thread Apache Jenkins Server
See 


Changes:

[lcwik] Validates that input and output GCS paths specify a bucket

[kirpichov] Makes cachedSplitResult transient in BigQuerySourceBase

[lcwik] Update Dataflow Worker Version

[tgroh] Coder.structuralValue(T) should never throw

[chamikara] [BEAM-1988] Add join operation to the filesystem

[chamikara] Change dataflow Job log from info to debug

[tgroh] Remove DeterministicStandardCoder

[lcwik] [BEAM-1786] Post Dataflow worker CoderRegistry clean-up

[lcwik] [BEAM-1871] Remove unnecessary runtime dependencies for Google Cloud

[lcwik] [BEAM-1871] Move Xml IO and related classes to new sdks/java/io/xml

[kirpichov] Introduces GenerateSequence transform

[kirpichov] Replaces all usages of CountingInput with GenerateSequence

[kirpichov] Deletes CountingInput

[kirpichov] Replaces fromTo() with from().to()

[tgroh] Make SimplePCollectionView Visible

[jbonofre] [BEAM-2044] Upgrade HBaseIO to use HBase client version 1.3.1

[kirpichov] [BEAM-1428] KinesisIO should comply with PTransform style guide

[jbonofre] [BEAM-2044] Downgrade HBaseIO to use the stable HBase client version

--
[...truncated 301.79 KB...]
error: unable to resolve reference refs/remotes/origin/pr/2565/merge: No such 
file or directory
 ! 8d8deaa...0d15126 refs/pull/2565/merge -> origin/pr/2565/merge  (unable to 
update local ref)
[... identical "unable to resolve reference" / "unable to update local ref" 
errors for origin/pr/2577 through origin/pr/2602 elided ...]

[jira] [Updated] (BEAM-2057) Test metrics are reported to Spark Metrics sink.

2017-04-23 Thread Aviem Zur (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aviem Zur updated BEAM-2057:

Description: 
Test that metrics are reported to Spark's metric sink.

Use {{InMemoryMetrics}} and {{InMemoryMetricsSinkRule}} similarly to the 
{{NamedAggregatorsTest}}, which tests that aggregators are reported to Spark's 
metrics sink (aggregators are being removed, so this test should be in a 
separate class).

For an example of how to create a pipeline with metrics, take a look at 
{{MetricsTest}}.

  was:
Test that metrics are reported to Spark's metric sink.

Use {{InMemoryMetrics}} and {{InMemoryMetricsSinkRule}} similarly to the 
{{NamedAggregatorsTest}}, which tests that aggregators are reported to Spark's 
metrics sink (aggregators are being removed, so this test should be in a 
separate class).


> Test metrics are reported to Spark Metrics sink.
> 
>
> Key: BEAM-2057
> URL: https://issues.apache.org/jira/browse/BEAM-2057
> Project: Beam
>  Issue Type: Test
>  Components: runner-spark
>Reporter: Aviem Zur
>  Labels: newbie, starter
>
> Test that metrics are reported to Spark's metric sink.
> Use {{InMemoryMetrics}} and {{InMemoryMetricsSinkRule}} similarly to the 
> {{NamedAggregatorsTest}}, which tests that aggregators are reported to Spark's 
> metrics sink (aggregators are being removed, so this test should be in a 
> separate class).
> For an example of how to create a pipeline with metrics, take a look at 
> {{MetricsTest}}.
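The test shape described above can be sketched in plain Python (a toy stand-in, not the Spark runner's actual {{InMemoryMetrics}} API): a processing step reports a counter to a sink, and the test inspects the sink after the run.

```python
class InMemorySink:
    """Toy stand-in for a metrics sink: records everything reported to it."""
    def __init__(self):
        self.metrics = {}

    def report(self, name, value):
        # Accumulate counter-style metrics by name.
        self.metrics[name] = self.metrics.get(name, 0) + value

def run_counting_step(elements, sink):
    """Process elements, incrementing a counter on the sink for each one."""
    for _ in elements:
        sink.report('elements_processed', 1)
    return list(elements)

sink = InMemorySink()
run_counting_step([1, 2, 3], sink)
# The test then asserts that the metric actually reached the sink.
```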



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (BEAM-2057) Test metrics are reported to Spark Metrics sink.

2017-04-23 Thread Aviem Zur (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-2057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aviem Zur reassigned BEAM-2057:
---

Assignee: (was: Amit Sela)

> Test metrics are reported to Spark Metrics sink.
> 
>
> Key: BEAM-2057
> URL: https://issues.apache.org/jira/browse/BEAM-2057
> Project: Beam
>  Issue Type: Test
>  Components: runner-spark
>Reporter: Aviem Zur
>  Labels: newbie, starter
>
> Test that metrics are reported to Spark's metric sink.
> Use {{InMemoryMetrics}} and {{InMemoryMetricsSinkRule}} similarly to the 
> {{NamedAggregatorsTest}}, which tests that aggregators are reported to Spark's 
> metrics sink (aggregators are being removed, so this test should be in a 
> separate class).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (BEAM-2057) Test metrics are reported to Spark Metrics sink.

2017-04-23 Thread Aviem Zur (JIRA)
Aviem Zur created BEAM-2057:
---

 Summary: Test metrics are reported to Spark Metrics sink.
 Key: BEAM-2057
 URL: https://issues.apache.org/jira/browse/BEAM-2057
 Project: Beam
  Issue Type: Test
  Components: runner-spark
Reporter: Aviem Zur
Assignee: Amit Sela


Test that metrics are reported to Spark's metric sink.

Use {{InMemoryMetrics}} and {{InMemoryMetricsSinkRule}} similarly to the 
{{NamedAggregatorsTest}}, which tests that aggregators are reported to Spark's 
metrics sink (aggregators are being removed, so this test should be in a 
separate class).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (BEAM-2056) Add tests for exporting Beam Metrics to Flink Metrics

2017-04-23 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/BEAM-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned BEAM-2056:
--

Assignee: (was: Aljoscha Krettek)

> Add tests for exporting Beam Metrics to Flink Metrics
> -
>
> Key: BEAM-2056
> URL: https://issues.apache.org/jira/browse/BEAM-2056
> Project: Beam
>  Issue Type: Improvement
>  Components: runner-flink
>Reporter: Aljoscha Krettek
> Fix For: First stable release
>
>
> There are currently no tests verifying that metrics reported through the Beam 
> Metrics API are forwarded to Flink and on to a {{MetricReporter}}.
> A test for this would have to manually configure a Flink "Mini Cluster", as in
> {code}
> // start also a re-usable Flink mini cluster
> flink = new LocalFlinkMiniCluster(getFlinkConfiguration(), false);
> flink.start();
> flinkPort = flink.getLeaderRPCPort();
> {code}
> with {{getFlinkConfiguration()}}:
> {code}
> protected static Configuration getFlinkConfiguration() {
> Configuration flinkConfig = new Configuration();
> flinkConfig.setInteger(ConfigConstants.LOCAL_NUMBER_TASK_MANAGER, 1);
> flinkConfig.setInteger(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS, 8);
> flinkConfig.setInteger(ConfigConstants.TASK_MANAGER_MEMORY_SIZE_KEY, 16);
> flinkConfig.setString(ConfigConstants.RESTART_STRATEGY_FIXED_DELAY_DELAY, 
> "0 s");
> flinkConfig.setString(ConfigConstants.METRICS_REPORTERS_LIST, 
> "my_reporter");
> flinkConfig.setString(ConfigConstants.METRICS_REPORTER_PREFIX + 
> "my_reporter." + ConfigConstants.METRICS_REPORTER_CLASS_SUFFIX, 
> MyTestReporter.class.getName());
> return flinkConfig;
> }
> {code}
> where {{MyTestReporter}} is a {{MetricReporter}} that stores metrics being 
> reported to it so we can verify that they are there after the job finishes.
> Running a Pipeline on the mini cluster should be possible by specifying 
> "localhost" and the port we received as a cluster endpoint.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: beam_PostCommit_Python_Verify #1963

2017-04-23 Thread Apache Jenkins Server
See 


Changes:

[jbonofre] [BEAM-2044] Downgrade HBaseIO to use the stable HBase client version

--
[...truncated 739.35 KB...]
pik.dump(obj)
  File "/usr/lib/python2.7/pickle.py", line 224, in dump
self.save(obj)
  File "/usr/lib/python2.7/pickle.py", line 331, in save
self.save_reduce(obj=obj, *rv)
[... repetitive recursion through pickle.py save / save_dict / 
_batch_setitems / _batch_appends frames and dill's save_module_dict 
elided ...]

Jenkins build is back to stable : beam_PostCommit_Java_ValidatesRunner_Spark #1753

2017-04-23 Thread Apache Jenkins Server
See