[Announcement] Java container Gradle build task changed to versioned task name

2020-11-17 Thread Emily Ye
*tl;dr:* sdks:java:container:docker -> sdks:java:container:java8:docker

Hi dev@,

Starting from https://github.com/apache/beam/pull/13211, we have separated
the Java docker build tasks for Java containers by version. Please use the
newly versioned Java Docker image build tasks
":sdks:java:container:java8:docker"
or ":sdks:java:container:java11:docker".

Previously, :sdks:java:container:docker was used to build both containers
(with Java 11 selected via a flag).
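For anyone updating build scripts, the new invocations would look like the
following (run from the repository root via the usual `./gradlew` wrapper;
sketch only, your local setup may differ):

```shell
# Build the versioned Java SDK containers with the new task names.
./gradlew :sdks:java:container:java8:docker
./gradlew :sdks:java:container:java11:docker
```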

A release of the Java 11 container will also follow soon.

Thanks,
Emily


Re: [REMOTE WORKSHOPS] Introduction to Apache Beam - remote workshops Dec 3rd and Dec 10th

2020-11-17 Thread Pablo Estrada
+dev@ so everyone will know.
This is cool. Thanks Karolina! Will these be an introduction to basic Beam
concepts?
Thanks!
-P.

On Mon, Nov 16, 2020 at 11:52 AM Karolina Rosół 
wrote:

> Hello everyone,
>
> You may not know me but I'm Karolina Rosół, Head of Cloud & OSS at Polidea
> and I'm working with great Apache Beam committers Michał Walenia & Kamil
> Wasilewski who will be carrying out the introductory remote workshops to
> Apache Beam on *Dec 3rd* and *Dec 10th*.
>
> If you're interested in taking part in the workshop, feel free to have a
> look at the Warsaw Beam Meetup
>  page
> or enroll directly -> bit.ly/BeamWorkshops 
>
> Thanks,
>
> Karolina Rosół
> Polidea  | Head of Cloud & OSS
>
> M: +48 606 630 236
> E: karolina.ro...@polidea.com
>


Re: PTransform Annotations Proposal

2020-11-17 Thread Robert Bradshaw
So far we have two distinct usecases for annotations: resource hints
and privacy directives, and I've been trying to figure out how to
reconcile them, but they seem to have very different characteristics.
(It would be nice to come up with other uses as well to see if we're
really coming up with a generally useful mode--I think display data
could fit into this as a new kind of annotation rather than being a
top-level property, and it could make sense on both leaf and composite
transforms.)

To me, resource hints like GPU are inextricably tied to the
environment. A transform tagged with GPU should reference a Fn that
invokes GPU-accelerated code that lives in a particular environment.
Something like high-mem is a bit squishier. Some DoFns take a lot of
memory, but on the other hand one could imagine labeling a CoGBK as
high-mem due to knowing that, in this particular usage, there will be
lots of values with the same key. Ideally runners would be intelligent
enough to automatically learn memory usage, but even in this case it
may be a good hint to try and learn the requirements for DoFn A and
DoFn B separately (which is difficult if they are always colocated,
but valuable if, e.g. A takes a huge amount of memory and B takes a
huge amount of wall time).

Note that tying things to the environment does not preclude using them
in non-portable runners as they'll still have an SDK-level
representation (though I don't think we should have an explicit goal
of feature parity for non-portable runners; e.g., multi-language isn't
happening there, and I hope non-portable runners go away soon anyway).

Now let's consider privacy annotations. To make things very concrete,
imagine a transform AverageSpendPerZipCode which takes as input (user,
zip, spend), all users unique, and returns (zip, avg(spend)). In
Python, this is GroupBy('zip').aggregate_field('spend',
MeanCombineFn()). This is not very privacy preserving to those users
who are the only (or one of a few) in a zip code. So we could define a
transform PrivacyPreservingAverageSpendPerZipCode as

@ptransform_fn
def PrivacyPreservingAverageSpendPerZipCode(spend_per_user, threshold):
  counts_per_zip = spend_per_user | GroupBy('zip').aggregate_field(
      'user', CountCombineFn())
  spend_per_zip = spend_per_user | GroupBy('zip').aggregate_field(
      'spend', MeanCombineFn())
  filtered = spend_per_zip | beam.Filter(
      lambda x, counts: counts[x.zip] > threshold,
      counts=AsMap(counts_per_zip))
  return filtered

We now have a composite that has privacy preserving properties (i.e.
the input may be quite sensitive, but the output is not, depending on
the value of threshold). What is interesting here is that it is only
the composite that has this property--no individual sub-transform is
itself privacy preserving. Furthermore, an optimizer may notice we're
doing aggregation on the same key twice and rewrite this using
(logically)

GroupBy('zip').aggregate_field('user', CountCombineFn())
    .aggregate_field('spend', MeanCombineFn())

and then applying the filter, which is semantically equivalent and
satisfies the privacy annotations (and notably that does not even
require the optimizer to interpret the annotations, just pass them
on). To me, this implies that these annotations belong on the
composites, and *not* on the leaf nodes (where they would be
incorrect).
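To make the rewrite concrete, here is a plain-Python sketch (not Beam code;
the function name and data are illustrative) of the fused form's semantics:
a single pass per zip computes both the user count and the mean spend, and
the filter then drops groups that do not exceed the threshold:

```python
from collections import defaultdict

def private_avg_spend_per_zip(rows, threshold):
    """Illustrative fused aggregation: one grouping computes both the
    user count and the spend sum per zip, then small groups are dropped."""
    totals = defaultdict(lambda: [0, 0.0])  # zip -> [user_count, spend_sum]
    for user, zip_code, spend in rows:
        totals[zip_code][0] += 1
        totals[zip_code][1] += spend
    # Keep only zips with more than `threshold` users, as in the Filter step.
    return {z: s / n for z, (n, s) in totals.items() if n > threshold}

rows = [("u1", "94105", 10.0), ("u2", "94105", 30.0),
        ("u3", "94105", 20.0), ("u4", "10001", 100.0)]
print(private_avg_spend_per_zip(rows, 2))  # → {'94105': 20.0}
```

The single-zip-code group ("10001") is suppressed, which is exactly the
privacy property that holds only at the composite level.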

I'll leave aside most questions of API until we figure out the model
semantics, but wanted to throw one possible idea out (though I am
ambivalent about it). Instead of attaching things to transforms, we
can just wrap transforms in composites that have no role other than
declaring information about their contents. E.g. we could have a
composite transform whose payload is simply an assertion of the
privacy (or resource?) properties of its inner structure. This would
be just as expressive as adding new properties to transforms
themselves (but would add an extra level of nesting, and make
respecting the precise nesting more important).

On Tue, Nov 17, 2020 at 8:12 AM Robert Burke  wrote:
>
> +1 to discussing PCollection annotations on a separate thread. It would be 
> confusing to mix them up.
>
> ---
>
> The question around conflicts is interesting, but confusing to me. I don't 
> think they exist in general. I keep coming back around to that it depends on 
> the annotation and the purpose of composites. Optionality saves us here too.
>
> Composites are nothing without their internal hypergraph structure. 
> Eventually it comes down to executing the leaf nodes. The alternative to 
> executing the leaf nodes is when the composite represents a known transform 
> and is replaced by the runner on submission time.  Lets look at each.
>
> If there's a property that only exists on the leaf nodes, then it's not 
> possible to bubble up that property to the composite in all cases. Afterall, 
> it's not necessarily the case that a privacy preserving transform maintains 
> the property for all output edges as not all such 

Re: CCOSS - English to Spanish translations contribution to open source

2020-11-17 Thread Ahmet Altay
This sounds great and it might be better discussed on dev@. I guess the
question is how can we incorporate different translations to the Beam
website. As far as I understand, the web framework we use has support for
this but we are not using it.
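For what it's worth, if the site is built with Hugo (an assumption about the
current setup), enabling translations is mostly a configuration change; a
minimal sketch might look like:

```toml
# Hypothetical snippet for the site's Hugo config: serve English as the
# default language and add a Spanish content tree under content/es/.
defaultContentLanguage = "en"

[languages]
  [languages.en]
    contentDir = "content/en"
    weight = 1
  [languages.es]
    contentDir = "content/es"
    weight = 2
```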

+Griselda Cuevas and @Rose Nguyen, who might know more.

On Tue, Nov 17, 2020 at 9:12 AM Alma Rinasz  wrote:

> I sent the below email to the u...@beam.apache.org
>  address
> but it bounced back. Please advise.
>

If you are not already subscribed, you might try subscribing to the user@
list and sending again.


> Thank you.
>
> On Tue, Nov 17, 2020 at 9:20 AM Alma Rinasz  wrote:
>
>>
>> Hi!
>>
>> My name is Alma and I am a developer advocate with Software Guru, Latin
>> America's premier developer relations company.
>>
>> We are also a part of the organizing team for CCOSS, Cumbre de
>> Contribuidores de Open Source and we recently held a contribution sprint
>> where we translated various documents for different open source projects.
>>
>> I am writing to ask for guidance on how to add this contribution to the
>> project and copying Eryx Paredes Camacho who has been helping us follow up
>> with all the pull requests and contributions. Attached you will find the
>> copy of the translation of the original document, which was taken from
>> here:
>> https://github.com/apache/beam/blob/master/website/www/site/content/en/contribute/_index.md
>>  Beam Contribution Guide -Spanish
>> 
>>
>>
>> Please let me know how to proceed as we are anxious to complete our
>> contribution to the open source community.
>>
>> Looking forward to hearing from you!
>> --
>>
>> *Warm regards, *
>> *¡Saludos!*
>>
>> *Alma Maria Rinasz (she/her)*
>> Developer Advocate | Cybersecurity Education & Awareness
>> US: +1 669 285 9753 | MX: +52 443 305 1776 | Linked In
>> | a...@sg.com.mx
>>
>>


Add your use case to the "Powered By" Page

2020-11-17 Thread Gris Cuevas
Hi folks, 

As part of the Beam Website redesign, we're going to revamp the "Powered By" 
page, and this is an opportunity for you to add your use case to the list of 
projects using Apache Beam. 

If you wish to add your project, please add a subtask in Jira under BEAM-11225 [1] 
with the following information: 

- Company or project name
- One sentence description of your company/project
- One short paragraph of your use case (150 words max)
- Logo to display 
- (optional) Links to any public presentation, blog or publication

If you don't have a Jira account, you can reply to this thread with this 
information. 

Thank you!

[1] https://issues.apache.org/jira/browse/BEAM-11225


Re: CCOSS - English to Spanish translations contribution to open source

2020-11-17 Thread Alma Rinasz
I sent the below email to the u...@beam.apache.org
 address
but it bounced back. Please advise.
Thank you.

On Tue, Nov 17, 2020 at 9:20 AM Alma Rinasz  wrote:

>
> Hi!
>
> My name is Alma and I am a developer advocate with Software Guru, Latin
> America's premier developer relations company.
>
> We are also a part of the organizing team for CCOSS, Cumbre de
> Contribuidores de Open Source and we recently held a contribution sprint
> where we translated various documents for different open source projects.
>
> I am writing to ask for guidance on how to add this contribution to the
> project and copying Eryx Paredes Camacho who has been helping us follow up
> with all the pull requests and contributions. Attached you will find the
> copy of the translation of the original document, which was taken from
> here:
> https://github.com/apache/beam/blob/master/website/www/site/content/en/contribute/_index.md
>  Beam Contribution Guide -Spanish
> 
>
>
> Please let me know how to proceed as we are anxious to complete our
> contribution to the open source community.
>
> Looking forward to hearing from you!
> --
>
> *Warm regards, *
> *¡Saludos!*
>
> *Alma Maria Rinasz (she/her)*
> Developer Advocate | Cybersecurity Education & Awareness
> US: +1 669 285 9753 | MX: +52 443 305 1776 | Linked In
> | a...@sg.com.mx
>
>


Re: PTransform Annotations Proposal

2020-11-17 Thread Robert Burke
+1 to discussing PCollection annotations on a separate thread. It would be
confusing to mix them up.

---

The question around conflicts is interesting, but confusing to me. I don't
think they exist in general. I keep coming back to the idea that it depends
on the annotation and the purpose of composites. Optionality saves us here
too.

Composites are nothing without their internal hypergraph structure.
Eventually it comes down to executing the leaf nodes. The alternative to
executing the leaf nodes is when the composite represents a known transform
and is replaced by the runner at submission time. Let's look at each.

If there's a property that only exists on the leaf nodes, then it's not
possible to bubble up that property to the composite in all cases.
After all, it's not necessarily the case that a privacy-preserving transform
maintains the property for all output edges as not all such edges pass
through the preserving transform.

On the other hand, memory or GPU recommendations might set a low bar at the
composite level.

But, composites (any transform really) can be runner replaced. I think it's
fair to say that a runner replaced composite is not beholden to the
annotations of the original leaf transforms, especially around physical
requirements. The implementations are different. If a known composite
requires GPUs at the composite level and its known replacement doesn't, I'd
posit that replacement was a choice the runner made since it can't
provision machines with GPUs.

But, crucially for privacy-annotated transforms, a runner likely
shouldn't replace a given subgraph that contains a privacy-annotated
transform unless the replacement provides the same level of privacy.
However, such replacements only happen with well-known transforms with
known properties anyway, so this can serve as an additional layer of
validation for a runner aware of the properties.

This brings me back to my position: that the notion of conflicts is very
annotation-dependent, and that defining annotations as optional is the most
important feature for avoiding issues. Conflicts don't exist as an inherent
property of annotations on PTransforms or of the hypergraph structure. Am I
wrong? No one has come up with an actual example of a conflict, as far as I
understand the thread.

Even Reuven's original question is more about whether the runner is forced
to look at leaf nodes rather than only looking at the composite. Assuming
the composite isn't replaced, the runner needs to look at the leaf nodes
regardless. And as discussed above there's no generalized semantics that
fit for all kinds of annotations, once replacements are also considered.

On Tue, Nov 17, 2020, 6:35 AM Ismaël Mejía  wrote:

> +1 Nice to see there is finally interest on this. Annotations for
> PTransforms make total sense!
>
> The semantics should be strictly optional for runners and correct
> execution should not be affected by lack of support of any annotation.
> We should however keep the set of annotations small.
>
> > PTransforms are hierarchical - namely a PTransform contains other
> PTransforms, and so on. Is the runner expected to resolve all annotations
> down to leaf nodes? What happens if that results in conflicting annotations?
>
> +1 to this question, This needs to be detailed.
>
> I am curious about how the end user APIs of this will look maybe in
> Java or Python, just an extra method to inject a Map or via Java
> annotations/Python decorators?
>
> We might prefer not to mix the concepts of annotations and
> environments because this will limit the scope of annotations.
> Annotations are different from environments because they serve a more
> general idea: to express an intention and it is up to the runner to
> choose the strategy to accomplish this, for example in the GPU
> assignation case it could be to rewrite resource allocation via
> Environments but it could also just delegate this to a resource
> manager which is what we could do for example for GPU (or data
> locality) cases on the Spark/Flink classic runners. If we tie this to
> environments we will leave classic runners out of the loop for no
> major reason and also not cover use cases not related to resource
> allocation.
>
> I do not understand the use case to justify PCollection annotations
> but to not mix this thread with them, would you be interested to
> elaborate more about them in a different thread Jan?
>
> On Tue, Nov 17, 2020 at 2:28 AM Robert Bradshaw 
> wrote:
> >
> > I agree things like GPU, high-mem, etc. belong to the environment. If
> > annotations are truly advisory, one can imagine merging environments
> > by taking the union of annotations and still producing a correct
> > pipeline. (This would mean that annotations would have to be a
> > multi-map...)
> >
> > On the other hand, this doesn't seem to handle the case of privacy
> > analysis, which could apply to composites without applying to any
> > individual component, and don't really make sense as part 

Re: PTransform Annotations Proposal

2020-11-17 Thread Ismaël Mejía
+1 Nice to see there is finally interest in this. Annotations for
PTransforms make total sense!

The semantics should be strictly optional for runners, and correct
execution should not be affected by a lack of support for any annotation.
We should, however, keep the set of annotations small.

> PTransforms are hierarchical - namely a PTransform contains other 
> PTransforms, and so on. Is the runner expected to resolve all annotations 
> down to leaf nodes? What happens if that results in conflicting annotations?

+1 to this question. This needs to be detailed.

I am curious how the end-user APIs for this will look, maybe in
Java or Python: just an extra method to inject a Map, or via Java
annotations/Python decorators?

We might prefer not to mix the concepts of annotations and
environments because this will limit the scope of annotations.
Annotations are different from environments because they serve a more
general idea: to express an intention, leaving it up to the runner to
choose the strategy to accomplish it. For example, in the GPU assignment
case the runner could rewrite resource allocation via Environments, or it
could simply delegate this to a resource manager, which is what we could do
for GPU (or data-locality) cases on the Spark/Flink classic runners. If we
tie this to environments we will leave classic runners out of the loop for
no major reason, and also fail to cover use cases not related to resource
allocation.

I do not understand the use case justifying PCollection annotations, but
so as not to mix this thread with them, would you be interested in
elaborating on them in a different thread, Jan?

On Tue, Nov 17, 2020 at 2:28 AM Robert Bradshaw  wrote:
>
> I agree things like GPU, high-mem, etc. belong to the environment. If
> annotations are truly advisory, one can imagine merging environments
> by taking the union of annotations and still producing a correct
> pipeline. (This would mean that annotations would have to be a
> multi-map...)
>
> On the other hand, this doesn't seem to handle the case of privacy
> analysis, which could apply to composites without applying to any
> individual component, and don't really make sense as part of a
> fusion/execution story.
>
> On Mon, Nov 16, 2020 at 4:00 PM Robert Burke  wrote:
> >
> > That's good historical context.
> >
> > But then we'd still need to codify that the annotation would be 
> > optional, and not affect correctness.
> >
> > Conflicts become easier to manage, (as environments with conflicting 
> > annotations simply don't get merged, and stay as distinct environments) but 
> > are still notionally annotation dependant. Do most runners handle 
> > environments so individually though?
> >
> > Reuven's question is a good one though. For the Go SDK, and the proposed 
> > implementation i saw, they only applied to leaf nodes. This is an artifact 
> > of how the Go SDK handles composites. Nothing stops it from being 
> > implemented on the composites Go has, but it didn't make sense otherwise. 
> > AFAICT Composites are generally for organizational convenience and not for 
> > functional aspects. Is this wrong? After all, does it make sense for 
> > environments to be on leaves and composites either? It's the same issue 
> > there.
> >
> >
> > On Mon, Nov 16, 2020, 3:45 PM Kenneth Knowles  wrote:
> >>
> >> I am +1 to the proposal but believe it should be moved to the Environment. 
> >> I could be convinced otherwise, but would want to really understand the 
> >> details.
> >>
> >> I think we haven't done a great job communicating the purpose of the 
> >> Environment proto. It was explicitly created for this purpose.
> >>
> >> 1. It tells the runner things it needs to know to interpret the DoFn (or 
> >> other UDF). So these are the existing proto fields like docker image (in 
> >> the payload) and required artifacts that were staged.
> >> 2. It is also the place for additional requirements or hints like "high 
> >> mem" or "GPU" etc.
> >>
> >> Every user function has an associated environment, and environments exist 
> >> only for the purpose of executing user functions. In fact, Environment 
> >> originated as inline requirements/attributes for a user function proto 
> >> message and was separated just to make the proto smaller.
> >>
> >> A PTransform is an abstract concept for organizing the graph, not an 
> >> executable thing. So a hint/capability/requirement on a PTransform only 
> >> really makes sense as a scoping mechanism for applying a hint to a bunch 
> >> of functions within a subgraph. This seems like a user interface concern 
> >> and the SDK should own propagating the hints. If the hint truly applies to 
> >> the whole PTransform and *not* the parts, then I am interested in learning 
> >> about that.
> >>
> >> Kenn
> >>
> >> On Mon, Nov 16, 2020 at 10:54 AM Robert Burke  wrote:
> >>>
> >>> That's a good question.
> >>>
> >>> I think the main difference is a matter of scope. Annotations would apply 
> >>> to a PTransform 

Re: Question about LOGICAL_AND

2020-11-17 Thread Sonam Ramchand
Hi Devs,
Following the guidelines, I implemented a CombineFn for LOGICAL_AND in
https://github.com/sonam-vend/beam/commit/9ad8ee1d8fa617aca7fcafc8e7efe8bf388b3afb,
but I am getting:


java.lang.ClassCastException:
org.apache.beam.sdk.extensions.sql.zetasql.translation.SqlOperators$1
cannot be cast to
org.apache.beam.vendor.calcite.v1_20_0.org.apache.calcite.sql.SqlAggFunction
at
org.apache.beam.sdk.extensions.sql.zetasql.translation.AggregateScanConverter.convertAggCall(AggregateScanConverter.java:202)
at
org.apache.beam.sdk.extensions.sql.zetasql.translation.AggregateScanConverter.convert(AggregateScanConverter.java:94)
at
org.apache.beam.sdk.extensions.sql.zetasql.translation.AggregateScanConverter.convert(AggregateScanConverter.java:50)
at
org.apache.beam.sdk.extensions.sql.zetasql.translation.QueryStatementConverter.convertNode(QueryStatementConverter.java:99)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.Collections$2.tryAdvance(Collections.java:4719)
at java.util.Collections$2.forEachRemaining(Collections.java:4727)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
at
org.apache.beam.sdk.extensions.sql.zetasql.translation.QueryStatementConverter.convertNode(QueryStatementConverter.java:98)
at
org.apache.beam.sdk.extensions.sql.zetasql.translation.QueryStatementConverter.convert(QueryStatementConverter.java:86)
at
org.apache.beam.sdk.extensions.sql.zetasql.translation.QueryStatementConverter.convertRootQuery(QueryStatementConverter.java:52)
at
org.apache.beam.sdk.extensions.sql.zetasql.ZetaSQLPlannerImpl.rel(ZetaSQLPlannerImpl.java:140)
at
org.apache.beam.sdk.extensions.sql.zetasql.ZetaSQLQueryPlanner.convertToBeamRelInternal(ZetaSQLQueryPlanner.java:168)
at
org.apache.beam.sdk.extensions.sql.zetasql.ZetaSQLQueryPlanner.convertToBeamRel(ZetaSQLQueryPlanner.java:156)
at
org.apache.beam.sdk.extensions.sql.zetasql.ZetaSQLQueryPlanner.convertToBeamRel(ZetaSQLQueryPlanner.java:140)
at
org.apache.beam.sdk.extensions.sql.zetasql.ZetaSqlDialectSpecTest.testLogicalAndZetaSQL(ZetaSqlDialectSpecTest.java:4334)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at
org.apache.beam.sdk.testing.TestPipeline$1.evaluate(TestPipeline.java:319)
at
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:266)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
