Re: How to handle errors in GO SDK in a custom PTransform

2023-04-06 Thread Shivam Singhal
Thanks, Danny! This was what I was looking for.

On Wed, 5 Apr 2023 at 18:40, Danny McCormick via user 
wrote:

> The Go SDK doesn't use tagged outputs; instead, it uses positional
> ordering for emitting multiple outputs. So you can do something like:
>
> // isGood stands in for whatever predicate separates good elements from
> // bad ones (element.isGood would not compile on a string).
> func processElements(element string, goodEmit, errEmit func(string)) {
>    if isGood(element) {
>       goodEmit(element)
>    } else {
>       errEmit(element)
>    }
> }
>
> which could then be consumed as:
>
> goodElements, badElements := beam.ParDo2(s, processElements, inputElements)
>
> See https://beam.apache.org/documentation/programming-guide/#output-tags
> for more details.
>
> Thanks,
> Danny
>
> On Wed, Apr 5, 2023 at 7:31 AM Shivam Singhal 
> wrote:
>
>> Hi folks,
>>
>> In Java SDK, we have robust error handling using tagged outputs and
>> PCollectionTuples.
>>
>> Do we have something similar in Go SDK? I have been unable to locate it
>> in the reference docs
>> <https://pkg.go.dev/github.com/apache/beam/sdks/v2/go/pkg/beam>.
>>
>> *A general use case for error handling, for those who might not be
>> familiar with error handling in the Java SDK:*
>> My custom PTransform can throw an error while writing to Redis, and I need
>> to know which key-value pairs were not written to Redis. My PTransform
>> will also output the successfully written key-value pairs, so I need a way
>> to differentiate between successful and error outputs.
>>
>>
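
For reference, Danny's pattern as a self-contained sketch; the isGood
predicate, the sample inputs, and the debug printing are illustrative
assumptions, not part of the thread:

package main

import (
    "context"
    "log"
    "strings"

    "github.com/apache/beam/sdks/v2/go/pkg/beam"
    "github.com/apache/beam/sdks/v2/go/pkg/beam/x/beamx"
    "github.com/apache/beam/sdks/v2/go/pkg/beam/x/debug"
)

// isGood is a stand-in predicate; replace it with real validation logic.
func isGood(element string) bool {
    return !strings.HasPrefix(element, "bad-")
}

// processElements routes each element to the first (good) or second
// (error) output purely by the position of the emitter parameters.
func processElements(element string, goodEmit, errEmit func(string)) {
    if isGood(element) {
        goodEmit(element)
    } else {
        errEmit(element)
    }
}

func main() {
    beam.Init()
    p, s := beam.NewPipelineWithRoot()

    inputElements := beam.Create(s, "a", "b", "bad-c")

    // ParDo2 returns two PCollections, in the same order as the emitters.
    goodElements, badElements := beam.ParDo2(s, processElements, inputElements)
    debug.Print(s, goodElements)
    debug.Print(s, badElements)

    if err := beamx.Run(context.Background(), p); err != nil {
        log.Fatalf("pipeline failed: %v", err)
    }
}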


How to handle errors in GO SDK in a custom PTransform

2023-04-05 Thread Shivam Singhal
Hi folks,

In Java SDK, we have robust error handling using tagged outputs and
PCollectionTuples.

Do we have something similar in Go SDK? I have been unable to locate it in
the reference docs
<https://pkg.go.dev/github.com/apache/beam/sdks/v2/go/pkg/beam>.

*A general use case for error handling, for those who might not be familiar
with error handling in the Java SDK:*
My custom PTransform can throw an error while writing to Redis, and I need
to know which key-value pairs were not written to Redis. My PTransform will
also output the successfully written key-value pairs, so I need a way to
differentiate between successful and error outputs.


Re: Launch Dataflow Flex Templates from Go

2023-02-14 Thread Shivam Singhal
There shouldn’t be much change in the API request irrespective of the SDK
language the pipeline was written in.

On Wed, 15 Feb 2023 at 10:50, Shivam Singhal 
wrote:

> Hey Ashok,
>
> If you already have a flex template file and the docker image built, you
> can use the Dataflow API to run the template.
>
> https://cloud.google.com/dataflow/docs/reference/rest
>
>
> On Wed, 15 Feb 2023 at 04:49, Ashok KS  wrote:
>
>> Hello Beam Community,
>>
>> I have written a Dataflow pipeline using Python SDK and I would be
>> creating a Flex template with it.
>>
>> My task is to launch this Flex Template from Cloud Functions which would
>> be in Go. I found the package below but couldn't find any sample.
>>
>>
>> https://cloud.google.com/dataflow/docs/reference/rest/v1b3/projects.locations.templates/launch
>>
>> I could find examples in Python to launch templates.
>> Can someone please share an example in Go to launch a Dataflow Flex
>> template?
>>
>> Thank you in advance.
>>
>> Regards,
>> Ashok
>>
>


Re: Launch Dataflow Flex Templates from Go

2023-02-14 Thread Shivam Singhal
Hey Ashok,

If you already have a flex template file and the docker image built, you
can use the Dataflow API to run the template.

https://cloud.google.com/dataflow/docs/reference/rest
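
For flex templates specifically, the request goes to the
projects.locations.flexTemplates.launch endpoint. A minimal Go sketch using
the google.golang.org/api/dataflow/v1b3 client might look like the
following; the project, region, template path, and parameters are
placeholders, and exact field names can vary slightly between client
versions:

package main

import (
    "context"
    "log"

    dataflow "google.golang.org/api/dataflow/v1b3"
)

func main() {
    ctx := context.Background()

    // Uses Application Default Credentials, which Cloud Functions provides.
    svc, err := dataflow.NewService(ctx)
    if err != nil {
        log.Fatalf("creating Dataflow service: %v", err)
    }

    req := &dataflow.LaunchFlexTemplateRequest{
        LaunchParameter: &dataflow.LaunchFlexTemplateParameter{
            JobName:              "my-flex-job",
            ContainerSpecGcsPath: "gs://my-bucket/templates/my-template.json",
            Parameters: map[string]string{
                "input": "gs://my-bucket/input.csv",
            },
        },
    }

    resp, err := svc.Projects.Locations.FlexTemplates.
        Launch("my-project", "us-central1", req).Context(ctx).Do()
    if err != nil {
        log.Fatalf("launching flex template: %v", err)
    }
    log.Printf("launched job %s", resp.Job.Id)
}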


On Wed, 15 Feb 2023 at 04:49, Ashok KS  wrote:

> Hello Beam Community,
>
> I have written a Dataflow pipeline using Python SDK and I would be
> creating a Flex template with it.
>
> My task is to launch this Flex Template from Cloud Functions which would
> be in Go. I found the package below but couldn't find any sample.
>
>
> https://cloud.google.com/dataflow/docs/reference/rest/v1b3/projects.locations.templates/launch
>
> I could find examples in Python to launch templates.
> Can someone please share an example in Go to launch a Dataflow Flex
> template?
>
> Thank you in advance.
>
> Regards,
> Ashok
>


Re: Go + Apache Beam GCP Dataflow: Could not find the sink for pubsub, Check that the sink library specifies alwayslink = 1

2023-02-06 Thread Shivam Singhal
I will pick the issue up once the maintainers have triaged it.

On Mon, 6 Feb 2023 at 17:43, Shivam Singhal 
wrote:

> Not sure if there is any solution other than fixing the Go pubsubio
> package.
>
> On Mon, 6 Feb 2023 at 17:41, Ashok KS  wrote:
>
>> Yes, that is where I am getting stuck. I wrote the complete pipeline in
>> Python, which reads from the BQ table and publishes the rows as PubSub
>> messages. I'm able to force it to run as a streaming application by
>> passing --streaming=True. But for my project, they want it in Go, so I had
>> to rewrite the complete logic in Go.
>> I did the same, but got stuck at the last step of publishing to PubSub.
>>
>> On Mon, Feb 6, 2023 at 11:07 PM Shivam Singhal <
>> shivamsinghal5...@gmail.com> wrote:
>>
>>> It depends on the input source: the source decides whether your pipeline
>>> is a streaming or a batch pipeline.
>>>
>>> Since you are querying over a BQ table, the input is finite and, as a
>>> result, your pipeline is a batch pipeline.
>>> I am not sure there is a straightforward way to convert this pipeline
>>> into a streaming pipeline.
>>>
>>>
>>> On Mon, 6 Feb 2023 at 17:32, Ashok KS  wrote:
>>>
>>>> Hi Shivam,
>>>>
>>>> Thanks for that. How can I run the pipeline as a streaming pipeline? In
>>>> Python I could just run the pipeline by passing --streaming=True on the
>>>> command line, but I couldn’t find anything similar in Go.
>>>>
>>>> Any pointers would be appreciated.
>>>>
>>>> Regards,
>>>> Ashok
>>>>
>>>> On Mon, 6 Feb 2023 at 10:59 pm, Shivam Singhal <
>>>> shivamsinghal5...@gmail.com> wrote:
>>>>
>>>>> The issue is not yet verified by the maintainers, but I am pretty sure
>>>>> the pubsubio connector's Write method doesn't work in batch pipelines:
>>>>> the code comments say as much. Check the issue below for the details:
>>>>> https://github.com/apache/beam/issues/25326
>>>>>
>>>>> On Mon, 6 Feb 2023 at 17:26, Ashok KS  wrote:
>>>>>
>>>>>> Hi Shivam,
>>>>>>
>>>>>> Thanks a lot for your response. Yes, it is a batch pipeline. My task
>>>>>> is to read a BigQuery table, process the data, and publish the rows
>>>>>> as PubSub messages.
>>>>>>
>>>>>> Regards,
>>>>>> Ashok
>>>>>>
>>>>>> On Mon, 6 Feb 2023 at 10:52 pm, Shivam Singhal <
>>>>>> shivamsinghal5...@gmail.com> wrote:
>>>>>>
>>>>>>> Hey Ashok KS,
>>>>>>>
>>>>>>> Is this a batch pipeline?
>>>>>>>
>>>>>>> On Mon, 6 Feb 2023 at 09:27, Ashok KS  wrote:
>>>>>>>
>>>>>>>> Hi All,
>>>>>>>>
>>>>>>>> Just sending a reminder in case anyone could help. I haven't
>>>>>>>> received any response to my issue.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Ashok
>>>>>>>>
>>>>>>>> On Fri, Feb 3, 2023 at 12:23 AM Ashok KS 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi All,
>>>>>>>>>
>>>>>>>>> I'm new to using Apache Beam with Go.
>>>>>>>>>
>>>>>>>>> pubsubio.Write(scope, "project", "topic", ppMessages)
>>>>>>>>> When I try to publish a message to a topic I get the error message
>>>>>>>>> "Could not find the sink for pubsub, Check that the sink library
>>>>>>>>> specifies alwayslink = 1".
>>>>>>>>>
>>>>>>>>> I found a Stack Overflow post for the same issue, but it doesn't
>>>>>>>>> solve the problem.
>>>>>>>>>
>>>>>>>>> Stackoverflow Link
>>>>>>>>> <https://stackoverflow.com/questions/69651665/go-apache-beam-gcp-dataflow-could-not-find-the-sink-for-pubsub-check-that-th>
>>>>>>>>>
>>>>>>>>> Can someone please help?
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Ashok
>>>>>>>>>
>>>>>>>>


Re: Go + Apache Beam GCP Dataflow: Could not find the sink for pubsub, Check that the sink library specifies alwayslink = 1

2023-02-06 Thread Shivam Singhal
Not sure if there is any solution other than fixing the Go pubsubio package.

On Mon, 6 Feb 2023 at 17:41, Ashok KS  wrote:

> Yes, that is where I am getting stuck. I wrote the complete pipeline in
> Python, which reads from the BQ table and publishes the rows as PubSub
> messages. I'm able to force it to run as a streaming application by
> passing --streaming=True. But for my project, they want it in Go, so I had
> to rewrite the complete logic in Go.
> I did the same, but got stuck at the last step of publishing to PubSub.
>
> On Mon, Feb 6, 2023 at 11:07 PM Shivam Singhal <
> shivamsinghal5...@gmail.com> wrote:
>
>> It depends on the input source: the source decides whether your pipeline
>> is a streaming or a batch pipeline.
>>
>> Since you are querying over a BQ table, the input is finite and, as a
>> result, your pipeline is a batch pipeline.
>> I am not sure there is a straightforward way to convert this pipeline
>> into a streaming pipeline.
>>
>>
>> On Mon, 6 Feb 2023 at 17:32, Ashok KS  wrote:
>>
>>> Hi Shivam,
>>>
>>> Thanks for that. How can I run the pipeline as a streaming pipeline? In
>>> Python I could just run the pipeline by passing --streaming=True on the
>>> command line, but I couldn’t find anything similar in Go.
>>>
>>> Any pointers would be appreciated.
>>>
>>> Regards,
>>> Ashok
>>>
>>> On Mon, 6 Feb 2023 at 10:59 pm, Shivam Singhal <
>>> shivamsinghal5...@gmail.com> wrote:
>>>
>>>> The issue is not yet verified by the maintainers, but I am pretty sure
>>>> the pubsubio connector's Write method doesn't work in batch pipelines:
>>>> the code comments say as much. Check the issue below for the details:
>>>> https://github.com/apache/beam/issues/25326
>>>>
>>>> On Mon, 6 Feb 2023 at 17:26, Ashok KS  wrote:
>>>>
>>>>> Hi Shivam,
>>>>>
>>>>> Thanks a lot for your response. Yes, it is a batch pipeline. My task
>>>>> is to read a BigQuery table, process the data, and publish the rows as
>>>>> PubSub messages.
>>>>>
>>>>> Regards,
>>>>> Ashok
>>>>>
>>>>> On Mon, 6 Feb 2023 at 10:52 pm, Shivam Singhal <
>>>>> shivamsinghal5...@gmail.com> wrote:
>>>>>
>>>>>> Hey Ashok KS,
>>>>>>
>>>>>> Is this a batch pipeline?
>>>>>>
>>>>>> On Mon, 6 Feb 2023 at 09:27, Ashok KS  wrote:
>>>>>>
>>>>>>> Hi All,
>>>>>>>
>>>>>>> Just sending a reminder in case anyone could help. I haven't
>>>>>>> received any response to my issue.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Ashok
>>>>>>>
>>>>>>> On Fri, Feb 3, 2023 at 12:23 AM Ashok KS 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi All,
>>>>>>>>
>>>>>>>> I'm new to using Apache Beam with Go.
>>>>>>>>
>>>>>>>> pubsubio.Write(scope, "project", "topic", ppMessages)
>>>>>>>> When I try to publish a message to a topic I get the error message
>>>>>>>> "Could not find the sink for pubsub, Check that the sink library
>>>>>>>> specifies alwayslink = 1".
>>>>>>>>
>>>>>>>> I found a Stack Overflow post for the same issue, but it doesn't
>>>>>>>> solve the problem.
>>>>>>>>
>>>>>>>> Stackoverflow Link
>>>>>>>> <https://stackoverflow.com/questions/69651665/go-apache-beam-gcp-dataflow-could-not-find-the-sink-for-pubsub-check-that-th>
>>>>>>>>
>>>>>>>> Can someone please help?
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Ashok
>>>>>>>>
>>>>>>>


Re: Go + Apache Beam GCP Dataflow: Could not find the sink for pubsub, Check that the sink library specifies alwayslink = 1

2023-02-06 Thread Shivam Singhal
It depends on the input source: the source decides whether your pipeline is
a streaming or a batch pipeline.

Since you are querying over a BQ table, the input is finite and, as a
result, your pipeline is a batch pipeline.
I am not sure there is a straightforward way to convert this pipeline into
a streaming pipeline.
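
To make this concrete, here is a hedged sketch; the package paths are from
the Beam Go SDK, while Row, the query, and the names are placeholders. The
pipeline type follows from whether the source is bounded or unbounded:

package main

import (
    "reflect"

    "github.com/apache/beam/sdks/v2/go/pkg/beam"
    "github.com/apache/beam/sdks/v2/go/pkg/beam/io/bigqueryio"
    "github.com/apache/beam/sdks/v2/go/pkg/beam/io/pubsubio"
)

// Row is a placeholder schema for the BigQuery query result.
type Row struct {
    ID      int64  `bigquery:"id"`
    Payload string `bigquery:"payload"`
}

func main() {
    beam.Init()
    p, s := beam.NewPipelineWithRoot()

    // Bounded source: a BigQuery query. A pipeline built on this runs as
    // a batch job on Dataflow.
    rows := bigqueryio.Query(s, "my-project",
        "SELECT id, payload FROM dataset.table", reflect.TypeOf(Row{}))

    // Unbounded source: a Pub/Sub topic. A pipeline built on this runs as
    // a streaming job on Dataflow.
    msgs := pubsubio.Read(s, "my-project", "my-topic", nil)

    _, _, _ = p, rows, msgs // downstream transforms and beamx.Run omitted
}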


On Mon, 6 Feb 2023 at 17:32, Ashok KS  wrote:

> Hi Shivam,
>
> Thanks for that. How can I run the pipeline as a streaming pipeline? In
> Python I could just run the pipeline by passing --streaming=True on the
> command line, but I couldn’t find anything similar in Go.
>
> Any pointers would be appreciated.
>
> Regards,
> Ashok
>
> On Mon, 6 Feb 2023 at 10:59 pm, Shivam Singhal <
> shivamsinghal5...@gmail.com> wrote:
>
>> The issue is not yet verified by the maintainers, but I am pretty sure
>> the pubsubio connector's Write method doesn't work in batch pipelines:
>> the code comments say as much. Check the issue below for the details:
>> https://github.com/apache/beam/issues/25326
>>
>> On Mon, 6 Feb 2023 at 17:26, Ashok KS  wrote:
>>
>>> Hi Shivam,
>>>
>>> Thanks a lot for your response. Yes, it is a batch pipeline. My task is
>>> to read a BigQuery table, process the data, and publish the rows as
>>> PubSub messages.
>>>
>>> Regards,
>>> Ashok
>>>
>>> On Mon, 6 Feb 2023 at 10:52 pm, Shivam Singhal <
>>> shivamsinghal5...@gmail.com> wrote:
>>>
>>>> Hey Ashok KS,
>>>>
>>>> Is this a batch pipeline?
>>>>
>>>> On Mon, 6 Feb 2023 at 09:27, Ashok KS  wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> Just sending a reminder in case anyone could help. I haven't received
>>>>> any response to my issue.
>>>>>
>>>>> Regards,
>>>>> Ashok
>>>>>
>>>>> On Fri, Feb 3, 2023 at 12:23 AM Ashok KS  wrote:
>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>> I'm new to using Apache Beam with Go.
>>>>>>
>>>>>> pubsubio.Write(scope, "project", "topic", ppMessages)
>>>>>> When I try to publish a message to a topic I get the error message
>>>>>> "Could not find the sink for pubsub, Check that the sink library
>>>>>> specifies alwayslink = 1".
>>>>>>
>>>>>> I found a Stack Overflow post for the same issue, but it doesn't
>>>>>> solve the problem.
>>>>>>
>>>>>> Stackoverflow Link
>>>>>> <https://stackoverflow.com/questions/69651665/go-apache-beam-gcp-dataflow-could-not-find-the-sink-for-pubsub-check-that-th>
>>>>>>
>>>>>> Can someone please help?
>>>>>>
>>>>>> Regards,
>>>>>> Ashok
>>>>>>
>>>>>


Re: Go + Apache Beam GCP Dataflow: Could not find the sink for pubsub, Check that the sink library specifies alwayslink = 1

2023-02-06 Thread Shivam Singhal
The issue is not yet verified by the maintainers, but I am pretty sure the
pubsubio connector's Write method doesn't work in batch pipelines: the code
comments say as much. Check the issue below for the details:
https://github.com/apache/beam/issues/25326
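
One workaround I can think of (an untested assumption on my part, not
something confirmed in the issue) is to skip pubsubio.Write and publish
from a plain DoFn with the standard Go Pub/Sub client. A minimal sketch,
with placeholder project and topic names:

package main

import (
    "context"
    "log"
    "reflect"

    "cloud.google.com/go/pubsub"
    "github.com/apache/beam/sdks/v2/go/pkg/beam"
    "github.com/apache/beam/sdks/v2/go/pkg/beam/x/beamx"
)

func init() {
    beam.RegisterType(reflect.TypeOf((*publishFn)(nil)).Elem())
}

// publishFn publishes each element straight to Pub/Sub with the regular
// GCP client. Exported fields travel with the DoFn; the client itself is
// rebuilt on each worker in Setup.
type publishFn struct {
    Project string
    Topic   string

    client *pubsub.Client
    topic  *pubsub.Topic
}

func (f *publishFn) Setup(ctx context.Context) error {
    client, err := pubsub.NewClient(ctx, f.Project)
    if err != nil {
        return err
    }
    f.client = client
    f.topic = client.Topic(f.Topic)
    return nil
}

func (f *publishFn) ProcessElement(ctx context.Context, msg []byte) error {
    // Publish returns a future; Get blocks until the server acks it.
    _, err := f.topic.Publish(ctx, &pubsub.Message{Data: msg}).Get(ctx)
    return err
}

func (f *publishFn) Teardown() error {
    f.topic.Stop()
    return f.client.Close()
}

func main() {
    beam.Init()
    p, s := beam.NewPipelineWithRoot()
    messages := beam.CreateList(s, [][]byte{[]byte("hello"), []byte("world")})
    beam.ParDo0(s, &publishFn{Project: "my-project", Topic: "my-topic"}, messages)
    if err := beamx.Run(context.Background(), p); err != nil {
        log.Fatalf("pipeline failed: %v", err)
    }
}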

On Mon, 6 Feb 2023 at 17:26, Ashok KS  wrote:

> Hi Shivam,
>
> Thanks a lot for your response. Yes, it is a batch pipeline. My task is to
> read a BigQuery table, process the data, and publish the rows as PubSub
> messages.
>
> Regards,
> Ashok
>
> On Mon, 6 Feb 2023 at 10:52 pm, Shivam Singhal <
> shivamsinghal5...@gmail.com> wrote:
>
>> Hey Ashok KS,
>>
>> Is this a batch pipeline?
>>
>> On Mon, 6 Feb 2023 at 09:27, Ashok KS  wrote:
>>
>>> Hi All,
>>>
>>> Just sending a reminder in case anyone could help. I haven't received
>>> any response to my issue.
>>>
>>> Regards,
>>> Ashok
>>>
>>> On Fri, Feb 3, 2023 at 12:23 AM Ashok KS  wrote:
>>>
>>>> Hi All,
>>>>
>>>> I'm new to using Apache Beam with Go.
>>>>
>>>> pubsubio.Write(scope, "project", "topic", ppMessages)
>>>> When I try to publish a message to a topic I get the error message
>>>> "Could not find the sink for pubsub, Check that the sink library
>>>> specifies alwayslink = 1".
>>>>
>>>> I found a Stack Overflow post for the same issue, but it doesn't solve
>>>> the problem.
>>>>
>>>> Stackoverflow Link
>>>> <https://stackoverflow.com/questions/69651665/go-apache-beam-gcp-dataflow-could-not-find-the-sink-for-pubsub-check-that-th>
>>>>
>>>> Can someone please help?
>>>>
>>>> Regards,
>>>> Ashok
>>>>
>>>


Re: Go + Apache Beam GCP Dataflow: Could not find the sink for pubsub, Check that the sink library specifies alwayslink = 1

2023-02-06 Thread Shivam Singhal
Hey Ashok KS,

Is this a batch pipeline?

On Mon, 6 Feb 2023 at 09:27, Ashok KS  wrote:

> Hi All,
>
> Just sending a reminder in case anyone could help. I haven't received any
> response to my issue.
>
> Regards,
> Ashok
>
> On Fri, Feb 3, 2023 at 12:23 AM Ashok KS  wrote:
>
>> Hi All,
>>
>> I'm new to using Apache Beam with Go.
>>
>> pubsubio.Write(scope, "project", "topic", ppMessages)
>> When I try to publish a message to a topic I get the error message
>> "Could not find the sink for pubsub, Check that the sink library
>> specifies alwayslink = 1".
>>
>> I found a Stack Overflow post for the same issue, but it doesn't solve
>> the problem.
>>
>> Stackoverflow Link
>> <https://stackoverflow.com/questions/69651665/go-apache-beam-gcp-dataflow-could-not-find-the-sink-for-pubsub-check-that-th>
>>
>> Can someone please help?
>>
>> Regards,
>> Ashok
>>
>


Memory Leak in streaming pipelines

2022-10-14 Thread Shivam Singhal
Hi Folks,

I noticed that two of my project's streaming pipelines are hitting memory
leaks and OOM errors.

One pipeline uses v2.33 and the other uses the latest version, v2.41.

I saw hundreds of identical logs in one of the pipelines. A sample log is
in this pastebin note <https://pastebin.com/xc9JxvZx>.


I haven’t dug deep into the cause of the OOMs, but a cursory look suggests
it’s a problem with the org.apache.beam.vendor.guava.v26_0_jre package.


I checked the release notes <https://github.com/google/guava/releases> of
the newer versions of the Guava library and found that there have been some
memory leak fixes in version 28.1
<https://github.com/google/guava/releases/tag/v28.1> and version 30.0
<https://github.com/google/guava/releases/tag/v30.0>.

Is it possible that Beam is affected by this, and that Beam needs to be
upgraded to use a newer version of Guava?

Thanks,
Shivam Singhal


Automating the e2e testing of flows involving batch beam pipelines

2022-10-13 Thread Shivam Singhal
Hey folks,

I have a backend-side flow which involves running a batch Beam pipeline.

We have an automation test which:

   1. Writes some mock data to BQ
   2. Invokes the Dataflow API to run a batch job which reads from BQ and
   writes the results to Bigtable
   3. Asserts on the results written by the Dataflow job

The problem is that step 2 takes 6-7 minutes, because Beam and Dataflow
need that time to spin up the workers and analyze the job graph.
This wait time increases how long our tests take to run.

Are there any best practices or ways to somehow reduce this wait time?

I know we have unit test classes in Apache Beam, but those are just unit
tests (not integration tests).

Thanks!


Re: [JAVA] Handling repeated elements when merging two pcollections

2022-08-10 Thread Shivam Singhal
I think this should solve my problem.

Thanks, Evan and Luke!

On Thu, 11 Aug 2022 at 1:49 AM, Luke Cwik via user 
wrote:

> Use CoGroupByKey to join the two PCollections and emit only the first
> value of each iterable with the key.
>
> Duplicates will appear as iterables with more than one value, while keys
> without duplicates will have iterables containing exactly one value.
>
> On Wed, Aug 10, 2022 at 12:25 PM Shivam Singhal <
> shivamsinghal5...@gmail.com> wrote:
>
>> I have two PCollections, CollectionA & CollectionB, of type KV<Byte[],
>> Byte[]>.
>>
>> I would like to merge them into one PCollection, but CollectionA &
>> CollectionB might have some elements with the same key. In those repeated
>> cases, I would like to keep the element from CollectionA & drop the
>> repeated element from CollectionB.
>>
>> Does anyone know a simple method to do this?
>>
>> Thanks,
>> Shivam Singhal
>>
>
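
For reference, the same idea rendered in the Go SDK; a hedged sketch, not
from the thread, which assumes s is the pipeline scope and collectionA and
collectionB are KV PCollections with []byte keys and values mirroring the
KVs discussed above:

// Merge two KV PCollections, preferring collectionA's value whenever a
// key appears in both. beam.CoGroupByKey yields, per key, one iterator
// per input collection, in input order.
merged := beam.ParDo(s, func(key []byte, aIter, bIter func(*[]byte) bool, emit func([]byte, []byte)) {
    var v []byte
    if aIter(&v) {
        emit(key, v) // key present in collectionA: keep its first value
        return
    }
    if bIter(&v) {
        emit(key, v) // key only in collectionB
    }
}, beam.CoGroupByKey(s, collectionA, collectionB))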


Re: [JAVA] Batch elements from a PCollection

2022-08-10 Thread Shivam Singhal
Is there no other way than
https://stackoverflow.com/a/44956702 ?

On Thu, 11 Aug 2022 at 1:00 AM, Shivam Singhal 
wrote:

> I have a PCollection of type KV<Byte[], Byte[]> where each key in those
> KVs is unique.
>
> I would like to split all those KV pairs into batches. This new
> PCollection will be of type PCollection<Iterable<KV<Byte[], Byte[]>>>,
> where the iterable’s length can be configured.
>
>
> I know there is a PTransform called GroupIntoBatches, but it batches based
> on the keys, which is not my use case.
>
> It would be great if someone could help with this.
>
> Thanks,
> Shivam Singhal
>
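
For what it's worth, the pattern from that Stack Overflow answer looks
roughly like this in the Go SDK (a hedged sketch: buffering happens per
bundle, so the last batch of each bundle may be smaller than Size; the kv
struct and field names are illustrative):

package batching

import (
    "context"
    "reflect"

    "github.com/apache/beam/sdks/v2/go/pkg/beam"
)

func init() {
    beam.RegisterType(reflect.TypeOf((*batchFn)(nil)).Elem())
    beam.RegisterType(reflect.TypeOf((*kv)(nil)).Elem())
}

// kv mirrors the thread's KV<Byte[], Byte[]> pairs.
type kv struct {
    Key, Val []byte
}

// batchFn buffers incoming pairs and emits them as fixed-size slices,
// flushing whatever remains when the bundle finishes.
type batchFn struct {
    Size  int
    batch []kv
}

func (f *batchFn) ProcessElement(ctx context.Context, k, v []byte, emit func([]kv)) {
    f.batch = append(f.batch, kv{Key: k, Val: v})
    if len(f.batch) >= f.Size {
        emit(f.batch)
        f.batch = nil
    }
}

func (f *batchFn) FinishBundle(ctx context.Context, emit func([]kv)) {
    if len(f.batch) > 0 {
        emit(f.batch)
        f.batch = nil
    }
}

It would be applied as batches := beam.ParDo(s, &batchFn{Size: 100}, pairs).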


[JAVA] Batch elements from a PCollection

2022-08-10 Thread Shivam Singhal
I have a PCollection of type KV<Byte[], Byte[]> where each key in those KVs
is unique.

I would like to split all those KV pairs into batches. This new PCollection
will be of type PCollection<Iterable<KV<Byte[], Byte[]>>>, where the
iterable’s length can be configured.


I know there is a PTransform called GroupIntoBatches, but it batches based
on the keys, which is not my use case.

It would be great if someone could help with this.

Thanks,
Shivam Singhal


[JAVA] Handling repeated elements when merging two pcollections

2022-08-10 Thread Shivam Singhal
I have two PCollections, CollectionA & CollectionB, of type KV<Byte[],
Byte[]>.

I would like to merge them into one PCollection, but CollectionA &
CollectionB might have some elements with the same key. In those repeated
cases, I would like to keep the element from CollectionA & drop the
repeated element from CollectionB.

Does anyone know a simple method to do this?

Thanks,
Shivam Singhal


Re: RedisIO Apache Beam JAVA Connector

2022-07-19 Thread Shivam Singhal
Hi Alexey!

Thanks for replying.
I think we will only use RedisIO to write to Redis. From your reply &
GitHub issue 21825, it seems the SDF is causing some issues with reading
from Redis.

Do you know of any issues with Write?

If I get a chance to test the reading in my staging environment, I will :)

Thanks,
Shivam Singhal

On Mon, 18 Jul 2022 at 22:22, Alexey Romanenko 
wrote:

> Hi Shivam,
>
> RedisIO has been in Beam for quite a long time, so we may consider it
> rather stable. I guess it was marked @Experimental since its user API was
> still changing at that point [1].
>
> However, RedisIO recently moved to an SDF for the reading part, so I can’t
> say how heavily it has been tested in production systems. AFAICT, there is
> an open issue [2] that is likely related to this.
>
> It would be great if you could test this IO in your testing environment
> and provide some feedback on how it works for your cases.
>
> —
> Alexey
>
> [1] https://issues.apache.org/jira/browse/BEAM-9231
> [2] https://github.com/apache/beam/issues/21825
>
>
> On 18 Jul 2022, at 02:19, Shivam Singhal 
> wrote:
>
> Hi everyone,
>
> I see that from version 2.20.0 onwards, the org.apache.beam.sdk.io.redis
> connector
> <https://beam.apache.org/releases/javadoc/2.20.0/org/apache/beam/sdk/io/redis/package-summary.html>
> is marked experimental.
>
> I tried to see the changelog
> <https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12346780>
> for v2.20.0 but could not find an explanation.
>
> I am working with Apache Beam 2.40.0 and wanted to know which classes and
> functions are marked experimental in *org.apache.beam.sdk.io.redis:2.40.0*.
> Is it safe to use in production environments?
>
> Thanks!
>
>
>
>


RedisIO Apache Beam JAVA Connector

2022-07-18 Thread Shivam Singhal
Hi everyone,

I see that from version 2.20.0 onwards, the org.apache.beam.sdk.io.redis
connector
<https://beam.apache.org/releases/javadoc/2.20.0/org/apache/beam/sdk/io/redis/package-summary.html>
is marked experimental.

I tried to see the changelog
<https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12346780>
for v2.20.0 but could not find an explanation.

I am working with Apache Beam 2.40.0 and wanted to know which classes and
functions are marked experimental in *org.apache.beam.sdk.io.redis:2.40.0*.
Is it safe to use in production environments?

Thanks!


How to setup staging, pre-prod, and production envs for dataflow jobs?

2022-07-08 Thread Shivam Singhal
Hi Community,

What is the flow you follow for setting up staging, pre-prod, and prod
environments for your Dataflow jobs?

Stack Overflow question *here*