I suspect your dependencies have a conflict. I developed the Linkage Checker
enforcer rule to identify incompatible dependencies. Would you like to give
it a try?
https://github.com/GoogleCloudPlatform/cloud-opensource-java/wiki/Linkage-Checker-Enforcer-Rule
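Wiring the rule into a Maven build looks roughly like the following (a
sketch; the plugin and rule versions below are placeholders, so please check
the wiki above for the current coordinates):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>3.0.0-M3</version>
  <dependencies>
    <!-- Brings in the Linkage Checker enforcer rule -->
    <dependency>
      <groupId>com.google.cloud.tools</groupId>
      <artifactId>linkage-checker-enforcer-rules</artifactId>
      <version>1.5.13</version>
    </dependency>
  </dependencies>
  <executions>
    <execution>
      <id>enforce-linkage-checker</id>
      <!-- Runs at verify so the full dependency tree is available -->
      <phase>verify</phase>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <LinkageCheckerRule
              implementation="com.google.cloud.tools.dependencies.enforcer.LinkageCheckerRule"/>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>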
Regards,
Tomo
On Fri, Oct 2, 2020 at 21:34
Thanks Robert, yes, I'm also thinking of running my own expansion service
and trying it, so:
[grpc-default-executor-3] INFO
io.x.kafka.security.token.RefreshableTokenLoginModule - starting
renewal task and exposing its jmx metrics
[grpc-default-executor-3] INFO
Hi - We have a Beam pipeline reading and writing using an SDF-based IO
connector that works fine on a local machine using the Direct Runner or the
Flink Runner. However, when we build an image of that pipeline along with
Flink and deploy it in a cluster, we get the exception below.
ERROR
If you make sure that these extra jars are on your classpath when you
execute your pipeline, they should get picked up when invoking the
expansion service (though this may not be the case long term).
The cleanest way would be to provide your own expansion service. If
you build a jar that consists of
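As a rough sketch of that direction (my illustration, not the thread's
actual code; the class name and port are placeholders), the expansion
service can be started from a thin wrapper whose classpath carries the
extra jars:

import org.apache.beam.sdk.expansion.service.ExpansionService;

// Hypothetical wrapper: bundle this class together with the extra jars
// (e.g. the login-module classes) into one fat jar, then run it so that
// expanded transforms see the same classpath.
public class MyExpansionService {
  public static void main(String[] args) throws Exception {
    // ExpansionService.main takes the port to listen on as its argument.
    ExpansionService.main(new String[] {"8097"});
  }
}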
Thanks Robert, yes, our Kafka requires a JAAS configuration
(sasl.jaas.config) on the client side for security checks, with the
corresponding LoginModule, which requires additional classes:
==
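For illustration, a minimal sketch of passing such a JAAS configuration
through KafkaIO's consumer config (the LoginModule name is taken from the
log above; the broker, topic, and module options are placeholders):

import java.util.HashMap;
import java.util.Map;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.kafka.common.serialization.StringDeserializer;

// Hypothetical consumer settings; adjust to the actual LoginModule options.
Map<String, Object> security = new HashMap<>();
security.put("security.protocol", "SASL_SSL");
security.put("sasl.jaas.config",
    "io.x.kafka.security.token.RefreshableTokenLoginModule required;");

pipeline.apply(
    KafkaIO.<String, String>read()
        .withBootstrapServers("broker:9093")
        .withTopic("my-topic")
        .withKeyDeserializer(StringDeserializer.class)
        .withValueDeserializer(StringDeserializer.class)
        .withConsumerConfigUpdates(security));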
Just a follow-up since no one has replied.
My understanding is that for any expanded transforms, Beam wants the
environment to be self-described.
So I updated the boot script and Dockerfile for the Java harness environment
and used --sdk_harness_container_image_overrides in the portable runner, but
I fail to see the updated image being used.
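(For anyone following along: as I understand that option, its value is one
or more regex,replacement pairs matched against environment image names,
e.g.

--sdk_harness_container_image_overrides=.*java.*,myrepo/custom-java-harness:latest

where the replacement image is whatever you pushed. The exact semantics may
differ per runner, so treat this as an assumption.)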
The app itself is developed in Clojure, but here's the gist of how it's getting
configured:
AwsCredentialsProvider credProvider =
EnvironmentVariableCredentialsProvider.create();
pipeline.apply(
SqsIO.read()
.withQueueUrl(url)
For clarification, is it just streaming side inputs that present an issue for
the SparkRunner, or are there other areas that need work? We've started work
on a Beam-based project that includes both streaming and batch-oriented work,
and a Spark cluster was our choice due to the perception that it
Have you considered using Session windows? The window would start at the
timestamp of the article, and the Session gap duration would be the
(event-time) timeout after which you stop waiting for assets to join that
article.
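A minimal sketch of that idea (my illustration; the key/value types, input
name, and gap duration are placeholders):

import org.apache.beam.sdk.transforms.windowing.Sessions;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

// Articles and assets keyed by article id; a session closes (and the join
// can fire) once event time advances past the last element plus the gap.
PCollection<KV<String, String>> windowed =
    keyedEvents.apply(
        Window.<KV<String, String>>into(
            Sessions.withGapDuration(Duration.standardMinutes(30))));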
On Fri, Oct 2, 2020 at 3:05 AM Kaymak, Tobias
wrote:
> Hello,
>
> In
Support for watermark holds is missing from both Spark streaming
implementations (DStream and structured streaming), so watermark-based
triggers don't produce the correct output.
Excluding the direct runner, Flink is the OSS runner with the most people
working on it, adding features and fixing bugs.
Well, of course this does not fix the core problem.
What I can do is extend the FixedWindows class and make sure that for my
real recorded "system latency" the values still get put into the previous
window. Or is there a smarter way to deal with this?
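For what it's worth, one possible shape (a hand-rolled, untested sketch;
FixedWindows has a private constructor, so subclassing it directly may not
work, and a sibling PartitioningWindowFn is easier):

import org.apache.beam.sdk.coders.Coder;
import org.apache.beam.sdk.transforms.windowing.IntervalWindow;
import org.apache.beam.sdk.transforms.windowing.PartitioningWindowFn;
import org.apache.beam.sdk.transforms.windowing.WindowFn;
import org.joda.time.Duration;
import org.joda.time.Instant;

// Hypothetical: assigns each element to the fixed window that was current
// `latency` ago, so values recorded late still land in the previous window.
public class LatencyShiftedFixedWindows
    extends PartitioningWindowFn<Object, IntervalWindow> {
  private final Duration size;
  private final Duration latency;

  public LatencyShiftedFixedWindows(Duration size, Duration latency) {
    this.size = size;
    this.latency = latency;
  }

  @Override
  public IntervalWindow assignWindow(Instant timestamp) {
    long shifted = timestamp.minus(latency).getMillis();
    long start = shifted - Math.floorMod(shifted, size.getMillis());
    return new IntervalWindow(new Instant(start), size);
  }

  @Override
  public boolean isCompatible(WindowFn<?, ?> other) {
    return equals(other);
  }

  @Override
  public Coder<IntervalWindow> windowCoder() {
    return IntervalWindow.getCoder();
  }
}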
On Fri, Oct 2, 2020 at 4:11 PM Kaymak,
Hi all,
We just released Scio 0.9.5. This release upgrades Beam to the latest
2.24.0 and brings several improvements and bug fixes, including Parquet
Avro dynamic destinations, a Scalable Bloom Filter, and many others. This
will also likely be the last 0.9.x release before we start working on the
I have seen NoClassDefFoundErrors even when the class is there, if there is
an issue loading the class (usually related to JNI failing to load or a
static block failing). Try to find the first linkage error
(ExceptionInInitializerError / UnsatisfiedLinkError / ...) in the logs, as it
typically has more
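To make that concrete, a tiny self-contained illustration: the first attempt
to use a class whose static initializer throws fails with
ExceptionInInitializerError, and every later reference to the same class
surfaces as a bare NoClassDefFoundError, which is why only the first
occurrence in the logs tells the real story.

// Demonstrates the pattern described above.
public class LinkageDemo {
  static class Flaky {
    // A static initializer that fails on first class load.
    static { if (true) throw new RuntimeException("simulated init failure"); }
  }

  public static void main(String[] args) {
    try {
      new Flaky();
    } catch (Throwable t) {
      System.out.println("first load:  " + t); // ExceptionInInitializerError
    }
    try {
      new Flaky();
    } catch (Throwable t) {
      System.out.println("second load: " + t); // NoClassDefFoundError
    }
  }
}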
I suspected that io.grpc:grpc-netty-shaded:jar:1.27.2 was incorrectly
shaded, but the JAR file contains the
io/grpc/netty/shaded/io/netty/util/collection/IntObjectHashMap$2 class that
is reported as missing. Strange.
suztomo-macbookpro44% jar tf grpc-netty-shaded-1.27.2.jar | grep IntObjectHashMap
This is what I came up with:
https://gist.github.com/tkaymak/1f5eccf8633c18ab7f46f8ad01527630
The first run looks okay (in my use case size and offset are the same), but
I will need to add tests to prove my understanding of this.
On Fri, Oct 2, 2020 at 12:05 PM Kaymak, Tobias
wrote:
> Hello,
No, that was not the case. I'm still seeing this message when canceling a
pipeline. Sorry for the spam.
On Fri, Oct 2, 2020 at 12:22 PM Kaymak, Tobias
wrote:
> I think this was caused by having the flink-runner defined twice in my
> pom. Oo
> (one time as defined with scope runtime, and one time
I think this was caused by having the flink-runner defined twice in my pom.
Oo
(one time defined with scope runtime, and one time without)
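Concretely, something like this (illustrative coordinates; the version
matches my setup but the artifact may differ in yours):

<!-- Declared once with runtime scope... -->
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-runners-flink-1.10</artifactId>
  <version>2.23.0</version>
  <scope>runtime</scope>
</dependency>
<!-- ...and again without a scope, causing the conflict -->
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-runners-flink-1.10</artifactId>
  <version>2.23.0</version>
</dependency>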
On Fri, Oct 2, 2020 at 9:38 AM Kaymak, Tobias
wrote:
> Sorry that I forgot to include the versions, currently I'm on Beam 2.23.0
> / Flink 1.10.2 - I
Hello,
In chapter 4 of the Streaming Systems book by Tyler, Slava, and Reuven there
is an example (4-6, on page 111) about custom windowing that deals with
UnalignedFixedWindows:
https://www.oreilly.com/library/view/streaming-systems/9781491983867/ch04.html
Unfortunately that example is abbreviated
Sorry that I forgot to include the versions, currently I'm on Beam 2.23.0 /
Flink 1.10.2 - I have a test dependency for cassandra (archinnov) which
should *not* be available at runtime; it refers to netty and is included in
this tree, but the other two places where I find netty are in Flink and the