Hi,
It seems you want to use "com.hadoop.compression.lzo.LzoCodec"
instead of "com.hadoop.compression.lzo.LzopCodec" in the line below.
compressedStream =
factory.getCodecByClassName("com.hadoop.compression.lzo.LzopCodec").createOutputStream(stream);
Regarding "lzop: unexpected end of
Hello,
I'm using BulkWriter to write newline-delimited, LZO-compressed files. The
logic is very straightforward (see the code below).
I am experiencing an issue decompressing files created in this
manner, consistently getting "lzop: unexpected end of file". Is this an
issue with the caller of
Hello,
I've posted two questions on Stack Overflow. It would be great if you could
look into these questions; that would help me very much. Below are the links to
Stack Overflow.
https://stackoverflow.com/questions/58487582/convert-apache-flink-datastream-to-a-datastream-that-makes-tumbling-window
So I think what you’re saying is if I use a DFS for web.upload.dir, my clients
can send all their requests to any Job Manager instance and not worry or care
which one is the leader. That definitely is an improvement, thanks.
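For anyone following along, pointing the upload directory at a shared filesystem is just a one-line config change. A minimal sketch, assuming an HDFS cluster (the namenode address and path below are made-up examples, not verified settings):

```yaml
# flink-conf.yaml -- hypothetical HDFS path; adjust to your cluster.
# With a shared upload dir, a jar uploaded through any JobManager's REST
# endpoint is visible to all JobManager instances.
web.upload.dir: hdfs://namenode:8020/flink/web-uploads
```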
From: Till Rohrmann [mailto:trohrm...@apache.org]
Sent: Friday, October
Thanks a lot Zhu Zhu for such an elaborated explanation.
On Mon, 21 Oct 2019 at 08:33, Zhu Zhu wrote:
> Sources of batch jobs process InputSplits. Each InputSplit can be a file or
> a file block, depending on the FileSystem (for HDFS it is a file block).
> Sources need to retrieve InputSplits to proc
Can you please share your dockerfile?
Please upload your jar at /opt/flink/product-libs/flink-web-upload/.
Regards,
Pritam.
On Mon, 21 Oct 2019 at 19:58, Timothy Victor wrote:
> Thanks Pritam.
>
> Unfortunately this does not work for me. I get a response that says "jar
> file /tmp/flink-web-/
Thanks Pritam.
Unfortunately this does not work for me. I get a response that says "jar
file /tmp/flink-web-/flink-web-upload/ does not exist".
It is looking for the jar in the tmp folder. Wonder if there is a way to
change that so that it looks in the right folder.
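For what it's worth, the directory the REST endpoint uses for uploaded jars is configurable. A hedged sketch of the relevant flink-conf.yaml entry, using the directory Pritam mentions above (I have not verified this against your image):

```yaml
# flink-conf.yaml -- web.upload.dir controls where the REST endpoint
# stores, and later looks up, uploaded job jars. If unset, Flink falls
# back to a temp directory under /tmp.
web.upload.dir: /opt/flink/product-libs/flink-web-upload
```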
Thanks
Tim
On Sun, Oct 2
Hi All!
I would like to ask the community for any experience regarding migration
from Storm to Flink production applications.
Specifically I am interested in your experience related to the resource
requirements for the same pipeline as implemented in Flink vs in Storm. The
design of the applicati
Hi Pritam,
I tried using :8081 in my docker-compose file to map my local port
to the container port.
The session cluster launched successfully. It was my misunderstanding of the
docker-compose port binding order: I believed that the first port was the
container port and the second one the host port.
So, p
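For anyone hitting the same confusion: docker-compose port mappings are host-first, i.e. "HOST:CONTAINER". A minimal sketch (the service name, image tag, and host port are illustrative, not taken from the thread):

```yaml
# docker-compose.yml excerpt -- "HOST:CONTAINER" ordering
services:
  jobmanager:
    image: flink:1.9
    ports:
      - "8082:8081"   # host port 8082 -> container port 8081 (Flink web UI)
```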
The problem, as I understand it, is that port 8081 on your system is already
in use, so you want to bind a different local port to the container's 8081
port.
Please use :8081 in your docker-compose file to map your local
port to the container port.
Else, you may edit your /opt/flink/conf/flink-conf.yaml to ch
Hi Aleksey,
I tried using "8081:5000" as the port binding configuration with no success. I
also tried binding different port numbers (i.e., other than 5000), but the
admin UI does not seem to launch.
Is there an easy way to change flink-conf.yaml or pass an additional
command-line argument, keeping the docker-co
FYI there is already a corresponding issue
https://issues.apache.org/jira/browse/FLINK-13660
Best,
tison.
Till Rohrmann wrote on Fri, Oct 18, 2019, at 9:42 PM:
> Hi Martin,
>
> Flink's web UI based job submission is not well suited to be run behind a
> load balancer at the moment. The problem is that the web
Hi Srikanth,
Flink SQL supports nested objects, therefore you should not need to run
a separate flattening job. If you are using Kafka as a source for your
stream it should be fairly easy. You just need to define a proper json
schema for your stream as in this example[1][2]. If you use a different
We have a multi-tenancy scenario where:
- the source will be Kafka, and a Kafka partition could contain data
from multiple tenants
- our sink will send data to a different DB instance, depending on the
tenant
Is there a way to prevent slowness in one tenant from slowing other
tenants