Hi Federico,
As far as I know, the Kafka client code was rewritten in Java for version
0.9, meaning there is no Scala dependency there anymore. Only the server
(broker) code still contains Scala, but it doesn't matter what Scala version a
client uses, if any.
Best,
Aljoscha
> On 20. Sep
Fabian,
Awesome! After your initial email I got things to work by deploying my
fat jar into the flink/lib folder, and voilà! It worked. :) I will grab
your pull request and give it a go tomorrow.
On Wed, Sep 20, 2017 at 1:03 PM, Fabian Hueske wrote:
> Here's the pull
Here's the pull request that hopefully fixes your issue:
https://github.com/apache/flink/pull/4690
Best, Fabian
2017-09-20 16:15 GMT+02:00 Fabian Hueske :
> Hi Garrett,
>
> I think I identified the problem.
> You said you put the Hive/HCat dependencies into your user fat Jar,
It is not surprising to see fidelity issues with the YARN proxy. I suggest
opening a ticket on the Flink side to update the cancel-with-savepoint API to
take the target directory as a query string parameter (of course, backwards
compatibility should be maintained).
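As a sketch of the proposed shape, the target directory could be passed as an encoded query parameter. Everything below is hypothetical (host, port, job ID, and the `targetDirectory` parameter name are made up for illustration; this is not the actual Flink API):

```java
import java.net.URLEncoder;

public class CancelWithSavepointUrl {
    public static void main(String[] args) throws Exception {
        // Placeholders: job ID and savepoint directory are made up.
        String jobId = "0123456789abcdef0123456789abcdef";
        String target = URLEncoder.encode("s3://bucket/savepoints", "UTF-8");

        // Hypothetical URL shape for the proposal above: a query parameter
        // instead of a path segment, so the path itself stays fixed.
        String url = "http://localhost:8081/jobs/" + jobId
                + "/cancel-with-savepoint?targetDirectory=" + target;
        System.out.println(url);
    }
}
```

Encoding the directory keeps its `:` and `/` characters out of the URL path, which is what causes trouble when a proxy rewrites paths.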
On Wed, Sep 20, 2017 at 1:55 AM,
Hi Garrett,
I think I identified the problem.
You said you put the Hive/HCat dependencies into your user fat Jar,
correct? In this case, they are loaded with Flink's userClassLoader (as
described before).
In the OutputFormatVertex.finalizeOnMaster() method, Flink correctly loads
the user classes
I have a couple of questions related to this:
1. We store state per key (RocksDB backend). Currently, the state size
is ~1.5 GB. Checkpointing time sometimes reaches ~10-20 seconds. Is it
possible that checkpointing is affecting timer execution?
2. Does checkpointing cause Flink to stop
Hi,
We recently removed some cleanup code, because it involved inspecting some
store metadata to determine when we can delete a directory. For certain stores
(like S3), requesting this metadata whenever we delete a file was so expensive
that it could bring down the job because removing state could
Hi, as far as I know some vendors like Hortonworks still ship Kafka_2.10
(Kafka built against Scala 2.10) as part of their Hadoop distribution.
Could the use of a different Scala version cause issues with the Kafka
connector? I'm asking because we are using HDP 2.6 and we once already had
an issue with conflicting Scala versions
+1
-------- Original message --------
From: Hai Zhou
Date: 9/20/17 12:44 AM (GMT-08:00)
To: Aljoscha Krettek, d...@flink.apache.org, user
Subject: Re: [DISCUSS] Dropping Scala 2.10
+1
> On 19 Sep 2017, at 17:56, Aljoscha Krettek
Hi
We've got a situation where we're merging several Kafka streams, and for
certain streams we want to retain up to 6 days of history. We're trying to
figure out how we can migrate savepoint data between application updates
when the data type of a certain state buffer changes.
Let's assume that
I have prepared a repo that reproduces the issue:
https://github.com/GerardGarcia/flink-global-window-growing-state
Maybe this way it is easier to spot the error or we can determine if it is a
bug.
Gerard
Hi Emily,
I'm not familiar with the details of the REST API either, but if this is a
problem with the proxy, maybe it is already interpreting the encoded URL and
passing it on un-encoded. Have you tried encoding the path again, i.e.
encoding the percent signs?
http://
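The fragment above is cut off, but the double-encoding idea can be sketched as follows: encoding an already-encoded path turns each `%` into `%25`, so a proxy that decodes the URL once forwards the single-encoded form instead of the raw path. The path below is a made-up placeholder, not one from the original thread:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class DoubleEncodeSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical savepoint path (placeholder, not from the thread).
        String path = "hdfs:///savepoints/dir";

        // First encoding: ':' becomes %3A and each '/' becomes %2F.
        String once = URLEncoder.encode(path, StandardCharsets.UTF_8.name());

        // Second encoding: each '%' becomes %25, so a proxy that decodes
        // once hands the backend the single-encoded path intact.
        String twice = URLEncoder.encode(once, StandardCharsets.UTF_8.name());

        System.out.println(once);   // hdfs%3A%2F%2F%2Fsavepoints%2Fdir
        System.out.println(twice);  // hdfs%253A%252F%252F%252Fsavepoints%252Fdir
    }
}
```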
Hello Eron,
Thank you for your reply, we will take a look at this.
Regards
On 19 Sep 2017, at 22:37, Eron Wright
> wrote:
Hello, the current behavior is that Flink holds onto received offers for up to
two minutes while it attempts to
+1
> 在 2017年9月19日,17:56,Aljoscha Krettek 写道:
>
> Hi,
>
> Talking to some people I get the impression that Scala 2.10 is quite outdated
> by now. I would like to drop support for Scala 2.10 and my main motivation is
> that this would allow us to drop our custom Flakka
Hi XiangWei,
Programmatically, there is no nice tooling yet to cancel jobs on a dedicated
cluster. What you can do is to use Flink's REST API to issue a cancel
command [1]. You have to send a GET request to the target URL
`/jobs/:jobid/cancel`. In the future we will improve the programmatic job
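A minimal sketch of issuing such a request, assuming a JobManager REST endpoint reachable at localhost:8081 (host, port, and the job ID are placeholders; the actual HTTP call is commented out since it needs a running cluster):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class CancelJobSketch {
    // Build the cancel URL described above: /jobs/:jobid/cancel
    static String cancelUrl(String host, int port, String jobId) {
        return String.format("http://%s:%d/jobs/%s/cancel", host, port, jobId);
    }

    public static void main(String[] args) throws Exception {
        String url = cancelUrl("localhost", 8081,
                "0123456789abcdef0123456789abcdef");
        System.out.println(url);

        // Issuing the GET request against a live cluster:
        // HttpURLConnection conn =
        //         (HttpURLConnection) new URL(url).openConnection();
        // conn.setRequestMethod("GET");
        // System.out.println(conn.getResponseCode());
    }
}
```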
It should only apply to the map operator.
On 19.09.2017 17:38, AndreaKinn wrote:
If I apply a sharing slot as in the example:
DataStream LTzAccStream = env
    .addSource(new FlinkKafkaConsumer010<>("topic",
        new CustomDeserializer(), properties))
+1 for dropping support for Scala 2.10
On Tue, Sep 19, 2017 at 3:29 AM, Sean Owen wrote:
> For the curious, here's the overall task in Spark:
>
> https://issues.apache.org/jira/browse/SPARK-14220
>
> and most of the code-related changes:
>
>