This is being addressed here: https://github.com/apache/flink/pull/2249
On Tue, Jul 12, 2016 at 3:48 PM, Stephan Ewen wrote:
> I think there is a confusion between how Flink thinks about HA and job life
> cycle, and how many users think about it.
>
> Flink thinks that a killing of the YARN session
I left that in on purpose to protect against cases where the combination of
key and namespace can be ambiguous. For example, these two combinations of
key and namespace have the same written representation:
key [0 1 2], namespace [3 4 5]
key [0 1], namespace [2 3 4 5]
(values in brackets are byte arrays)
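The ambiguity above can be sketched in plain Java (hypothetical helper names, not Flink's actual serializer code): without a separator, the two distinct (key, namespace) pairs serialize to the same byte sequence, while the magic byte 42 keeps them apart:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;

public class KeyNamespaceAmbiguity {

    // Naive concatenation of key and namespace bytes, no separator.
    static byte[] concat(byte[] key, byte[] ns) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(key);
        out.write(ns);
        return out.toByteArray();
    }

    // Same, but with a magic byte written between key and namespace.
    static byte[] concatWithSep(byte[] key, byte[] ns) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(key);
        out.write(42); // separator byte, as in writeKeyAndNamespace()
        out.write(ns);
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] a = concat(new byte[]{0, 1, 2}, new byte[]{3, 4, 5});
        byte[] b = concat(new byte[]{0, 1}, new byte[]{2, 3, 4, 5});
        // Without the separator, both pairs produce [0 1 2 3 4 5]:
        System.out.println(Arrays.equals(a, b)); // true

        byte[] c = concatWithSep(new byte[]{0, 1, 2}, new byte[]{3, 4, 5});
        byte[] d = concatWithSep(new byte[]{0, 1}, new byte[]{2, 3, 4, 5});
        // With the separator, the written representations differ:
        System.out.println(Arrays.equals(c, d)); // false
    }
}
```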
I wrote a simple Java plan that reads a file in the distributed cache and
uses the first line from that file in a map operation. Sure enough, it
works locally, but fails when the job is sent to a taskmanager on a worker
node. Since DistributedCache seems to work for everyone else, I'm thinking
that
My assumption is that this was a sanity check that simply got left in the code.
It can probably be removed.
PS: Moving this to the d...@flink.apache.org list...
On Fri, Jul 15, 2016 at 11:05 AM, 刘彪 wrote:
> In AbstractRocksDBState.writeKeyAndNamespace():
>
> protected void writeKeyAndNamespace(
Hey Robert,
thanks for the valuable input.
I have some follow-up questions:
To 2) I had a separate Docker container for the JobManager and the TaskManagers
for playing around with the Flink Cluster. In production that’s not an option.
As I mentioned before, the Flink job has to interact in a mi
In AbstractRocksDBState.writeKeyAndNamespace():
protected void writeKeyAndNamespace(DataOutputView out) throws IOException {
    backend.keySerializer().serialize(backend.currentKey(), out);
    out.writeByte(42);
    namespaceSerializer.serialize(currentNamespace, out);
}
Why write a byte 42 between key and namespace?
Could you write a Java job that uses the distributed cache to distribute
files?
If this fails, then the DistributedCache is faulty; if it doesn't, something
in the Python API is wrong.
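A minimal sketch of such a Java job, for anyone who wants to reproduce this (DataSet API; the file path and cache name are placeholders): register a file with the environment so Flink ships it to every TaskManager, then read it from the distributed cache inside a rich function.

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;

public class CachedFileJob {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Placeholder path: any file reachable from the client works.
        env.registerCachedFile("hdfs:///path/to/file.txt", "cachedFile");

        DataSet<String> result = env.fromElements("a", "b")
            .map(new RichMapFunction<String, String>() {
                private String firstLine;

                @Override
                public void open(Configuration parameters) throws Exception {
                    // Resolve the cached file on the TaskManager.
                    File file = getRuntimeContext()
                        .getDistributedCache().getFile("cachedFile");
                    try (BufferedReader reader =
                             new BufferedReader(new FileReader(file))) {
                        firstLine = reader.readLine();
                    }
                }

                @Override
                public String map(String value) {
                    return firstLine + ":" + value;
                }
            });

        result.print();
    }
}
```

If this prints the expected lines when submitted to the remote cluster, the DistributedCache itself is fine and the problem is on the Python API side.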
On 15.07.2016 08:06, Geoffrey Mon wrote:
I've come across similar issues when trying to set up Flink on Amazon
EC2 instance