Hi Aljoscha,
I opened an issue here https://issues.apache.org/jira/browse/FLINK-4115 and
submitted a pull request.
I'm not sure if my fix is the best way to resolve this, or if it's better
to just remove the verification checks completely.
Thanks,
Josh
On Thu, Jun 23, 2016 at 9:41 AM, Aljoscha wrote:
Hi Josh,
do you maybe want to open an issue for that and contribute your fix?
Cheers,
Aljoscha
On Fri, 17 Jun 2016 at 17:49 Josh wrote:
> Hi Aljoscha,
>
> Thanks! It looks like you're right. I've run it with the FsStateBackend
> and everything works fine.
>
> I've
Hi,
I think the problem with the missing class
com.amazon.ws.emr.hadoop.fs.EmrFileSystem is not specific to RocksDB. The
exception is thrown in the FsStateBackend, which is internally used by the
RocksDB backend to do snapshotting of non-partitioned state. The problem is
that the FsStateBackend
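(For reference, a minimal sketch of pointing the filesystem backend at a checkpoint directory via flink-conf.yaml, using the Flink 1.x configuration keys; the HDFS path here is just a placeholder, adjust it to your setup:

```
state.backend: filesystem
state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
```

The same backend can also be set programmatically on the StreamExecutionEnvironment.)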
I found that I can still write to S3, using my Flink build of 1.1-SNAPSHOT,
for example if I run the word count example:
./bin/flink run ./examples/batch/WordCount.jar --input hdfs:///tmp/LICENSE
--output s3://permutive-flink/wordcount-result.txt
This works fine - it's just the
Hi Josh,
I assume that you built the SNAPSHOT version yourself. I had similar
version conflicts for Apache HttpCore with Flink SNAPSHOT versions on EMR.
The problem is caused by a change in behavior in Maven 3.3 and later
versions.
Due to these changes, the dependency shading is not working
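(For anyone hitting this: the workaround I've seen for building Flink with Maven 3.3+ is a two-step build, so that flink-dist is shaded against artifacts already installed in the local repository. The commands below are a sketch and assume you are in the root of a Flink source checkout:

```
# First pass: build and install all modules, skipping tests.
# With Maven 3.3+ the shading in flink-dist is incomplete on this pass.
mvn clean install -DskipTests

# Second pass: re-build flink-dist alone so it shades against the
# artifacts that were just installed into the local repository.
cd flink-dist
mvn clean install
```

After this, the flink-dist jar should contain the properly relocated dependencies.)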
Hey,
I've been running the Kinesis connector successfully now for a couple of
weeks, on a Flink cluster running Flink 1.0.3 on EMR 2.7.1/YARN.
Today I've been trying to get it working on a cluster running the current
Flink master (1.1-SNAPSHOT) but am running into a classpath issue when
starting