Re: Unable to use Flink RocksDB state backend due to endianness mismatch

2017-06-16 Thread Stefan Richter
Hi, we would also like to update to the latest RocksDB and drop FRocksDB altogether. Unfortunately, newer versions of RocksDB have issues with certain features of the Java API that we use in Flink, for example this one: https://github.com/facebook/rocksdb/issues/1964

Re: Unable to use Flink RocksDB state backend due to endianness mismatch

2017-06-15 Thread Ziyad Muhammed
Hi Stefan, I could solve the issue by building frocksdb with a patch for the PPC architecture. However, this patch is already applied to the latest version of rocksdb, whereas frocksdb does not seem to have been updated in a while. It would be nice to have it updated with this patch. Thanks, Ziyad On Tue, Jun 6, 201

Re: Unable to use Flink RocksDB state backend due to endianness mismatch

2017-06-06 Thread Stefan Richter
Hi, RocksDB is a native library with a JNI binding. It is included as a dependency and is not built from source when you build Flink. The included jar provides native code for Linux, OSX, and Windows on x86-64. From the exception, I would conclude that you are using a different CPU architecture that
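Stefan's diagnosis can be checked directly on a cluster node. The small JDK-only program below (no Flink or RocksDB dependencies; the class name is chosen here for illustration) prints the JVM's reported architecture and the native byte order, which should reveal whether the machines differ from the little-endian x86-64 platform the bundled native libraries target:

```java
import java.nio.ByteOrder;

public class ArchCheck {
    public static void main(String[] args) {
        // os.arch reports the JVM's architecture, e.g. "amd64" on x86-64 or "ppc64" on PowerPC
        System.out.println("os.arch:      " + System.getProperty("os.arch"));
        System.out.println("os.name:      " + System.getProperty("os.name"));
        // Native byte order of this host: LITTLE_ENDIAN on x86-64,
        // BIG_ENDIAN on e.g. big-endian ppc64 or s390x
        System.out.println("native order: " + ByteOrder.nativeOrder());
    }
}
```

Running this on both the local machine and the cluster nodes and comparing the output is a quick way to confirm an architecture or endianness mismatch.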

Unable to use Flink RocksDB state backend due to endianness mismatch

2017-06-03 Thread Ziyad Muhammed
Dear all, My Flink job reads from a Kafka topic and stores the data in a RocksDB state backend, in order to make use of the queryable state. I'm able to run the job and query the state on my local machine. But when deploying on the cluster, I'm getting the error below: Caused by: org.apache.flink.
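For context on why the host's byte order matters in this thread, the JDK-only sketch below (no Flink or RocksDB dependencies; class name chosen for illustration) shows how the same 32-bit value is laid out differently under big-endian and little-endian ordering. Serialized state written with one layout cannot be read back on a machine assuming the other without explicit conversion:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndiannessDemo {
    public static void main(String[] args) {
        int value = 0x01020304;

        // The same int serialized under the two byte orders
        ByteBuffer be = ByteBuffer.allocate(4).order(ByteOrder.BIG_ENDIAN).putInt(value);
        ByteBuffer le = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(value);

        // prints: big-endian:    01 02 03 04
        System.out.printf("big-endian:    %02x %02x %02x %02x%n",
                be.get(0), be.get(1), be.get(2), be.get(3));
        // prints: little-endian: 04 03 02 01
        System.out.printf("little-endian: %02x %02x %02x %02x%n",
                le.get(0), le.get(1), le.get(2), le.get(3));
    }
}
```

The byte sequences are mirror images of each other, which is why native code built for one architecture cannot blindly interpret data (or run at all) on a machine with the opposite endianness.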