The only config files available are those within the submitted jar. Things work
in Eclipse using the local environment but fail when deploying to the cluster.
On Nov 13, 2014 10:01 PM, wrote:
> Does the HBase jar in the lib folder contain a config that could be used
> instead of the config in the job jar file?
Does the HBase jar in the lib folder contain a config that could be used
instead of the config in the job jar file? Or is simply no config at all
available when the configure method is called?
--
Fabian Hueske
Phone: +49 170 5549438
Email: fhue...@gmail.com
Web: http://www
The hbase jar is in the lib directory on each node while the config files
are within the jar file I submit from the web client.
On Nov 13, 2014 9:37 PM, wrote:
> Have you added the hbase.jar file with your HBase config to the ./lib
> folders of your Flink setup (JobManager, TaskManager) or is it
Have you added the hbase.jar file with your HBase config to the ./lib folders
of your Flink setup (JobManager, TaskManager) or is it bundled with your
job.jar file?
--
Fabian Hueske
Phone: +49 170 5549438
Email: fhue...@gmail.com
Web: http://www.user.tu-berlin.de/fabian.hue
Kousuke Saruta created FLINK-1243:
-
Summary: Remove JVM MaxPermSize parameter when we use Java 8
Key: FLINK-1243
URL: https://issues.apache.org/jira/browse/FLINK-1243
Project: Flink
Issue Typ
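The rationale behind FLINK-1243 can be sketched as follows: Java 8 removed the permanent generation, so passing -XX:MaxPermSize to a Java 8 JVM only produces a warning. A minimal Java sketch of the version check such startup logic needs (hypothetical helper, not Flink's actual startup-script code):

```java
public class PermGenFlag {
    // Java 8 removed the permanent generation, so -XX:MaxPermSize is ignored
    // (with a warning). Only emit the flag for pre-1.8 JVMs.
    static boolean needsMaxPermSize(String specVersion) {
        // Pre-9 spec versions look like "1.6", "1.7", "1.8"; later ones are "9", "11", ...
        if (specVersion.startsWith("1.")) {
            int minor = Integer.parseInt(specVersion.substring(2));
            return minor < 8;
        }
        return false; // 9 and above never need it
    }

    public static void main(String[] args) {
        String spec = System.getProperty("java.specification.version");
        String flags = needsMaxPermSize(spec) ? "-XX:MaxPermSize=256m" : "";
        System.out.println("JVM flags: " + flags);
    }
}
```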
Any help with this? :(
On Thu, Nov 13, 2014 at 2:06 PM, Flavio Pompermaier
wrote:
> We definitely discovered that instantiating HTable and Scan in the configure()
> method of TableInputFormat causes problems in a distributed environment!
> If you look at my implementation at
> https://github.com/fpompe
Stephan Ewen created FLINK-1242:
---
Summary: Streaming examples
Key: FLINK-1242
URL: https://issues.apache.org/jira/browse/FLINK-1242
Project: Flink
Issue Type: Bug
Components: Build S
Stefano Bortoli created FLINK-1241:
--
Summary: Record processing counter for Dashboard
Key: FLINK-1241
URL: https://issues.apache.org/jira/browse/FLINK-1241
Project: Flink
Issue Type: Improve
We definitely discovered that instantiating HTable and Scan in the configure()
method of TableInputFormat causes problems in a distributed environment!
If you look at my implementation at
https://github.com/fpompermaier/incubator-flink/blob/master/flink-addons/flink-hbase/src/main/java/org/apache/flink/ad
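The usual pattern for this class of problem can be sketched in plain Java (a hypothetical stand-in, not the actual TableInputFormat code, and FakeClient stands in for a live HTable/Scan): a non-serializable client must be a transient field that is re-created on the worker in open()/configure(), so that serializing the input format on the submitting client never tries to ship the connection:

```java
import java.io.*;

// Not Serializable, like a live HBase connection.
class FakeClient {
    final String table;
    FakeClient(String table) { this.table = table; }
}

// Hypothetical sketch: an input format that keeps only serializable
// configuration and re-creates its client on the worker.
public class LazyClientInputFormat implements Serializable {
    private final String tableName;      // serializable config only
    private transient FakeClient client; // never serialized

    public LazyClientInputFormat(String tableName) { this.tableName = tableName; }

    public void open() {                 // called on the worker, not the client
        client = new FakeClient(tableName);
    }

    public boolean isOpen() { return client != null; }

    // Serialize and deserialize, mimicking shipping the format to a worker.
    public static <T extends Serializable> T roundTrip(T obj) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bos);
        out.writeObject(obj);
        out.flush();
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        @SuppressWarnings("unchecked") T copy = (T) in.readObject();
        return copy;
    }

    public static void main(String[] args) throws Exception {
        LazyClientInputFormat fmt = new LazyClientInputFormat("mytable");
        fmt.open();                               // client exists locally
        LazyClientInputFormat shipped = roundTrip(fmt);
        System.out.println(shipped.isOpen());     // false: client was not shipped
        shipped.open();                           // worker re-creates it
        System.out.println(shipped.isOpen());     // true
    }
}
```

If the client were created eagerly in a non-transient field, serializing the format would fail outright, which matches "works in Eclipse locally, fails on the cluster".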
Looks to me like the deserialization is not happening properly, leaving
some unconsumed bytes...
On Thu, Nov 13, 2014 at 11:17 AM, Ufuk Celebi wrote:
> Just my two cents, but the Exception is thrown by the lower layer
> serializers, which write/read IOReadableWriteable types. The respective
> ex
I think it makes a lot of sense to have this setup with the data classes.
+1 on that.
My question was only when *users* might want to run the Scala examples
instead of the Java examples since they do the same thing.
On Thu, Nov 13, 2014 at 11:14 AM, mbalassi wrote:
> Github user mbalassi comme
Just my two cents, but the Exception is thrown by the lower layer
serializers, which write/read IOReadableWriteable types. The respective
exception is thrown if a partial record has not been fully deserialized and
you receive an event (channel close event or so). The corresponding writer
part is th
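The symptom can be illustrated with simple length-prefixed framing (a generic sketch, not Flink's actual serializer stack): the writer declares how many bytes a record occupies, and if the reader consumes fewer than that, the leftover bytes surface as "unconsumed bytes" when the channel is closed:

```java
import java.io.IOException;
import java.nio.ByteBuffer;

public class FramedRecords {
    // Write one record as [int length][payload bytes].
    static byte[] writeRecord(byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
        buf.putInt(payload.length).put(payload);
        return buf.array();
    }

    // Read one record and fail loudly if bytes are left over,
    // mimicking the partial-record check on channel close.
    static byte[] readRecord(byte[] frame) throws IOException {
        ByteBuffer buf = ByteBuffer.wrap(frame);
        int len = buf.getInt();
        byte[] payload = new byte[len];
        buf.get(payload);
        if (buf.hasRemaining()) {
            throw new IOException(buf.remaining() + " unconsumed bytes after record");
        }
        return payload;
    }

    public static void main(String[] args) throws IOException {
        byte[] frame = writeRecord("hello".getBytes("UTF-8"));
        System.out.println(new String(readRecord(frame), "UTF-8")); // hello

        // A mismatched serializer that under-reports its length leaves
        // unconsumed bytes behind:
        byte[] bad = ByteBuffer.allocate(9).putInt(3)
                .put("hello".getBytes("UTF-8")).array();
        try {
            readRecord(bad);
        } catch (IOException e) {
            System.out.println(e.getMessage()); // 2 unconsumed bytes after record
        }
    }
}
```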
I have implemented your idea of an Unknown type which uses the
KryoSerializer. Since I don't have type information, I initialize
the serializer with Object.class. Collection execution works fine, but
when I execute a simple identity mapper job normally I get the following
Exception. Is there
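What "initializing the serializer with Object.class" amounts to can be sketched with plain Java serialization as the fallback mechanism (a hypothetical stand-in for the KryoSerializer, not Flink's actual class): with no type information available, the serializer is parameterized with Object and must round-trip arbitrary values:

```java
import java.io.*;

// Hypothetical sketch of a fallback serializer for unknown types:
// parameterized with Object.class, it round-trips any Serializable
// value, standing in for a Kryo-based generic serializer.
public class FallbackSerializer<T> {
    private final Class<T> type;

    public FallbackSerializer(Class<T> type) { this.type = type; }

    public byte[] serialize(T value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(value);
        }
        return bos.toByteArray();
    }

    public T deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return type.cast(in.readObject());
        }
    }

    public static void main(String[] args) throws Exception {
        // No type information: fall back to Object.class.
        FallbackSerializer<Object> generic = new FallbackSerializer<>(Object.class);
        Object copy = generic.deserialize(generic.serialize("some value"));
        System.out.println(copy); // some value
    }
}
```

The exception described above would then come from the layer below this one, when the byte counts on the two sides of such a round trip disagree.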
I think separate Scala JARs don't add much value since we already have the
examples packaged. As source files they make a lot of sense, but is there a
difference for users when trying out the system?
On Thu, Nov 13, 2014 at 10:21 AM, aljoscha wrote:
> Github user aljoscha commented on the pull reques
Aljoscha Krettek created FLINK-1240:
---
Summary: We cannot use sortGroup on a global reduce
Key: FLINK-1240
URL: https://issues.apache.org/jira/browse/FLINK-1240
Project: Flink
Issue Type: Bu