Aljoscha Krettek created FLINK-3278:
---
Summary: Add Partitioned State Backend Based on RocksDB
Key: FLINK-3278
URL: https://issues.apache.org/jira/browse/FLINK-3278
Project: Flink
Issue
Greg Hogan created FLINK-3277:
-
Summary: Use Value types in Gelly API
Key: FLINK-3277
URL: https://issues.apache.org/jira/browse/FLINK-3277
Project: Flink
Issue Type: Improvement
Hi Gyula,
will the out of core backend be in a separate maven module? If so, you can
include the module only in the "hadoop-2" profile.
As you can see in the main pom.xml, "flink-yarn" and "flink-fs-tests" are
also "hadoop2" only modules:
hadoop-2
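For reference, wiring a module into the "hadoop-2" profile in the parent pom.xml looks roughly like this (a sketch; the module name `flink-statebackend-hdfs` is hypothetical, not the actual module from this thread):

```xml
<!-- parent pom.xml: the module is built only when the hadoop-2 profile is active -->
<profiles>
  <profile>
    <id>hadoop-2</id>
    <modules>
      <!-- hypothetical name for the out-of-core state backend module -->
      <module>flink-statebackend-hdfs</module>
    </modules>
  </profile>
</profiles>
```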
Hi,
While developing the out-of-core state backend that will store state
directly to hdfs (either TFiles or BloomMapFiles), I realised that some
file formats and features I use are hadoop 2.x only.
What is the suggested way to handle features that use hadoop 2.x api? Can
these be excluded from
Ted Yu created FLINK-3280:
-
Summary: Wrong usage of Boolean.getBoolean()
Key: FLINK-3280
URL: https://issues.apache.org/jira/browse/FLINK-3280
Project: Flink
Issue Type: Bug
Reporter:
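The pitfall behind this ticket, sketched in isolation (not the actual Flink code): `Boolean.getBoolean(String)` looks up a JVM *system property* by that name, it does not parse the string. Passing a literal like "true" therefore silently yields false; `Boolean.parseBoolean(String)` is usually what was intended.

```java
public class BooleanPitfall {
    public static void main(String[] args) {
        // Boolean.getBoolean("true") looks for a system property named "true",
        // which is normally unset, so this prints false.
        System.out.println(Boolean.getBoolean("true"));

        // Boolean.parseBoolean actually parses the string value: prints true.
        System.out.println(Boolean.parseBoolean("true"));
    }
}
```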
Hi,
did you get any error message while executing the job? I don't think you
can serialize the "Demo" type with the "SimpleStringSchema".
On Fri, Jan 22, 2016 at 8:13 PM, Deepak Jha wrote:
> Hi Devs,
> I just started using Flink and would like to add Kafka as a sink. I
Hi Devs,
I just started using Flink and would like to add Kafka as a sink. I went
through the documentation but so far I've not succeeded in writing to Kafka
from Flink.
I'm building the application in Scala. Here is my code snippet:
case class Demo(city: String, country: String, zipcode: Int)
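One way around the schema mismatch (a hedged sketch with Flink APIs omitted so it stays self-contained): render the record as a String first, then hand the resulting String stream to the Kafka sink with a string schema. The `Demo` class below mirrors the Scala case class from the message; the comma-separated formatting is an assumption for illustration.

```java
import java.nio.charset.StandardCharsets;

public class Demo {
    final String city;
    final String country;
    final int zipcode;

    Demo(String city, String country, int zipcode) {
        this.city = city;
        this.country = country;
        this.zipcode = zipcode;
    }

    // Render the record as a single line; a string schema can then
    // encode this into UTF-8 bytes for Kafka.
    String toLine() {
        return city + "," + country + "," + zipcode;
    }

    public static void main(String[] args) {
        Demo d = new Demo("Berlin", "Germany", 10559);
        System.out.println(d.toLine());
        byte[] payload = d.toLine().getBytes(StandardCharsets.UTF_8);
        System.out.println(payload.length + " bytes");
    }
}
```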
I would create a JIRA ticket before adding another example. As you can see
in the "contributions guide":
http://flink.apache.org/how-to-contribute.html#propose-an-improvement-or-a-new-feature
it's actually required ;)
I came across another issue which is relatively easy to fix, yet very
important.
Hi,
does anybody know how I can increase the stack size of a yarn session?
Best Regards,
Hilmi
--
==
Hilmi Yildirim, M.Sc.
Researcher
DFKI GmbH
Intelligente Analytik für Massendaten
DFKI Projektbüro Berlin
Alt-Moabit 91c
D-10559
I didn't move the classes out of the file for the following reason: People
looking at our examples might not do this with an IDE, but from Github or
the source archive.
Without an IDE, it's harder to find those files. If the classes are located
just below the main class in the same file, there is
Hi,
I agree with Robert on this.
We tried to keep the examples concise and moved boilerplate code like the
parameter parsing to the end of the file for that reason. I’m +1 for removing
as much boilerplate code as possible but would prefer to keep the example
self-contained and in one file.
Chesnay Schepler created FLINK-3275:
---
Summary: [py] Add support for Dataset.setParallelism()
Key: FLINK-3275
URL: https://issues.apache.org/jira/browse/FLINK-3275
Project: Flink
Issue
Maximilian Michels created FLINK-3276:
-
Summary: Move runtime parts of flink-streaming-java to
flink-runtime
Key: FLINK-3276
URL: https://issues.apache.org/jira/browse/FLINK-3276
Project: Flink
Thanks for the insight, I hadn't thought about it.
On Fri, Jan 22, 2016 at 10:10 AM, Robert Metzger
wrote:
> I didn't move the classes out of the file for the following reason: People
> looking at our examples might not do this with an IDE, but from Github or
> the source
Martin Junghanns created FLINK-3272:
---
Summary: Generalize vertex value type in ConnectedComponents
Key: FLINK-3272
URL: https://issues.apache.org/jira/browse/FLINK-3272
Project: Flink
Maximilian Michels created FLINK-3273:
-
Summary: Remove Scala dependency from flink-streaming-java
Key: FLINK-3273
URL: https://issues.apache.org/jira/browse/FLINK-3273
Project: Flink
Robert Metzger created FLINK-3274:
-
Summary: Prefix Kafka connector accumulators with unique id
Key: FLINK-3274
URL: https://issues.apache.org/jira/browse/FLINK-3274
Project: Flink
Issue
Thanks for the tips, I'll look into those issues as well.
On Fri, Jan 22, 2016 at 11:44 AM, Robert Metzger
wrote:
> I would create a JIRA ticket before adding another example. As you can see
> in the "contributions guide":
>
>
Hi Hilmi,
Flink on YARN respects the env.java.opts configuration parameter from
the flink-conf.yaml.
2016-01-22 11:44 GMT+01:00 Hilmi Yildirim :
> Hi,
> does anybody know how I can increase the stack size of a yarn session?
>
> Best Regards,
> Hilmi
>
> --
>
Hi,
the changes to the KMeans example look good so far. About moving everything to
external classes, IMHO we should do it, but I can also see why it is nice to
have the whole example contained in one file. So let’s see what the others
think.
Cheers,
Aljoscha
> On 21 Jan 2016, at 18:04, Stefano
+1 for moving to external classes; it is much simpler to analyze/study a few
small blocks of code than one bigger one, imho.
Andrea
2016-01-22 9:41 GMT+01:00 Aljoscha Krettek :
> Hi,
> the changes to the KMeans example look good so far. About moving
> everything to external
Hi,
I added the following line to my flink-conf.yaml:
env.java.opts: "-Xss1024m"
But it does not work. Does this line mean that every single slot of each
task manager uses a stack size of 1 GB?
Best Regards,
Hilmi
Am 22.01.2016 um 11:46 schrieb Robert Metzger:
Hi Hilmi,
Flink on YARN is
Hi Hilmi,
it means that each JVM the YARN session is starting (JobManager and
TaskManagers) is initialized with that parameter.
Can you check the logs of the TaskManager to see if the option has been
passed properly?
On Fri, Jan 22, 2016 at 12:22 PM, Hilmi Yildirim
Hi,
yes, looks like the argument is passed correctly.
By the way: why do you always send your answers twice to the mailing
list?
On Fri, Jan 22, 2016 at 2:17 PM, Hilmi Yildirim <
hilmi.yildirim.d...@gmail.com> wrote:
> Hi Robert,
> in the logs I found this
>
> 13:54:41,291 INFO
Hi Robert,
in the logs I found this
13:54:41,291 INFO org.apache.flink.yarn.YarnTaskManagerRunner - JVM Options:
13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner - -Xms3072m
13:54:41,292 INFO org.apache.flink.yarn.YarnTaskManagerRunner
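As background on what -Xss controls (a standalone sketch, unrelated to Flink's own code): the recursion depth a thread can reach before StackOverflowError scales with the thread stack size, which is why a deeply recursive job may need a larger -Xss via env.java.opts.

```java
public class StackDepthProbe {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // The depth reached here grows with the -Xss setting of the JVM.
            System.out.println("max depth reached: " + depth);
        }
    }
}
```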
+1 for a big notice once we merge this.
I would like to have a suffix-free "flink-streaming-java". However,
I'm having a hard time refactoring the streaming-java code to get rid
of Scala. The streaming API depends on "flink-clients" and
"flink-runtime" which both inherently depend on Scala.
Greg Hogan created FLINK-3279:
-
Summary: Optionally disable DistinctOperator combiner
Key: FLINK-3279
URL: https://issues.apache.org/jira/browse/FLINK-3279
Project: Flink
Issue Type: New Feature
Hi Robert,
No, it was a compile-time issue. Actually I tried to write a string as well,
but it did not work for me. Just for clarity, my flink-connector-kafka
version is 0.10.1.
I was able to fix the issue... SimpleStringSchema should be replaced with
JavaDefaultStringSchema, as the latter is doing