Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1043#discussion_r37729602
--- Diff:
flink-core/src/main/java/org/apache/flink/api/common/typeutils/base/array/BooleanPrimitiveArrayComparator.java
---
@@ -0,0 +1,56 @@
GitHub user rmetzger opened a pull request:
https://github.com/apache/flink/pull/876
[FLINK-2298] Allow setting a custom application name on YARN
With this change, users can pass a --name argument to the YARN session to
give the YARN application a custom name.
I've covered
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/938#issuecomment-125520977
I tried this pull request on a cluster, because the current code is failing
with the following exception when running it with a checkpoint interval of 1
second
GitHub user rmetzger opened a pull request:
https://github.com/apache/flink/pull/987
[FLINK-1680] Remove Tachyon test, rename maven module
I removed our dependencies to Tachyon.
The Tachyon test cluster (version 0.5) seemed to be unstable.
Upgrading to Tachyon 0.7
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/987#issuecomment-127636475
The problem is that we can only include this module for the hadoop2 builds.
Therefore, I can not add the code to a module which is always present
(`flink-tests
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/977#issuecomment-127606171
For the Travis problem, I would write an email to supp...@travis-ci.com.
They are very friendly and helpful.
---
If your project is set up for it, you can reply
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/946#issuecomment-125665625
Thank you for the contribution.
+1 to merge.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/974#issuecomment-127914785
+1 to merge
---
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/967#discussion_r36280020
--- Diff:
flink-core/src/main/java/org/apache/flink/ps/impl/ParameterServerIgniteImpl.java
---
@@ -0,0 +1,105 @@
+/*
+ * Copyright 2015 EURA NOVA
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/967#discussion_r36280063
--- Diff:
flink-optimizer/src/main/java/org/apache/flink/optimizer/plantranslate/JobGraphGenerator.java
---
@@ -920,15 +928,33 @@ private JobVertex
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/962#discussion_r36280519
--- Diff:
flink-staging/flink-language-binding/flink-language-binding-generic/src/main/java/org/apache/flink/languagebinding/api/java/common/streaming
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/967#discussion_r36280632
--- Diff:
flink-optimizer/src/main/java/org/apache/flink/optimizer/plantranslate/JobGraphGenerator.java
---
@@ -920,15 +928,33 @@ private JobVertex
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/986#issuecomment-127689216
I'm not a type extractor expert, but since it's fixing a bug and it has
tests: +1
---
GitHub user rmetzger opened a pull request:
https://github.com/apache/flink/pull/996
[WIP][FLINK-2386] Add new Kafka Consumers
I'm opening a WIP pull request (against our rules) to get some feedback on
my ongoing work.
Please note that I'm on vacation next week (until August 17
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/995#issuecomment-128425652
+1 to merge, it's just a simple renaming.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/940#issuecomment-125177948
By the way, you don't need to open another pull request for updating it.
Just force-push into the branch this PR is based on (`Feature2`).
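The workflow suggested in this comment can be sketched as follows. Only the branch name `Feature2` comes from the comment; the local bare repository standing in for the GitHub fork, the file names, and the commit messages are illustrative assumptions:

```shell
# Sketch: updating an existing PR by force-pushing its source branch,
# instead of opening a new PR. A local bare repo plays the role of the fork.
set -e
tmp=$(mktemp -d)
git init --bare -q "$tmp/fork.git"          # stand-in for the GitHub fork
git init -q "$tmp/work" && cd "$tmp/work"
git config user.email dev@example.com
git config user.name dev
git remote add origin "$tmp/fork.git"
git checkout -q -b Feature2                 # the branch the PR is based on
echo v1 > file.txt && git add file.txt && git commit -qm "initial"
git push -q origin Feature2                 # state when the PR was opened
# Rework the commit, then force-push: a PR based on Feature2 updates in place.
echo v2 > file.txt && git add file.txt && git commit -q --amend -m "reworked"
git push -q --force origin Feature2
git -C "$tmp/fork.git" log --oneline Feature2
```

On a real fork, `git push --force-with-lease` is the safer variant, since it refuses to overwrite commits you have not seen.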
---
GitHub user rmetzger opened a pull request:
https://github.com/apache/flink/pull/941
[FLINK-2408] Define all maven properties outside build profiles
See JIRA issue for details.
@aalexandrov, can you take a look at this?
You can merge this pull request into a Git repository
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/885#issuecomment-125187262
Okay, I tried it again and you are right. It seems to create the artifacts
correctly.
I was confused because
a) the reactor summary didn't contain the variable
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/943#issuecomment-125191411
Thank you for the PR.
Can you add a test case validating the fix?
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/943#issuecomment-125192186
I would just use the Table API example you've posted in the JIRA and add
another IT case for the Table API
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/941#issuecomment-125181480
I don't think that I'm removing the activation of the `scala-2.11` profile.
If the `scala-2.11` property is set, the default value of the property
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/941#issuecomment-125191075
sbt can't do that.
We are lucky here because the `maven-shade-plugin` is generating a
dependency-reduced pom with the effective settings.
(Take a look in your
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/885#discussion_r35194868
--- Diff:
flink-contrib/flink-storm-compatibility/flink-storm-compatibility-examples/src/assembly/word-count-storm.xml
---
@@ -36,7 +36,7 @@ under
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/885#issuecomment-123643760
No need to hurry .. I needed 10 days to look at it .. so a few hours don't
matter ;)
I tried to install the artifacts to my local repository, but the _2.11
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/885#issuecomment-123640287
Sorry for the delay.
I will deploy the artifacts from this branch to the maven snapshot
repository to see if everything works as expected.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/913#issuecomment-121617207
Big +1 to document this properly!
The PR contains headlines without content. I would either remove these
headlines before merging or merge the PR once its
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/885#issuecomment-134624663
Cool, that was quick ;)
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/885#issuecomment-134658771
Is the Maven shade plugin bug the reason why this fails:
```
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-shade-plugin:2.4.1:shade (shade
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/885#discussion_r37874193
--- Diff: flink-staging/flink-gelly/pom.xml ---
@@ -37,17 +37,17 @@ under the License.
dependencies
dependency
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1055#issuecomment-134599517
The tests in this pull request might fail because the fixes to the
BufferBarrier are not backported to 0.9 yet.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/885#issuecomment-134667459
There is a Spark 2.11 artifact in mvn central.
I think they are doing a similar thing as we are already doing with the
hadoop1/hadoop2 versions: They generate
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1338#issuecomment-155065545
Okay, I see. Let's not fix it as part of this pull request.
---
GitHub user rmetzger opened a pull request:
https://github.com/apache/flink/pull/1340
[FLINK-2987] Remove jersey-core and jersey-client dependency exclusions
… to make Flink on Hadoop 2.6.0+ work again.
This PR is for the 0.10 release branch.
Please review
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1338#issuecomment-155059566
Thanks a lot for working on this.
I've tried out your changes locally, and it was working as expected.
- Why is there a 10 MB limit on the upload? Usually people
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1340#issuecomment-155086171
I just verified the fix on the Azure cluster with Hadoop 2.6.0 and it's
working now.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1338#issuecomment-155074648
Mh, Fabian is right. In practice, jar files very often exceed 10 MB.
> Does the web monitor also use heap space out of Job Manager?
Yes, the
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1338#issuecomment-155713026
Thank you. I've tested it on a YARN cluster, but the URL it is showing is
not correct.
It seems to me that you are just using the current hostname + ip of the web
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1338#issuecomment-155732014
Thank you. Sadly, I've shut down the cluster a few minutes ago, because I
was finished with the release testing.
I'll try soon...
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1338#issuecomment-155738690
I know. The problem is that the MS Azure cloud needs 30 minutes to deploy a
small Hortonworks cluster. Google handles that in a few minutes.
But I can use another
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1342#discussion_r44527912
--- Diff: flink-yarn/pom.xml ---
@@ -63,6 +63,11 @@ under the License.
test
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1342#issuecomment-155768292
Thanks a lot for working on this issue! It doesn't happen every day that
users identify issues, fix them and verify the fix so properly :)
I would really like
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1342#issuecomment-155764769
The tests on travis are failing
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1344#issuecomment-155763996
Maybe not as part of this PR, but in the long term, we should add an
integration test for cancelling a job running in a YARN session.
---
GitHub user rmetzger opened a pull request:
https://github.com/apache/flink/pull/1341
[FLINK-2974] Add periodic offset committer for Kafka
The offset committer is only enabled when Flink's checkpointing is disabled.
When checkpointing is enabled, we commit to ZK upon checkpoint
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1338#issuecomment-155097754
Since it's probably an intentional limitation of the proxy, how about the
following approach:
If the web frontend is running on YARN, we show the direct URL
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1340#issuecomment-155112495
I've merged it to the `release-0.10` branch.
Once travis has passed for master, I'll merge it there as well and close
the issue.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1340#issuecomment-155103312
I also tested the patch on a Hadoop 2.4 machine, using the default Hadoop
2.3 build.
+1 to merge for the 0.10 release
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1338#issuecomment-155102201
Good idea.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1341#issuecomment-156372109
Thank you for the detailed review. I will address your concerns soon.
---
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1342#discussion_r44764775
--- Diff: flink-yarn/src/main/java/org/apache/flink/yarn/Utils.java ---
@@ -135,7 +138,54 @@ public static void setTokensFor(ContainerLaunchContext
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1342#discussion_r44764797
--- Diff: pom.xml ---
@@ -82,6 +82,7 @@ under the License.
error
1.2.1
2.3.0
+ 1.1.2
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1342#issuecomment-156379250
I had some minor remarks to the PR, but overall, I'd like to merge it like
this!
---
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1342#discussion_r44764893
--- Diff: flink-dist/src/main/flink-bin/bin/config.sh ---
@@ -249,7 +249,15 @@ if [ -n "$HADOOP_HOME" ]; then
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1338#issuecomment-156494598
Okay, let me know when you are ready for another test drive
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1338#issuecomment-156476686
The jar has been built with Scala 2.11, yes; Flink with Scala 2.10.
But I still expect a good exception in the web frontend for this.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1338#issuecomment-156503204
Okay, I'll test it again.
This time, I'm building Flink with Scala 2.11.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1338#issuecomment-156510858
Good news. Test on cluster was successful.
I'll take another look at the code, but otherwise +1 from my side.
---
GitHub user rmetzger opened a pull request:
https://github.com/apache/flink/pull/1361
[FLINK-2967] Enhance TaskManager network detection
JIRA: https://issues.apache.org/jira/browse/FLINK-2967
- Increase timeout for `LOCAL_HOST` address detection strategy
- give
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1331#issuecomment-157383461
I can confirm that `chill-avro` was never used. I accidentally committed
the dependency while trying out different approaches for handling Avro POJOs
with Flink
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1368#issuecomment-157384055
+1 to merge.
---
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1338#discussion_r45087435
--- Diff:
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/JarDeleteHandler.java
---
@@ -0,0 +1,70 @@
+/*
+ * Licensed
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1338#discussion_r45087911
--- Diff:
flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java ---
@@ -635,8 +644,18 @@
* The default number of archived
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1338#discussion_r45088017
--- Diff:
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/PipelineErrorHandler.java
---
@@ -0,0 +1,79 @@
+/*
+ * Licensed
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1367#issuecomment-157430943
+1 to merge
---
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1338#discussion_r45088463
--- Diff:
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/JarListHandler.java
---
@@ -0,0 +1,130 @@
+/*
+ * Licensed
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1338#discussion_r45088529
--- Diff:
flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java ---
@@ -635,8 +644,18 @@
* The default number of archived
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1338#discussion_r45094049
--- Diff:
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/JarListHandler.java
---
@@ -0,0 +1,130 @@
+/*
+ * Licensed
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1338#discussion_r45094240
--- Diff:
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/JarDeleteHandler.java
---
@@ -0,0 +1,70 @@
+/*
+ * Licensed
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1338#discussion_r45094375
--- Diff:
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/JarListHandler.java
---
@@ -0,0 +1,130 @@
+/*
+ * Licensed
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1338#discussion_r45100162
--- Diff:
flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java ---
@@ -635,8 +644,18 @@
* The default number of archived
GitHub user rmetzger opened a pull request:
https://github.com/apache/flink/pull/1343
Remove and forbid use of SerializationUtils. Fix FLINK-2992
The SerializationUtils are usually not using the right classloader, and
they have some security issues.
I'm using our checkstyle
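The comment is cut off, but a Checkstyle rule forbidding the class could look roughly like the fragment below. `IllegalImport` is a real Checkstyle module; whether the PR used exactly this rule (and which `SerializationUtils` class it targets) is an assumption:

```xml
<!-- Hypothetical sketch: fail the build when SerializationUtils is imported. -->
<module name="Checker">
  <module name="TreeWalker">
    <module name="IllegalImport">
      <property name="illegalClasses"
                value="org.apache.commons.lang3.SerializationUtils"/>
    </module>
  </module>
</module>
```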
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1343#issuecomment-155412487
I tested the change on a cluster and it's working.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1338#issuecomment-155421694
I assume the PR is ready for another round of reviewing
---
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1343#discussion_r44404877
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/WindowedStream.java
---
@@ -167,15 +166,11 @@ public WindowedStream
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1338#issuecomment-155424425
Okay :)
---
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1327#discussion_r43997592
--- Diff: flink-runtime-web/web-dashboard/app/scripts/common/filters.coffee
---
@@ -28,6 +28,32 @@ angular.module('flinkApp
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1331#issuecomment-154023724
PR looks good.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1327#issuecomment-154025813
PR looks good.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1290#issuecomment-153754298
-1 This change breaks the nightly deployment. I'll reopen FLINK-2898.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1333#issuecomment-154083197
+1 to merge
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1246#issuecomment-148367867
I suspect we can close https://github.com/apache/flink/pull/1222 then?
---
Github user rmetzger closed the pull request at:
https://github.com/apache/flink/pull/1222
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1222#issuecomment-148369010
Max took the relevant changes from this PR into
https://github.com/apache/flink/pull/1246. Closing ...
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1248#issuecomment-148328036
Thank you for documenting the feature. +1 to merge
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/948#issuecomment-148403603
I tried running the code from this pull request again, this time using the
`mesos-playa` vagrant image, and it does not work for me.
I was following your
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1222#issuecomment-146400367
I will wait until your pull request is merged to master.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1235#issuecomment-146303645
I don't think that we can merge this change. We cannot assume that
file:/// is an invalid path.
Also, I don't think there is a good way to check whether the hadoop
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1243#issuecomment-147625208
Hi Hilmi,
thank you for your PR. We'll review it soon.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1242#issuecomment-146738873
+1 to get rid of the old API asap.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1202#issuecomment-146737613
Are there any blockers to this PR? Otherwise, I'd like to have it merged
rather soon.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1233#issuecomment-146737214
@sachingoel0101: my pull request
(https://github.com/apache/flink/pull/1222) is actually based on @uce's PR
(https://github.com/apache/flink/pull/1202). I'm not sure
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1191#discussion_r41595057
--- Diff:
flink-java/src/main/java/org/apache/flink/api/java/typeutils/ValueTypeInfo.java
---
@@ -51,7 +51,18 @@
public class ValueTypeInfo extends
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/1191#discussion_r41595115
--- Diff:
flink-java/src/main/java/org/apache/flink/api/java/aggregation/SumAggregationFunction.java
---
@@ -113,11 +182,32 @@ public Long getAggregate
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1245#issuecomment-147009826
Great that we finally get rid of this AWS dependency.
It seems that the dependency actually somehow influenced other dependencies
from Hadoop.
I have
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1121#issuecomment-147009266
That's indeed a good question. I don't have much time currently, but I'm
trying to answer you within the next few days.
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1202#issuecomment-146991673
Great ;)
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/885#issuecomment-119578556
I agree with @aalexandrov to do the name switch using
`<artifactId>flink-clients${scala.suffix}</artifactId>`.
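Applied in a pom, the suggestion might look like the fragment below. This is a sketch: the assumption is that a `scala.suffix` property (e.g. `_2.10` or `_2.11`) is defined in the parent pom, which the snippet does not show:

```xml
<!-- Hypothetical fragment: Scala-version-suffixed artifact name -->
<artifactId>flink-clients${scala.suffix}</artifactId>

<!-- and, symmetrically, on the consumer side -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-clients${scala.suffix}</artifactId>
  <version>${project.version}</version>
</dependency>
```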
---
Github user rmetzger commented on a diff in the pull request:
https://github.com/apache/flink/pull/884#discussion_r34152734
--- Diff: docs/apis/storm_compatibility.md ---
@@ -0,0 +1,155 @@
+---
+title: Storm Compatibility
+is_beta: true
+---
+<!--
+Licensed
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/862#issuecomment-119607694
Are there any tests for this functionality in the Flink code?
---
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/868#issuecomment-119618256
I needed to make these changes to the script when I created the additional
binaries for hadoop24,hadoop26,...
For that I needed to create binaries without deploying