I am noticing what looks like the same drop-off in performance when
introducing TupleN subclasses as expressed in "Understanding the JIT and
tuning the implementation" [1].
I start my single-node cluster, run an algorithm which relies purely on
Tuples, and measure the runtime. I execute a
Hi Max,
You are right, there is no need to unlabel old fix versions.
My thought was to treat the fix version like "inbox zero". There is already
an emphasis on closing blockers, but few bugs and fewer features are that
severe. Pull requests can be long-lived and require a ready resolution.
Michał Fijołek created FLINK-3583:
---
Summary: Configuration not visible in gui when job is running
Key: FLINK-3583
URL: https://issues.apache.org/jira/browse/FLINK-3583
Project: Flink
Issue
Thanks for voting! The vote passes.
The following votes have been cast:
+1 votes: 4
Aljoscha
Till
Ufuk
Stephan
No -1 votes.
I will now release the binaries to the mirrors and the artifacts to maven
central.
Others from the community are working on blog posts for announcing the
release next
Aljoscha Krettek created FLINK-3582:
---
Summary: Add Iterator over State for All Keys in Partitioned State
Key: FLINK-3582
URL: https://issues.apache.org/jira/browse/FLINK-3582
Project: Flink
Aljoscha Krettek created FLINK-3581:
---
Summary: Add Non-Keyed Window Trigger
Key: FLINK-3581
URL: https://issues.apache.org/jira/browse/FLINK-3581
Project: Flink
Issue Type: Improvement
Timo Walther created FLINK-3580:
---
Summary: Reintroduce Date/Time and implement scalar functions for it
Key: FLINK-3580
URL: https://issues.apache.org/jira/browse/FLINK-3580
Project: Flink
Timo Walther created FLINK-3579:
---
Summary: Improve String concatenation
Key: FLINK-3579
URL: https://issues.apache.org/jira/browse/FLINK-3579
Project: Flink
Issue Type: Bug
@Stefano: No problem. Always happy when people test releases :-)
On Fri, Mar 4, 2016 at 12:27 PM, Stefano Baghino <stefano.bagh...@radicalbit.io> wrote:
> Ok, I switched back to 2.10 with the script and tried with both the
> explicit call to the script back to 2.11 and with the implicit call
Ok, I switched back to 2.10 with the script and tried again: both the
explicit call to the script back to 2.11 and the implicit call via
-Dscala-2.11 worked. I really don't know what happened before. Thank you for
the help, sorry for disturbing the voting process.
On Fri, Mar 4, 2016 at 12:12
You are right. Just checked the docs. They are correct.
@Stefano: the docs say that you first change the binary version via
the script and then you can specify the language version via
scala.version.
On Fri, Mar 4, 2016 at 11:51 AM, Stephan Ewen wrote:
> Are the docs actually
I'll switch back to Scala 2.10 and try again, I was sure I ran the script
before running the build; maybe something went wrong and I didn't notice.
On Fri, Mar 4, 2016 at 11:51 AM, Stephan Ewen wrote:
> Are the docs actually wrong?
>
> In the docs, it says to run the
Are the docs actually wrong?
In the docs, it says to run the "tools/change-scala-version.sh 2.11" script
first (which implicitly adds the "-Dscala-2.11" flag).
I thought this problem arose because neither the flag was specified, nor
the script run.
On Fri, Mar 4, 2016 at 11:43 AM, Ufuk Celebi
@Stefano: Yes, would be great to have a fix in the docs and pointers
on how to improve the docs for this.
On Fri, Mar 4, 2016 at 11:41 AM, Stefano Baghino
wrote:
> Build successful, thank you.
>
> On Fri, Mar 4, 2016 at 11:24 AM, Stefano Baghino <
>
Build successful, thank you.
On Fri, Mar 4, 2016 at 11:24 AM, Stefano Baghino <stefano.bagh...@radicalbit.io> wrote:
> I'll try it immediately, thanks for the quick feedback and sorry for the
> intrusion. Should I add this to the docs? The flag seems to be
> -Dscala.version=2.11.x on them:
>
I'll try it immediately, thanks for the quick feedback and sorry for the
intrusion. Should I add this to the docs? The flag seems to be
-Dscala.version=2.11.x on them:
https://ci.apache.org/projects/flink/flink-docs-master/setup/building.html#scala-versions
On Fri, Mar 4, 2016 at 11:20 AM, Stephan
AFAIK, you should run `tools/change-scala-version.sh 2.11` before running
`mvn clean install -DskipTests -Dscala-2.11`.
Regards,
Chiwan Park
> On Mar 4, 2016, at 7:20 PM, Stephan Ewen wrote:
>
> Sorry, the flag is "-Dscala-2.11"
>
> On Fri, Mar 4, 2016 at 11:19 AM, Stephan
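The build sequence settled on in this thread can be sketched as a short shell
snippet (assumes you are at the root of a Flink source checkout; this is the
sequence as described above, not an official build recipe):

```shell
# 1. Rewrite the POMs for the Scala 2.11 binary version.
tools/change-scala-version.sh 2.11

# 2. Build; the -Dscala-2.11 flag activates the 2.11-specific build profiles.
#    Note: this is -Dscala-2.11, not -Dscala.version=2.11 (see the correction above).
mvn clean install -DskipTests -Dscala-2.11
```

`-Dscala.version` selects only the Scala language version and does not activate the
2.11 profiles on its own, which is why the build failed when the script was skipped.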
Sorry, the flag is "-Dscala-2.11"
On Fri, Mar 4, 2016 at 11:19 AM, Stephan Ewen wrote:
> Hi!
>
> To compile with Scala 2.11, please use the "-Dscala.version=2.11" flag.
> Otherwise the 2.11 specific build profiles will not get properly activated.
>
> Can you try that again?
>
I won't cast a vote as I'm not entirely sure this is just a local problem
(and from the document the Scala 2.11 build has been checked), however I've
checked out the `release-1.0-rc5` branch and ran `mvn clean install
-DskipTests -Dscala.version=2.11.7`, with a failure on `flink-runtime`:
[ERROR]
Jark Wu created FLINK-3577:
---
Summary: Display anchor links when hovering over headers.
Key: FLINK-3577
URL: https://issues.apache.org/jira/browse/FLINK-3577
Project: Flink
Issue Type: Bug
+1
Checked LICENSE and NOTICE files
Built against Hadoop 2.6, Scala 2.10, all tests are good
Run local pseudo cluster with examples
Log files look good, no exceptions
Tested File State Backend
Ran Storm Compatibility Examples
-> minor issue, one example fails (no release blocker in my opinion)
+1
- Checked checksums and signatures
- Verified no binaries in source release
- Checked that source release is building properly
- Build for custom Hadoop version
- Ran start scripts
- Checked log and out files
- Tested in local mode
- Tested in cluster mode
- Tested on cluster with HDFS
-
The pull request https://github.com/apache/flink/pull/1758 should improve
the TaskManager's network interface selection.
On Fri, Mar 4, 2016 at 10:19 AM, Stephan Ewen wrote:
> Hi!
>
> This registration phase means that the TaskManager tries to tell the
> JobManager that it
Hi!
This registration phase means that the TaskManager tries to tell the
JobManager that it is available.
If that fails, there can be two reasons:
1) Network communication not possible to the port
1.1) JobManager IP really not reachable (not the case, as you
described)
1.2)
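Case 1) above can be checked quickly from the TaskManager host with a plain TCP
probe (a sketch: `jobmanager-host` is a placeholder for your actual JobManager
address, and 6123 is Flink's default `jobmanager.rpc.port` — adjust both to your
configuration):

```shell
# -z: scan without sending data; -w 5: give up after 5 seconds.
nc -z -w 5 jobmanager-host 6123 \
  && echo "JobManager RPC port reachable" \
  || echo "JobManager RPC port NOT reachable"
```

If the port is reachable but registration still fails, the problem is more likely
on the TaskManager side, e.g. the network interface selection that the pull
request mentioned above aims to improve.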