Thanks, I just assigned you the issue =)
- Henry
On Mon, Aug 18, 2014 at 11:38 PM, Márton Balassi wrote:
> My username is mbalassi.
> I've started watching the issue to give you a link. :)
>
>
> On Tue, Aug 19, 2014 at 8:07 AM, Henry Saputra wrote:
>
>> Hi Marton,
>>
>> I created the JIRA ticket to track the streaming documentation:
>> https://issues.apache.org/jira/browse/FLINK-1058
My username is mbalassi.
I've started watching the issue to give you a link. :)
On Tue, Aug 19, 2014 at 8:07 AM, Henry Saputra wrote:
> Hi Marton,
>
> I created the JIRA ticket to track the streaming documentation:
> https://issues.apache.org/jira/browse/FLINK-1058
> Somehow I could not find your ASF JIRA username. Could you tell me
> what your ASF JIRA username is?
Hi Marton,
I created the JIRA ticket to track the streaming documentation:
https://issues.apache.org/jira/browse/FLINK-1058
Somehow I could not find your ASF JIRA username. Could you tell me
what your ASF JIRA username is?
- Henry
On Mon, Aug 18, 2014 at 10:29 PM, Márton Balassi wrote:
> Sure, please assign it to me.
Henry Saputra created FLINK-1058:
Summary: Add documentation for streaming feature
Key: FLINK-1058
URL: https://issues.apache.org/jira/browse/FLINK-1058
Project: Flink
Issue Type: Task
Sure, please assign it to me.
On Aug 19, 2014 2:44 AM, "Henry Saputra" wrote:
> Thanks Stephan. If no one objects, I will create a JIRA ticket as a reminder
> to add formal documentation for the streaming feature.
>
> - Henry
>
> On Mon, Aug 18, 2014 at 11:53 AM, Stephan Ewen wrote:
> > The streaming code is in "flink-addons", for new/experimental code.
Thanks Stephan. If no one objects, I will create a JIRA ticket as a reminder
to add formal documentation for the streaming feature.
- Henry
On Mon, Aug 18, 2014 at 11:53 AM, Stephan Ewen wrote:
> The streaming code is in "flink-addons", for new/experimental code.
>
> Documentation should come over the next days/weeks, definitely before we make
> this part of the core.
Hi all,
This is to call for a vote on releasing Flink 0.6-incubating. This is the
first release of the Apache Flink project inside the Incubator.
Vote on dev@flink.incubator.apache.org:
http://mail-archives.apache.org/mod_mbox/incubator-flink-dev/201408.mbox/%3CCAGr9p8AQWYhH4m37Pwt217ngaqZXJvU1q
Hey,
The simple reduce is like what you said, yes. But there is also a grouped
reduce, which you can use by calling .groupBy(keyposition) and then reduce.
There are also reduces for windows: batchReduce and windowReduce. batchReduce
gives you a sliding window over a predefined number of records, and
windowReduce gives you a sliding window over a predefined time frame.
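To make that concrete, here is a rough Java sketch of the variants above.
Only groupBy/reduce/batchReduce/windowReduce are named in this thread; the
imports, environment setup, example data, and the window-call signatures are
my assumptions, so treat this as illustrative rather than the exact 0.6 API:

    // Assumed package paths and setup; adjust to the actual streaming API.
    import org.apache.flink.api.common.functions.ReduceFunction;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

    DataStream<Tuple2<String, Integer>> stream = env.fromElements(
            new Tuple2<String, Integer>("a", 1),
            new Tuple2<String, Integer>("a", 2),
            new Tuple2<String, Integer>("b", 3));

    // Grouped reduce: group on the key position (field 0), then reduce per
    // key. Every incoming record updates the running value for its key and
    // emits the updated result.
    stream.groupBy(0).reduce(new ReduceFunction<Tuple2<String, Integer>>() {
        public Tuple2<String, Integer> reduce(Tuple2<String, Integer> a,
                                              Tuple2<String, Integer> b) {
            return new Tuple2<String, Integer>(a.f0, a.f1 + b.f1);
        }
    });

    // Window variants, signatures assumed (reduceFn is a hypothetical
    // user-defined group-reduce function):
    // stream.batchReduce(reduceFn, 100);    // sliding window of 100 records
    // stream.windowReduce(reduceFn, 5000);  // sliding window of 5000 ms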
Thank you Fabian :) It looks promising here already haha.
But no worries, I will keep on working on streaming as before ;)
On Aug 18, 2014 at 21:25, "Fabian Hueske" wrote:
> Yummy cake :-)
>
> Gyula, I hope you have a great time in Sweden!
>
>
> 2014-08-18 19:46 GMT+02:00 Stephan Ewen :
>
> > Looks very cool!
Yummy cake :-)
Gyula, I hope you have a great time in Sweden!
2014-08-18 19:46 GMT+02:00 Stephan Ewen :
> Looks very cool!
>
> Glad to see you are enjoying the project.
>
Hi folks,
great work!
Looking at the example, I have a quick question: what are the semantics of the
Reduce operator? I guess it's not a window reduce.
Is it backed by a hash table, where every input tuple updates the hash table
and emits the updated value?
Cheers, Fabian
2014-08-18 20:53 GMT+02:00
Supporting the Hadoop 2.0 (not 2.2) YARN API would be a lot of coding
effort. There was a huge API change between the two versions.
Maybe we can find a technical solution to this political/legal problem: I'm
going to build and try a Flink version against the "2.1.1-beta" (or
similar) (official Apache release).
+1
I downloaded the source package, verified the checksums for it and
successfully executed "mvn clean package -DskipTests" (its just a small
virtual server that is not able to execute the unit tests). But the source
package builds.
The 72 hours of voting are over.
Results:
This vote has passed.
The streaming code is in "flink-addons", for new/experimental code.
Documentation should come over the next days/weeks, definitely before we make
this part of the core.
Right now, I would suggest having a look at some of the examples, to get a
feeling for the addon. Check, for example, this one:
http
Hmm, quick question: I could not find any documentation about the
streaming support. Is it part of the source code, or will there be
additional docs included?
- Henry
On Mon, Aug 18, 2014 at 10:55 AM, Stephan Ewen wrote:
> After the Apache Secretary confirmed that the SGA has arrived and the ICLAs
> are filed, I have merged the streaming code into the master for the next
> release.
Checked binary and source version with Hadoop 1 dependency:
- Local mode
- Cluster mode
- Examples (WordCount, PageRank, ConnectedComponents)
- Webinterface
- Custom Config (different number of slots per machine)
- Quickstarts (java) with test job local execution
+1
On Mon, Aug 18, 2014 at
W00t!
- Henry
On Mon, Aug 18, 2014 at 10:55 AM, Stephan Ewen wrote:
> After the Apache Secretary confirmed that the SGA has arrived and the ICLAs
> are filed, I have merged the streaming code into the master for the next
> release.
>
> A whole bunch of code that was!
>
> Great work, all of you.
After the Apache Secretary confirmed that the SGA has arrived and the ICLAs
are filed, I have merged the streaming code into the master for the next
release.
A whole bunch of code that was!
Great work, all of you. Looking forward to what this blossoms into... It's
a good day, today :-)
On Wed,
Looks very cool!
Glad to see you are enjoying the project.
I like Sean's idea very much: creating the three packages (Hadoop 1.x,
Hadoop 2.x, Hadoop 2.0 with YARN beta).
Any objections to creating a help site that says "for that vendor with this
version, pick the following binary release"?
Stephan
As for Flink, for now the additional CDH4 packaged binary is to
support "non-standard" Hadoop versions that some customers may already
have.
Based on "not a question of supporting a vendor but a Hadoop version
combo", would the approach that Flink has taken help customers get up
and running quickly?
It's probably the same thing as with Spark. Spark doesn't actually
work with YARN 'beta'-era releases, but works with 'stable' and specially
supports 'alpha'. CDH 4.{2-4} or so == YARN 'beta' (not non-standard,
but probably the only distro of it you'll still run into in
circulation). (And so it's ki
LICENSE, NOTICE, and DISCLAIMER files look good.
Signatures look good
Hashes look good
No runnable external binaries
Compiled and ran the WordCount examples in local and standalone mode.
+1
- Henry
On Fri, Aug 15, 2014 at 6:00 AM, Robert Metzger wrote:
> Hi All,
>
> Please vote on releasing the followin
I think the main problem was that CDH4 is a non-standard build. All others
we tried worked with hadoop-1.2 and 2.2/2.4 builds.
But I understand your points.
So, instead of creating those packages, we can make a guide on "how to pick
the right distribution", which points you to the hadoop-1.2 and 2.2/2.4 builds.
Vendor X may be slightly against having two Flink-for-X distributions --
their own and another on a site/project they may not control.
Are all these builds really needed? Meaning, does a generic Hadoop 2.x
build not work on some or most of these? I'd hope so. Might keep things
simpler for everyone.
My concern with this is that it appears to put Apache in the business of
picking the right Hadoop vendors. What about IBM, Pivotal, etc.? I get
that the actual desire here is to make things easy for users, and that
the original three packages offered (Hadoop1, CDH4, Hadoop2) will cover
95% of users.
+1
Downloaded and built the code, downloaded the hadoop1 binaries and ran the
WordCount, KMeans, Transitive Closure, PageRank, and Web Log analysis
examples locally on Mac OS X.
On Mon, Aug 18, 2014 at 2:42 PM, Till Rohrmann wrote:
> +1
> I deployed the yarn binaries and ran the WordCount, KMeans and
> TransitiveClosure examples with it.
The approach seems fair in the way it presents all vendors equally and
still offers users a convenient way to get started.
I personally like it, but I cannot say to what extent this is compliant with
Apache policies.
Hi,
I think we all agree that our project benefits from providing pre-compiled
binaries for different Hadoop distributions.
I've drafted an extension of the current download page, which I would
suggest using after the release: http://i.imgur.com/MucW2HD.png
As you can see, users can directly pick
+1
I deployed the yarn binaries and ran the WordCount, KMeans and
TransitiveClosure examples with it.
On Mon, Aug 18, 2014 at 11:59 AM, Aljoscha Krettek wrote:
> +1
> I downloaded the cdh4 binaries, ran in local and cluster mode, checked the
> job submission web interface, ran WordCount, followed all the quickstart
> guides: KMeans, Java API, and Scala API.
Thanks guys.
On Mon, Aug 18, 2014 at 11:47 AM, Stephan Ewen wrote:
> I have seen 3-clause BSD license used quite a bit in Apache projects.
>
> When you want to add a dependency, just add an entry in the two LICENSE
> files (one in the root, one in "flink-dist/src/flink-bin").
>
> Stephan
>
Sergey Dudoladov created FLINK-1057:
---
Summary: Broken documentation link for Javadoc.
Key: FLINK-1057
URL: https://issues.apache.org/jira/browse/FLINK-1057
Project: Flink
Issue Type: Task
+1
I downloaded the cdh4 binaries, ran in local and cluster mode, checked the
job submission web interface, ran WordCount, followed all the quickstart
guides: KMeans, Java API, and Scala API.
On Mon, Aug 18, 2014 at 10:53 AM, Fabian Hueske wrote:
> +1
> Downloaded the Hadoop1 binaries, started on Windows 7 via start-local.bat,
> and ran an example with flink.bat.
I have seen 3-clause BSD license used quite a bit in Apache projects.
When you want to add a dependency, just add an entry in the two LICENSE
files (one in the root, one in "flink-dist/src/flink-bin").
Stephan
+1
Downloaded the Hadoop1 binaries, started on Windows 7 via start-local.bat,
and ran an example with flink.bat.
2014-08-15 19:22 GMT+02:00 Alan Gates :
> +1. Looked through the License and Notice files (wow, not sure I've ever
> seen a more thorough job there), Disclaimer and Readme. I downlo
2-clause and 3-clause are quite similar for this purpose as they
differ only in a clause about endorsement:
http://en.wikipedia.org/wiki/BSD_licenses#2-clause_license_.28.22Simplified_BSD_License.22_or_.22FreeBSD_License.22.29
My interpretation of http://www.apache.org/legal/3party.html is that
both are fine to include.
Hi,
I'd like to add a dependency that is licensed under the 3-clause BSD
License. The ASF legal FAQ only mentions the 2-clause version as compatible
with the Apache License.
Could someone please clarify the situation for me?