end would be interesting especially if flink could benefit
> from Cassandra data locality. Cassandra/Spark integration uses this
> information to schedule Spark tasks.
>
> On 9 June 2016 at 19:55, Nick Dimiduk <ndimi...@gmail.com> wrote:
>
> > You might also consider suppo
You might also consider support for a Bigtable
backend: HBase/Accumulo/Cassandra. The data model should be similar
(identical?) to RocksDB and you get HA, recoverability, and support for
really large state "for free".
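Purely as an illustration of that data model (this is not any existing Flink state backend API; the table name, column family, and class below are made up), keyed state maps naturally onto a Bigtable-style table with the serialized state key as the row key and one column per named state:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

/** Hypothetical key/value state store backed by an HBase table. */
public class HBaseStateSketch implements AutoCloseable {
    private static final byte[] CF = Bytes.toBytes("s"); // one column family for all state
    private final Table table;

    public HBaseStateSketch(Connection conn) throws Exception {
        // Hypothetical table; it would need to be created with the "s" column family.
        this.table = conn.getTable(TableName.valueOf("flink_state"));
    }

    /** Row key = serialized state key; column qualifier = state name. */
    public void put(byte[] stateKey, String stateName, byte[] value) throws Exception {
        table.put(new Put(stateKey).addColumn(CF, Bytes.toBytes(stateName), value));
    }

    public byte[] get(byte[] stateKey, String stateName) throws Exception {
        Result r = table.get(new Get(stateKey).addColumn(CF, Bytes.toBytes(stateName)));
        return r.getValue(CF, Bytes.toBytes(stateName));
    }

    @Override
    public void close() throws Exception {
        table.close();
    }
}

With a layout like this, HA and recoverability come from the Bigtable system itself rather than from Flink-managed files.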
On Thursday, June 9, 2016, Chen Qin wrote:
> Hi there,
>
I'm also curious for a solution here. My test code executes the flow from a
separate thread. Once I've joined on all my producer threads and I've
verified the output, I simply interrupt the flow thread. This spews
exceptions, but it all appears to be harmless.
Maybe there's a better way? I think
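For what it's worth, a minimal sketch of that pattern (the flow here is just a stand-in; the job name and helper structure are hypothetical):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FlowInterruptSketch {
    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements(1, 2, 3).print(); // stand-in for the real (unbounded) flow

        // Run the flow on its own thread so the test thread stays free to drive producers.
        Thread flowThread = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    env.execute("test-flow");
                } catch (Exception e) {
                    // Interrupting the flow thread surfaces exceptions here; as noted
                    // above, they appear to be harmless for test purposes.
                }
            }
        });
        flowThread.start();

        // ... join producer threads and verify the emitted output here ...

        flowThread.interrupt(); // tear the flow down once verification is done
        flowThread.join();
    }
}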
Hi Chenguang,
I've been using the class StreamingMultipleProgramsTestBase, found in
flink-streaming-java test jar as the basis for my integration tests. These
tests spin up a Flink cluster (and Kafka, and HBase, ...) in a single JVM.
It's not a perfect integration environment, but it's as close as
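A minimal sketch of what such a test looks like (assuming a Flink version around 0.10/1.0, where StreamingMultipleProgramsTestBase ships in the flink-streaming-java test jar; package and class names may differ in later releases):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.util.StreamingMultipleProgramsTestBase;
import org.junit.Test;

public class MyFlowIT extends StreamingMultipleProgramsTestBase {

    @Test
    public void runsAgainstEmbeddedCluster() throws Exception {
        // The base class starts a local Flink mini cluster once for the test class,
        // and getExecutionEnvironment() transparently targets that cluster.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements("a", "b", "c").print();
        env.execute("embedded-cluster-test");
    }
}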
Nick Dimiduk created FLINK-3709:
---
Summary: [streaming] Graph event rates over time
Key: FLINK-3709
URL: https://issues.apache.org/jira/browse/FLINK-3709
Project: Flink
Issue Type: Improvement
at this can be problematic when running multiple jobs on
> > YARN. Since there is a chance that this might be the last release-0.10
> > release, I would be OK to cancel the vote for your fix.
> >
> > Still, let's hear the opinion of others before doing this. What do you
> &g
Nick Dimiduk created FLINK-3372:
---
Summary: Setting custom YARN application name is ignored
Key: FLINK-3372
URL: https://issues.apache.org/jira/browse/FLINK-3372
Project: Flink
Issue Type: Bug
https://ci.apache.org/projects/flink/flink-docs-release-0.10/api/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.html#disableOperatorChaining()
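For reference, a short sketch of the chaining controls from that page; the global switch turns chaining off for the whole job, and the per-operator hints control chain boundaries (they do not force a chain where Flink would not otherwise create one):

import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ChainingControls {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Global switch: disable operator chaining everywhere in the job.
        // env.disableOperatorChaining();

        env.fromElements(1, 2, 3)
           .map(new MapFunction<Integer, Integer>() {
               @Override
               public Integer map(Integer value) { return value * 2; }
           })
           .startNewChain()       // begin a fresh chain at this operator
           .filter(new FilterFunction<Integer>() {
               @Override
               public boolean filter(Integer value) { return value > 2; }
           })
           .disableChaining()     // keep this operator out of any chain
           .print();

        env.execute("chaining-controls");
    }
}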
On Mon, Feb 8, 2016 at 10:34 AM, Greg Hogan wrote:
> Is it possible to force operator chaining to be
Perhaps too late for the RC, but I've backported FLINK-3293 to this branch
via FLINK-3372. Would be nice for those wanting to monitor YARN
application submissions.
On Mon, Feb 8, 2016 at 9:37 AM, Ufuk Celebi wrote:
> Dear Flink community,
>
> Please vote on releasing the
+1 for a 0.10.2 maintenance release.
On Monday, February 1, 2016, Ufuk Celebi wrote:
> Hey all,
>
> Our release-0.10 branch contains some important fixes (for example a
> critical fix in the network stack). I would like to hear your opinions
> about doing a 0.10.2 bug fix
s not good, i.e., when Flink has
> Hadoop
> > 2.4 in the classpath, it does not work with a 2.6 YARN installation. That
> > is why we pre-build multiple versions.
> >
> > Best,
> > Stephan
> >
> >
> > On Sat, Jan 9, 2016 at 3:19 AM, Nick Di
Hi folks,
I noticed today that the flink-shaded-hadoop pom (and thus also its
children) is not using ${ROOT}/pom.xml as its parent.
However, ${ROOT}/pom.xml lists the hierarchy as a module. I'm curious to
know why this is. It seems one artifact of this disconnect is that
Nick Dimiduk created FLINK-3228:
---
Summary: Cannot submit multiple streaming jobs involving JDBC drivers
Key: FLINK-3228
URL: https://issues.apache.org/jira/browse/FLINK-3228
Project: Flink
Issue
Nick Dimiduk created FLINK-3224:
---
Summary: The Streaming API does not call setInputType if a format
implements InputTypeConfigurable
Key: FLINK-3224
URL: https://issues.apache.org/jira/browse/FLINK-3224
What's the relationship between the streaming SQL proposed here and the CEP
syntax proposed earlier in the week?
On Sunday, January 10, 2016, Henry Saputra wrote:
> Awesome! Thanks for the reply, Fabian.
>
> - Henry
>
> On Sunday, January 10, 2016, Fabian Hueske
Hi Devs,
It seems no release tag was pushed for 0.10.1. I presume this was an
oversight. Is there some place I can look to see from which SHA the 0.10.1
release was built? Are the RC vote threads the only canon in this matter?
Thanks,
Nick
should work across
> versions nicely. The main friction we saw was version clashes of
> transitive dependencies.
>
> The Flink CI builds include building Flink with Hadoop 2.5.0, see here:
> https://github.com/apache/flink/blob/master/.travis.yml
>
> Greetings,
> Stephan
use the maven-enforcer-plugin to require Maven
> 3.3.
> I guess many Linux distributions are still at Maven 3.2, so we might get
> unhappy users.
>
>
> On Thu, Dec 10, 2015 at 6:33 PM, Nick Dimiduk <ndimi...@apache.org> wrote:
>
> > Lol. Okay, thanks a bunch. Mind link
hout restarts. Important for low-latency,
> shells, etc
>
> Flink itself respects these classloaders whenever dynamically looking up a
> class. It may be that Clojure is written such that it can only dynamically
> instantiate what is on the original classpath.
>
>
>
> O
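To make the classloader distinction quoted above concrete, a small hedged illustration (the class name is hypothetical, and the assumption that Flink installs the user-code classloader as the thread's context classloader during task execution is mine): looking a class up through the context classloader versus plain Class.forName(), which uses the caller's classloader and can miss classes that only live in the submitted job jar.

public class ClassLoaderLookup {
    public static void main(String[] args) throws Exception {
        // Hypothetical class that ships only inside the submitted job jar.
        String udfClass = "com.example.MyUdf";

        // Resolves through whatever classloader the framework installed on this thread.
        ClassLoader contextCl = Thread.currentThread().getContextClassLoader();
        Class<?> viaContext = Class.forName(udfClass, true, contextCl);

        // Resolves through the classloader of the calling class; if that is the original
        // system classpath, classes only present in the job jar will not be found.
        Class<?> viaCaller = Class.forName(udfClass);

        System.out.println(viaContext + " / " + viaCaller);
    }
}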
etz...@apache.org> wrote:
> I had the same thought as Nick. Maybe Leiningen allows building a
> fat-jar containing the clojure standard library.
>
> On Thu, Dec 10, 2015 at 5:51 PM, Nick Dimiduk <ndimi...@apache.org> wrote:
>
> > What happens when you follow the packagi
this idea.
>
> I extended my pom to include clojure-1.5.1.jar in my program jar.
> However, the problem is still there... I did some research on the
> Internet, and it seems I need to mess around with Clojure's class
> loading strategy...
>
> -Matthias
>
> On 12/10/2015
What happens when you follow the packaging examples provided in the flink
quick start archetypes? These have the maven-foo required to package an
uberjar suitable for flink submission. Can you try adding that step to your
pom.xml?
On Thursday, December 10, 2015, Stephan Ewen
r you as a workaround:
>
> wget
>
> http://archive.apache.org/dist/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz
> and then use that maven for now ;)
>
>
> On Thu, Dec 10, 2015 at 12:35 AM, Nick Dimiduk <ndimi...@apache.org
> <javascript:;>> wrote:
s come from is to do
> inside the "flink-dist" project a "mvn dependency:tree" run. That shows how
> the unshaded Guava was pulled in.
>
> Greetings,
> Stephan
>
>
> On Wed, Dec 9, 2015 at 6:22 PM, Nick Dimiduk <ndimi...@gmail.com> wrote:
>
Thanks, I appreciate it.
On Wed, Dec 9, 2015 at 12:50 PM, Robert Metzger <rmetz...@apache.org> wrote:
> I can confirm that guava is part of the fat jar for the 2.7.0, scala 2.11
> distribution.
>
> I'll look into the issue tomorrow
>
> On Wed, Dec 9, 2015 at 7:58
r add
> another dependency that might transitively pull Guava?
>
> Stephan
>
>
> On Tue, Dec 8, 2015 at 9:25 PM, Nick Dimiduk <ndimi...@apache.org> wrote:
>
> > Hi there,
> >
> > I'm attempting to build locally a flink based on release-0.10.0 +
> > FLINK-
Nick Dimiduk created FLINK-3147:
---
Summary: HadoopOutputFormatBase should expose CLOSE_MUTEX for
subclasses
Key: FLINK-3147
URL: https://issues.apache.org/jira/browse/FLINK-3147
Project: Flink
Nick Dimiduk created FLINK-3148:
---
Summary: Support configured serializers for shipping UDFs
Key: FLINK-3148
URL: https://issues.apache.org/jira/browse/FLINK-3148
Project: Flink
Issue Type
Nick Dimiduk created FLINK-3119:
---
Summary: Remove dependency on Tuple from HadoopOutputFormat
Key: FLINK-3119
URL: https://issues.apache.org/jira/browse/FLINK-3119
Project: Flink
Issue Type
>
> Do you know if Hadoop/HBase is also using a maven plugin to fail a build on
> breaking API changes? I would really like to have such a functionality in
> Flink, because we can spot breaking changes very early.
I don't think we have maven integration for this as of yet. We release
managers
In HBase we keep an hbase-examples module with working code. Snippets from
that module are pasted into docs and referenced. Yes, we do see divergence,
especially when refactoring tools are involved. I once looked into a doc tool
for automatically extracting snippets from source code, but that turned
Nick Dimiduk created FLINK-3004:
---
Summary: ForkableMiniCluster does not call RichFunction#open
Key: FLINK-3004
URL: https://issues.apache.org/jira/browse/FLINK-3004
Project: Flink
Issue Type