Hey,
Some initial feedback from my side:
I think this is a very important problem to deal with, as a lot of applications
depend on it. I like the proposed runtime model; that is probably a good
way to handle this task, and it is very clear what is happening.
My main concern is how to handle this from
Hey,
I have encountered a strange error in the Kafka consumer. This has only
happened once on my local machine so far, but I just wanted to let you know.
java.lang.Exception: The periodic offset committer encountered an error:
org/apache/flink/shaded/org/apache/curator/HandleHolder$2
at
com.king.rbea.fl
or clarifying.
> > > >
> > > > Just had a look at the PR. The fix seems to be quite straightforward.
> > > > If you can validate the fix tomorrow and we include it, we could
> > > > release 1.0.2 early next week.
> > > >
> > > > 20
Fabian, I think Ufuk meant about 2 weeks for the next bugfix release, not the RC.
I have actually prepared a PR that should fix this problem:
https://github.com/apache/flink/pull/1919 regardless of how we decide.
I can only test this tomorrow though in the production environment.
I can work around thi
I found a potentially blocker issue:
https://issues.apache.org/jira/browse/FLINK-3790
What do you think?
Gyula
Aljoscha Krettek wrote (on Wed, 20 Apr 2016, 15:30):
> Chiwan is right. I just downloaded the release binary again and verified
> that the problem mentioned in the issue
> I think that is actually a cool way to kick off an addition to the system.
> > Gives you a lot of flexibility and releasing and testing...
> >
> > It helps, though, to upload maven artifacts for it!
> >
> > On Tue, Sep 15, 2015 at 7:18 PM, Gyula Fóra wrote:
> >
eckpoint/savepoint of
> the DB state backend is therefore not a self-contained unit.
>
> Cheers,
> Aljoscha
> > On 14 Mar 2016, at 12:30, Ufuk Celebi wrote:
> >
> > On Mon, Mar 14, 2016 at 11:20 AM, Gyula Fóra wrote:
> >> We developed (and contributed) the DB st
Hello everyone!
I would like to start a discussion regarding the future of the Database
state backend for the streaming API. The main question is whether we want
to keep this as a flink-contrib module and continue development as part of
Flink, or move it to an outside library.
Just as a
Hi,
I think this is an important question that will surely come up in some
cases in the future.
I see your point Robert, that we have promised API compatibility for 1.x.y
releases, but I am not sure that this should cover things that are clearly
just unintended errors in the API from our side.
I
I opened this JIRA; if anyone has good examples, please add them in the
comments:
https://issues.apache.org/jira/browse/FLINK-3566
Gyula
Gyula Fóra wrote (on Wed, 2 Mar 2016, 15:54):
> Okay, I will open a JIRA issue
>
> Gyula
>
> Timo Walther wrote (on 2016
Okay, I will open a JIRA issue
Gyula
Timo Walther wrote (on Wed, 2 Mar 2016, 15:42):
> Can you open an issue with an example of your custom TypeInfo? I will
> then open a suitable PR for it.
>
>
> On 02.03.2016 15:33, Gyula Fóra wrote:
> > Would that work
So that the TypeExtractor is more extensible. This would also solve your
> problem. What do you think?
>
> On 02.03.2016 15:00, Gyula Fóra wrote:
> > Hi!
> >
> > Yes, I think that sounds good :) We just need to make sure that this
> > works with things like the
16 11:34, Aljoscha Krettek wrote:
> > I think you have a point. Another user also just ran into problems with
> the TypeExtractor. (The “Java Maps and TypeInformation” email).
> >
> > So let’s figure out what needs to be changed to make it work for all
> people.
> >
>
Hey,
I brought up this issue a couple of months back, but I would like to raise it
again.
I think the current way of validating the input type of UDFs against the
output type of the preceding operators is too aggressive and breaks a lot of
code that should otherwise work.
This issue appears all the
Hi,
I agree that getting Flink 1.0.0 out soon would be great as Flink is in a
pretty solid state right now.
I wonder whether it would make sense to include an out-of-core state
backend in streaming core that can be used with partitioned/window states.
I think if we are releasing 1.0.0 we should h
ersion}
> <module>flink-shaded-hadoop2</module>
> <shading-artifact-module.name>flink-shaded-hadoop2</shading-artifact-module.name>
>
> <module>flink-yarn</module>
>
> <module>flink-fs-tests</module>
>
> If the backend is not in a separate maven module, you can use reflection.
> Check out the RollingSink#r
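To illustrate the reflection approach mentioned above, a minimal sketch (my
own illustration rather than the actual RollingSink code), assuming
FileSystem#truncate (Hadoop 2 only) is the feature being probed for:

import java.lang.reflect.Method;

public class Hadoop2FeatureCheck {

    /** Looks up FileSystem#truncate reflectively; returns null on Hadoop 1.x. */
    public static Method findTruncate() {
        try {
            Class<?> fsClass = Class.forName("org.apache.hadoop.fs.FileSystem");
            return fsClass.getMethod("truncate",
                    Class.forName("org.apache.hadoop.fs.Path"), long.class);
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return null; // feature absent: fall back to a Hadoop-1-compatible path
        }
    }
}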
Hi,
While developing the out-of-core state backend that will store state
directly to HDFS (either TFiles or BloomMapFiles), I realised that some
file formats and features I use are Hadoop 2.x only.
What is the suggested way to handle features that use the Hadoop 2.x API? Can
these be excluded from th
would suddenly work after the failure. We could try and swap the lock
> Object by a "ReentrantLock(true)" and see what would happen.
>
>
> Stephan
>
>
> On Fri, Jan 8, 2016 at 11:49 AM, Gyula Fóra wrote:
>
> > Hey,
> >
> > I have encountered a we
+1 for protecting the master branch.
I also don't see any reason why anyone should force-push there.
Gyula
Fabian Hueske wrote (on Wed, 13 Jan 2016, 11:07):
> Hi everybody,
>
> Lately, ASF Infra has changed the write permissions of all Git repositories
> twice.
>
> Originally, it wa
Hey,
I have encountered a weird issue in a checkpointing test I am trying to
write. The logic is the same as in the previous checkpointing tests;
there is an OnceFailingReducer.
My problem is that before the reducer fails, my job cannot take any
snapshots. The Runnables executing the checkpointi
Hi,
+1
I think it would be a good idea to separate the two state backends. I think
you are right: in most cases the new partitioned state implementations will
benefit from this, as it removes a lot of additional overhead (although
sometimes it's nice to have the two together, for instance if they both
> Is that what you had in mind?
>
> Greetings,
> Stephan
>
>
> On Sat, Jan 2, 2016 at 4:53 PM, Gyula Fóra wrote:
>
> > Ok, I could figure out the problem, it was my fault :). The issue was
> that
> > I was running a short testing job and the sources fini
ld be good to give some info to the user in case the source is
finished when the checkpoint is triggered.
On the bright side, it seems to work well, also with the savepoints :)
Cheers
Gyula
Gyula Fóra wrote (on Sat, 2 Jan 2016, 11:57):
> Hey,
>
> I am trying to checkpoint my
Hey,
I am trying to checkpoint my streaming job to S3, but it seems that the
checkpoints never complete, and I also don't get any error in the logs.
The state backend apparently connects to S3 properly, as it creates the
following file in the given S3 directory:
95560b1acf5307bc3096020071c83230_$f
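For context, a minimal sketch of the kind of setup being described (the
bucket and path are placeholders, not from the original job):

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class S3CheckpointSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Point the filesystem state backend at an S3 URI and checkpoint every 10s.
        env.setStateBackend(new FsStateBackend("s3://my-bucket/flink-checkpoints"));
        env.enableCheckpointing(10000);
        // ... job definition and env.execute() would follow here
    }
}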
g libs) for a talk at some point.
>
> @Till: Do you still have the code? Could you share it with Gyula?
>
> On Wed, Dec 16, 2015 at 4:22 PM, Gyula Fóra wrote:
>
> > Hey Guys,
> >
> > Has anyone tried to setup the Flink scala shell with Jupyter? I would
>
Hey Guys,
Has anyone tried to setup the Flink scala shell with Jupyter? I would
assume the logic is similar to Zeppelin.
The reason I am asking is that we have a Jupyter cluster that runs
Python and Scala (2.11 I believe) and Spark works on it, so we figured it
would be good to add support f
Would the Reducing/Folding states just be some API sugar on top of what we
have now (ValueState), or does it have some added functionality (like
incremental checkpoints for list states)?
Gyula
Aljoscha Krettek wrote (on Mon, 14 Dec 2015, 11:03):
> While enhancing the state interfaces
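To make the question concrete: a reducing state could in principle be plain
sugar over a single stored value, folding each added element into a running
aggregate. An illustrative sketch only, not Flink's implementation:

import org.apache.flink.api.common.functions.ReduceFunction;

public class ReducingStateSketch<T> {

    private final ReduceFunction<T> reducer;
    private T current; // stands in for the cell a ValueState<T> would hold

    public ReducingStateSketch(ReduceFunction<T> reducer) {
        this.reducer = reducer;
    }

    /** Fold the new element into the running aggregate. */
    public void add(T value) throws Exception {
        current = (current == null) ? value : reducer.reduce(current, value);
    }

    public T get() {
        return current;
    }
}

In that reading it is mostly API sugar; the added-functionality question
(e.g. incremental checkpoints) is what the backends would have to answer.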
>
> Stephan
>
>
> On Mon, Dec 7, 2015 at 9:45 AM, Gyula Fóra wrote:
>
> > Hey guys,
> >
> > Is there any way to monitor the backpressure in the Flink job? I find it
> > hard to debug slow operators because of the backpressure mechanism so it
> > w
Hey guys,
Is there any way to monitor the backpressure in the Flink job? I find it
hard to debug slow operators because of the backpressure mechanism so it
would be good to get some info out of the network layer on what exactly
caused the backpressure.
For example:
task1 -> task2 -> task3 -> tas
Yes, please
Vasiliki Kalavri wrote (on Wed, 25 Nov 2015, 14:37):
> So, do we all agree that the current behavior is not correct? Shall I open
> a JIRA about this?
>
> On 25 November 2015 at 13:58, Gyula Fóra wrote:
>
> > Well it kind of depends on what defin
e possible. Not sure
> why
> > this is not permitted.
> >
> > "stream.union(stream)" would contain each element twice, so should either
> > give an error or actually union (or duplicate) elements...
> >
> > Stephan
> >
> >
> > On Wed, Nov
Yes, I am not sure if this is the intentional behaviour. I think you are
supposed to be able to do the things you described.
stream.union(stream.map(..)) and things like this are fair operations. Also,
maybe stream.union(stream) should just give stream instead of an error.
Could someone comment on th
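To pin down the cases under discussion, a small runnable sketch (my own
example, not from the thread): the first union is the one that should
clearly work; the self-union is the one whose semantics are being debated.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UnionSelfExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<Integer> stream = env.fromElements(1, 2, 3);

        // Union of a stream with a transformed copy of itself: a fair operation.
        stream.union(stream.map(x -> x + 10)).print();

        // Under "union duplicates elements", stream.union(stream) would emit
        // each element twice rather than raise an error.
        env.execute("union-self-example");
    }
}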
is abstract already ;)
>
> On 23 November 2015 at 21:54, Gyula Fóra wrote:
>
> > I think it is not too bad to only have the Right/Left classes. You can
> then
> > write it like this:
> >
> > Either e1 = new Left<>("");
> > Either e2 = new
would rather not block the minor release on this issue. We don't
> know if we have a valid fix for it. Let's get out the minor release
> first and have another one when we have the fix.
>
> On Tue, Nov 24, 2015 at 11:34 AM, Gyula Fóra wrote:
> > Hi,
> > Regarding my
Hi,
Regarding my previous comment for the Kafka/Zookeeper issue, let's discuss
if this is critical enough so we want to include it in this release or the
next bugfix.
I will try to further investigate the reason the job failed in the first
place (we suspect broker failure)
Cheers,
Gyula
Vyachesl
Hi,
I vote -1 for the RC due to the fact that the ZooKeeper deadlock issue was
not completely solved.
Robert could find the problem with the dependency management plugin and has
opened a PR:
[FLINK-3067] Enforce zkclient 0.7 for Kafka
https://github.com/apache/flink/pull/1399
Cheers,
Gyula
Vy
lper classes for Either. Hence, I believe they should
> be private. Maybe we could rename the methods to createLeft() /
> createRight() ?
>
> On 23 November 2015 at 20:58, Gyula Fóra wrote:
>
> > I was actually not suggesting to drop the e.left() method but instead the
> >
ow about renaming to getLeft() / getRight()?
>
> -V.
>
> On 23 November 2015 at 09:55, Gyula Fóra wrote:
>
> > Hey guys,
> >
> > I know this should have been part of the PR discussion but it kind of
> > slipped through the cracks :)
> >
> > I thi
s the latency, because the
> timeWindowAll has to wait for the next timeWindow before it can close
> the previous one. So if the first timeWindow is 10s, it takes 20s until
> you have a result, although it can't change after 10s. You know what I mean?
>
> Cheers,
>
> Konst
Hey guys,
I know this should have been part of the PR discussion but it kind of
slipped through the cracks :)
I think it might be useful to change the method name for Either.left(value)
to Either.Left(value) (or drop the method completely).
The reason is that it is slightly awkward to use it wit
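For reference, a small sketch of the convention that came out of this
discussion, using Flink's Either type (upper-case static factories beside
lower-case instance accessors):

import org.apache.flink.types.Either;

public class EitherNaming {
    public static void main(String[] args) {
        // Construct with the upper-case factories...
        Either<String, Integer> failure = Either.Left("something went wrong");
        Either<String, Integer> success = Either.Right(42);

        // ...and read back with the lower-case accessors.
        if (success.isRight()) {
            System.out.println(success.right()); // prints 42
        }
        if (failure.isLeft()) {
            System.out.println(failure.left());
        }
    }
}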
. We can
> always do a new bug fix release.
>
> – Ufuk
>
> On Fri, Nov 20, 2015 at 11:25 AM, Gyula Fóra wrote:
>
> > Hi all,
> >
> > Wouldn't you think that it would make sense to wait a week or so to find
> > all the hot issues with the current release
Hi all,
Wouldn't you think that it would make sense to wait a week or so to find all
the hot issues with the current release?
To me it feels a little bit like we are rushing this out, and we will have
almost the same situation afterwards.
I might be wrong, but I think people should get a chance to try thi
> > >> https://issues.apache.org/jira/browse/KAFKA-824
> > >>
> > >> This has been fixed for Kafka’s 0.9.0 version.
> > >>
> > >> We should investigate why the job gets stuck though. Do you have a
> stack
> > >> trace or any logs availabl
Should I open a JIRA for this?
Gyula Fóra wrote (on Tue, 17 Nov 2015, 11:30):
> Thanks for the quick response and thorough explanation :)
>
> Gyula
>
> Robert Metzger wrote (on Tue, 17 Nov 2015, 11:27):
>
>> I would try that approach first
>
Hey guys,
I ran into an issue with the Kafka consumers.
I am reading from more than 50 topics with parallelism 1, and while running
the job I got the following exception during the checkpoint notification
(offset committing):
java.lang.RuntimeException: Error while confirming checkpoint
at org
running with your job?
> >
> > My second best guess is that it was thrown by another component running
> > Netty (maybe a Hadoop client?).
> >
> > – Ufuk
> >
> > PS Thanks for sharing the logs with me. :)
> >
> > > On 14 Nov 2015, at 18:14,
Hi guys,
I have a Flink Streaming job running for about a day now without any errors
and then I got this in the job manager log:
15:37:49,905 WARN io.netty.channel.DefaultChannelPipeline
- An exceptionCaught() event was fired, and it reached at
the tail of the pipeline. It usually mean
Hey,
Is there any other way to cancel a job besides ./bin/flink cancel jobId?
This doesn't seem to work when a job cannot be scheduled and is retrying
over and over again.
The exception I get:
13:58:11,240 INFO org.apache.flink.runtime.jobmanager.JobManager
- Status of job 0c895d22c632
This seems to be an issue only occurring when using Java 8 lambdas, which is
still super annoying but may not be a release blocker.
Gyula Fóra wrote (on Thu, 12 Nov 2015, 15:38):
> I am not sure if this issue affects the release or maybe I am just doing
> something wrong:
I am not sure if this issue affects the release or maybe I am just doing
something wrong: https://issues.apache.org/jira/browse/FLINK-3006
Fabian Hueske wrote (on Thu, 12 Nov 2015, 14:51):
> The failing tests on Windows should *NOT* block the release, IMO. ;-)
>
> 2015-11-12 14:48 GMT
> Timo
>
>
> On 12.11.2015 13:16, Gyula Fóra wrote:
> > Hey,
> >
> > I get a weird error when I try to execute my job on the cluster. Locally
> > this works fine but running it from the command line fails during
> > type extraction:
> >
> > input1.un
located and the logged amount is an
> upper bound.
>
> Cheers, Fabian
>
> 2015-11-12 13:37 GMT+01:00 Gyula Fóra :
>
> > Hey guys,
> >
> > Is it normal that when I start the cluster with start-cluster-streaming.sh
> > out of the 16 GB TM memory 10.6 GB bec
Hey guys,
Is it normal that when I start the cluster with start-cluster-streaming.sh,
out of the 16 GB TM memory 10.6 GB becomes Flink-managed? (I get pretty much
the same number when I use start-cluster.sh.)
I thought that Flink would only use a very small fraction in streaming mode.
Cheers,
Gyula
Hey,
I get a weird error when I try to execute my job on the cluster. Locally
this works fine but running it from the command line fails during
type extraction:
input1.union(input2, input3).map(Either::Left).returns(eventOrLongType);
This fails when trying to extract the output type from the map
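A possible workaround sketch, assuming the Event type and the eventOrLongType
variable from the snippet above: an anonymous MapFunction gives the
TypeExtractor a concrete class to inspect instead of a method reference.

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.types.Either;

DataStream<Either<Event, Long>> result = input1.union(input2, input3)
    .map(new MapFunction<Event, Either<Event, Long>>() {
        @Override
        public Either<Event, Long> map(Event value) {
            return Either.Left(value);
        }
    })
    .returns(eventOrLongType);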
shut down with external checkpoint would also be important,
> to stop and resume from exactly there.
>
>
> Stephan
>
>
> On Wed, Nov 11, 2015 at 3:19 PM, Gyula Fóra wrote:
>
> > Hey guys,
> >
> > With recent discussions around being able to shutdown
Hey,
Yes, what you wrote should work. You can alternatively use
TypeExtractor.getForObject(modelMapInit) to extract the type information.
I also like to implement my own custom type info for HashMaps and the other
types and use that.
Cheers,
Gyula
Martin Neumann wrote (on Wed, 11 Nov 2015
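On the getForObject route, a minimal sketch (the map's concrete shape here
is assumed, not from the original email):

import java.util.HashMap;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.TypeExtractor;

public class TypeInfoFromSample {
    public static void main(String[] args) {
        HashMap<String, Double> modelMapInit = new HashMap<>();
        // Derive the type information from a sample object instead of
        // spelling it out by hand.
        TypeInformation<HashMap<String, Double>> typeInfo =
                TypeExtractor.getForObject(modelMapInit);
        System.out.println(typeInfo);
    }
}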
Hey guys,
With recent discussions around being able to shutdown and restart streaming
jobs from specific checkpoints, there is another issue that I think needs
tackling.
As far as I understand, when a streaming job finishes, the tasks are not
notified of the last checkpoints, and also jobs don't ta
Hey All,
I am wondering what the reason is for validating Function input types.
This might become an issue if the user wants to write their own TypeInfo for
a type that Flink also handles natively.
Let's say I want to implement my own TupleTypeInfo that handles null
values, and I pass this type
could sample the records
> that after two re-partitionings return to the same JVM, so we would not
> have clock misalignment. Still thinking about good ways to have a general
> purpose latency measurement mechanism.
>
> If you have any ideas there, let me know!
>
> Greetings,
>
Hey guys,
I am trying to look at the throughput of my Flink Streaming job over time.
Is there any way to extract this information from the dashboard, or is it
only possible to view the cumulative statistics at given time points?
Also, I am wondering whether there is any info about the latency in th
>
> > Trying to reproduce this error now. I'm assuming this is 0.10-SNAPSHOT?
> >
> > Cheers,
> > Max
> >
> > On Wed, Nov 4, 2015 at 1:49 PM, Gyula Fóra wrote:
> >> Hey,
> >>
> >> Running the following simple applicat
Hey,
Running the following simple application gives me an error:
// just counting by key
streamOfIntegers.keyBy(x -> x)
    .timeWindow(Time.milliseconds(3000))
    .fold(0, (c, next) -> c + 1)
    .print();
Executing this gives the following error:
"No initial value was serialized for the fold window fu
rowse/FLINK-2965
> >
> > On Wed, Nov 4, 2015 at 12:00 PM, Gyula Fóra
> wrote:
> > > done
> > >
> > > Till Rohrmann wrote (on Wed, 4 Nov 2015, 11:19):
> > >
> > >> Could you please open or updat
done
Till Rohrmann wrote (on Wed, 4 Nov 2015, 11:19):
> Could you please open or update the corresponding JIRA issue if existing.
>
> On Wed, Nov 4, 2015 at 11:14 AM, Gyula Fóra wrote:
>
> > Hey,
> >
> > I found an interesting failure in the Kafk
Hey,
I found an interesting failure in the KafkaITCase; I am not sure if this has
happened before.
It received a duplicate record and failed on that (not the usual ZooKeeper
timeout thing).
Logs are here:
https://s3.amazonaws.com/archive.travis-ci.org/jobs/89171477/log.txt
Cheers,
Gyula
Hey guys,
Have we disabled the default input copying after all? I don't remember
seeing a JIRA or PR for this (maybe I just missed it).
And if not, do we want this in the 0.10 release?
Cheers,
Gyula
On Fri, Oct 2, 2015 at 7:57 PM, Till Rohrmann wrote:
> Do we know what kind of impact the non-
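As far as I know, the copying behaviour in question maps to the object-reuse
switch on the execution config; a minimal sketch of flipping it (whether the
default changed for 0.10 is exactly the open question above):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ObjectReuseToggle {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // With object reuse enabled, records handed between chained operators
        // are not defensively copied.
        env.getConfig().enableObjectReuse();
    }
}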
t also be stuck on a
> lock, in which case it would be waiting for the lock holder to terminate.
>
> Do you have the traces from other threads as well, so we could look which
> one actually is stuck while holding the lock?
>
> Greetings,
> Stephan
>
>
> On Mon, Oct 19, 201
Thanks Max for the effort, this is going to be huge :)
Unfortunately, I have to say -1.
FLINK-2888 and FLINK-2824 are blockers from my point of view.
Cheers,
Gyula
Vasiliki Kalavri wrote (on Wed, 21 Oct 2015, 20:07):
> Awesome! Thanks Max :))
>
> I have a couple of questions:
> - wh
is an artificial corner case, or actually an issue. The
> solution is theoretically simple: Use a fair lock, but we would need to
> break the data sources API and switch from "synchronized(Object)" to a fair
> "java.concurrent.ReentrantLock".
>
> Greetings,
> St
Hey All,
I think there is some serious issue with the checkpoints. Running a simple
program like this won't complete any checkpoints:
StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(2);
env.enableCheckpointing(5000);
env.generateSequence(
I think the nice thing about a common code style is that everyone can set
the template in the IDE and use the formatting commands.
Matthias's suggestion makes this practically impossible, so -1 for mixed
tabs/spaces from my side.
Matthias J. Sax wrote (on Wed, 21 Oct 2015, 11:46):
> I
+1 for both :)
Till Rohrmann wrote (on Tue, 20 Oct 2015, 14:58):
> I like the idea to have a bit stricter code style which will increase code
> maintainability and makes it easier for people to go through the code.
> Furthermore, it will relieve us from code style comments while review
Hey guys,
Has anyone ever got something similar working with the Kafka sources?
11:52:48,838 WARN org.apache.flink.runtime.taskmanager.Task
- Task 'Source: Kafka[***] (3/4)' did not react to cancelling signal,
but is stuck in method:
org.apache.flink.streaming.runtime.tasks.StreamTask.inv
> figure this out and discuss it.
>
> The iterations will anyways need some work for the next release to
> integrate them with checkpointing and watermarks, so would you agree that
> we tackle this then as part of an effort to advance the iteration feature
> as a whole?
>
> Greet
I think this
> should break ordering as well, in your case.
>
> On Tue, 6 Oct 2015 at 10:39 Gyula Fóra wrote:
>
> > Hi,
> >
> > This is just a workaround, which actually breaks input order from my
> > source. I think the iteration construction should be reworked to s
Alright, I am creating one.
Stephan Ewen wrote (on Wed, 7 Oct 2015, 15:44):
> I think the error message could have been better, though...
>
> This actually warrants a JIRA issue...
>
> On Wed, Oct 7, 2015 at 2:44 PM, Gyula Fóra wrote:
>
> > Thanks!
>
often has not
> enough memory reserved for the stack space to create enough threads (1-2
> threads per task)...
>
> On Wed, Oct 7, 2015 at 2:13 PM, Gyula Fóra wrote:
>
> > Hey guys,
> >
> > I am writing a job which involves creating many different sources to read
>
Hey guys,
I am writing a job which involves creating many different sources to read
data from (in this case 80 sources with a parallelism of 8 each, running
locally on my Mac). Unfortunately, I cannot create fewer.
The problem is that the job fails while deploying the tasks with the
following exc
m(2).iterate()
> DataStream mapped = it.map(...)
> it.closeWith(mapped.partitionByHash(someField))
>
> The input is rebalanced to the map inside the iteration as in your example
> and the feedback should be partitioned by hash.
>
> Cheers,
> Aljoscha
>
>
> On Tue, 6 O
Hey,
This question is mainly targeted towards Aljoscha, but maybe someone can
help me out here:
I think the way feedback partitioning is handled does not work; let me
illustrate with a simple example:
IterativeStream it = ... (parallelism 1)
DataStream mapped = it.map(...) (parallelism 2)
// this
):
> Yes, pretty clear. I guess semantically it's still a co-group, but
> implemented slightly differently.
>
> Thanks!
>
> --
> Gianmarco
>
> On 9 September 2015 at 15:37, Gyula Fóra wrote:
>
> > Hey Gianmarco,
> >
> > So the implementation lo
missing it. :)
> > It would be a convenient addition though.
> >
> > Best,
> >
> > Marton
> >
> > On Sun, Sep 13, 2015 at 8:59 PM, Gyula Fóra wrote:
> >
> > > Hey All!
> > >
> > > Is there a proper way of using a Flink Streaming
Hey All!
Is there a proper way of using a Flink Streaming source with event
timestamps and watermarks? What I mean here is: instead of implementing a
custom SourceFunction, use an existing one and provide some timestamp
extractor (like the one currently used for time windows), which will also
autom
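For anyone hitting the same question later: a minimal sketch using the
extractor hook that later Flink releases expose (MyEvent and its
getTimestamp() are placeholders, not from the original question):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

public class TimestampAssignment {
    /** Attach timestamps and watermarks to a stream from any existing source. */
    public static DataStream<MyEvent> withEventTime(DataStream<MyEvent> events) {
        return events.assignTimestampsAndWatermarks(
                new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(5)) {
                    @Override
                    public long extractTimestamp(MyEvent element) {
                        return element.getTimestamp(); // assumed accessor
                    }
                });
    }
}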
ed state system can handle and also how
> it
> > behaves with larger numbers of keys. The KVStore is just an interface and
> > the really heavy lifting is done by the state system so this would be a
> > good test for it.
> >
> >
> > On Tue, 8 Sep 2015 at
This sounds good +1 from me as well :)
Till Rohrmann wrote (on Wed, 9 Sep 2015, 10:40):
> +1 for a milestone release with the TypeInformation issues fixed. I'm
> working on it.
>
> On Tue, Sep 8, 2015 at 9:32 PM, Stephan Ewen wrote:
>
> > Great!
> >
> > I'd like to push one more co
s of phone calls to lists
> > of IDs of participating users; etc.
> > So I imagine they would like this a lot. (At least, if they were
> > considering moving to Flink :))
> >
> > Best,
> > Gabor
> >
> >
> >
> >
> > 2015-09-08 13:35 G
Hey All,
Over the last couple of days I have been playing around with the idea of
building a streaming key-value store abstraction, using stateful streaming
operators, that can be used seamlessly within Flink Streaming programs.
Operations executed on this KV store would be fault-tolerant, as it
integrat
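As a rough illustration only (the interface and names below are mine, purely
to make the idea concrete): queries and updates are themselves streams, so
every operation stays inside the fault-tolerant streaming runtime.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;

public interface StreamKVStore<K, V> {

    /** Apply a stream of (key, value) updates to the store. */
    void put(DataStream<Tuple2<K, V>> updates);

    /** Answer a stream of point queries with a stream of (key, value) results. */
    DataStream<Tuple2<K, V>> get(DataStream<K> queries);
}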
Welcome! :)
On Thu, Aug 20, 2015 at 12:34 PM Matthias J. Sax <
mj...@informatik.hu-berlin.de> wrote:
> Congrats! The squirrel "army" is growing fast. :)
>
> On 08/20/2015 11:18 AM, Robert Metzger wrote:
> > The Project Management Committee (PMC) for Apache Flink has asked Chesnay
> > Schepler to
Honestly, I don't think the partitioned state changes have anything to do
with the stability, only the reworked test case, which now tests proper
exactly-once semantics, which was missing before.
Stephan Ewen wrote (on Tue, 4 Aug 2015, 12:12):
> Yes, the build stability is super serious right now.
> Nested iterations should still be possible...
>
> On Mon, Aug 3, 2015 at 10:21 AM, Gyula Fóra wrote:
>
> > It is critical for many applications (such as SAMOA or Storm
> compatibility)
> > to build arbitrary cyclic flows. If your suggestion covers all cases (for
> > in
ion(head_tail2))
> > >
> > > We have one head/tail pair with parallelism 2 and one with parallelism 4.
> > >
> > > Of the top of my head, I don't know what happens in this case though:
> > >
> > > val iter = ds.iteration()
> > >
ere, can you help me with what you mean by
> "different iteration heads and tails" ?
>
> An iteration does not have one parallel head and one parallel tail?
>
> On Fri, Jul 31, 2015 at 6:52 PM, Gyula Fóra wrote:
>
> > Maybe you can reuse some of the logic that is cu
Hi Matthias,
I think Aljoscha is preparing a nice PR that completely reworks the
DataStream classes and the information they actually contain. I don't think
it's a good idea to mess things up before he gets a chance to open the PR.
Also, I don't see a well-supported reason for moving the setParall
I'll figure them
> out.
>
> P.S. The code is not well documented yet, but the base class for
> transformations is StreamTransformation. From there anyone who want's to
> check it out can find the other transformations.
>
> On Fri, 31 Jul 2015 at 17:17 Gyula Fóra w
t saying that it
> does translate and run. Your observation is true. :D
>
> I'm wondering whether it makes sense to allow users to have iteration heads
> with differing parallelism, in fact.
>
> On Fri, 31 Jul 2015 at 16:40 Gyula Fóra wrote:
>
> > I still don't get
am:
> https://gist.github.com/aljoscha/45aaf62b2a7957cfafd5
>
> It works, and the implementation is very simple, actually.
>
> On Fri, 31 Jul 2015 at 14:30 Gyula Fóra wrote:
>
> > I mean that the head operators have different parallelism:
> >
> > IterativeDataStre
he distribution of the original elements (at least IMHO). Maybe I'm
> wrong there, though.
>
> To me it seems intuitive that I get the feedback at the head they way I
> specify it at the tail. But maybe that's also just me... :D
>
> On Fri, 31 Jul 2015 at 14:00 Gyula Fóra
;>>>>>
> > >>>>>>> the java side it is just a byte array and all the comparisons are
> > >> also
> > >>>>>>> performed on these byte arrays. I think partitioning and sort
> > should
> > >>>>>
Hey,
I am not sure what the intuitive behaviour here is. As you are not applying
a transformation on the feedback stream but passing it to a closeWith method,
I thought it was somehow natural that it gets the partitioning of the
iteration input, but maybe it's not intuitive.
If others also think that
: Thu, 30 Jul 2015, 22:04):
> because it still goes through the Java API that requires some kind of
> type information. imagine a java api program where you omit all generic
> types, it just wouldn't work as of now.
>
> On 30.07.2015 21:17, Gyula Fóra wrote:
> > Hey!
>