Hello, Theodore
Could you please move the development vectors and their prioritized
positions from *## Executive summary* to the Google doc?
Could you please also create a table in the Google doc representing
the selected directions and the people who would like to drive or participate
in that document? We need action.
>
> Looking forward to working on this (whatever that might be) ;) Also, are
> there any data supporting one direction or the other from a customer
> perspective? It would help us make more informed decisions.
>
> On Thu, Feb 23, 2017 at 2:23 PM,
I'm not sure that this is feasible; doing everything at the same time could
mean doing nothing.
I'm just afraid that the words "we will work on streaming, not on batching;
we have no committer time for this" mean that yes, we started work on
FLINK-1730, but nobody will commit this work in the end, as it
Till, thank you for your response.
But I need to clarify several points:
1) Yes, batch and batch ML is a field full of alternatives, but in my
opinion that doesn't mean we should ignore the problem of not
developing the batch part of Flink. You know, Apache Beam and Apache Mahout
both feel
Hello guys,
Maybe we will be able to focus our efforts on some E2E scenario or
showcase for Flink as an ML-supporting engine, and in that way actualize
the roadmap?
This means we can take some real-life/production problem, like fraud
detection in some area, and try to solve this problem
not assigned to anyone, we would like to take this
ticket to work on (my colleagues could try to implement it).
Further discussion of the topics related to FLINK-1730 I would like to
handle in the appropriate ticket.
Fri, Feb 10, 2017 at 19:57, Katherin Eri <katherinm...@gmail.com>:
I have created a ticket to discuss GPU-related questions further:
https://issues.apache.org/jira/browse/FLINK-5782
Fri, Feb 10, 2017 at 18:16, Katherin Eri <katherinm...@gmail.com>:
> Thank you, Trevor!
>
> You have shared very valuable points; I will consider them.
>
>
ira/browse/FLINK-1730
>
> 2) I have no idea about the GPU implementation. The SystemML mailing list
> will probably help you out there.
>
> Best regards,
> Felix
>
> 2017-02-08 14:33 GMT+01:00 Katherin Eri <katherinm...@gmail.com>:
>
> > Thank you Felix
d be feasible to run DL4J on Flink given that it
also runs on Spark. Have you already looked at it more closely?
[1] https://issues.apache.org/jira/browse/FLINK-5131
Cheers,
Till
On Tue, Feb 7, 2017 at 11:47 AM, Katherin Eri <katherinm...@gmail.com>
wrote:
> Thank you Theodore, for your re
urden would be too much
> otherwise.
>
> Regards,
> Theodore
>
> On Mon, Feb 6, 2017 at 11:26 AM, Katherin Eri <katherinm...@gmail.com>
> wrote:
>
> > Hello, guys.
> >
> > Theodore, last week I started the review of the PR:
> > https://githu
endently *from integration with DL4J*.
Could you please share your opinion regarding my questions and points?
Mon, Feb 6, 2017 at 12:51, Katherin Eri <katherinm...@gmail.com>:
Sorry, guys, I need to finish this letter first.
The full version of it will come shortly.
Mon, Feb 6, 2017 at 12:49, Katherin Eri <katherinm...@gmail.com>:
Hello, guys.
Theodore, last week I started the review of the PR:
https://github.com/apache/flink/pull/2735 related to *word2Vec for Flink*.
During this review I asked myself: why do we need to implement such a
popular algorithm as *word2vec* one more time, when there is already
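For readers outside the thread: the algorithm under discussion, word2vec, is small at its core. Here is a minimal illustrative sketch in plain Python (not code from the Flink PR; `skipgram_pairs` is a hypothetical helper) of how skip-gram training pairs are generated from a token sequence:

```python
def skipgram_pairs(tokens, window=2):
    """Return (center, context) pairs for all context words within
    `window` positions of each center token -- the input to skip-gram
    word2vec training."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # skip the center word itself
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["flink", "supports", "streaming", "and", "batch"], window=1))
```

A real implementation (including the one proposed for Flink) would then train embedding vectors over such pairs with negative sampling or hierarchical softmax; the sketch only shows the pair-generation step.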