Re: [DISCUSS] Backwards compatibility policy.

2017-06-28 Thread Sebastian Schelter
I haven't closely followed the discussion so far, but isn't it Apache
policy that releases should stay backwards compatible with all previous
releases within the same major version?

-s

2017-06-28 12:26 GMT+02:00 Kostas Kloudas :

> I agree that 1.1 compatibility is the most important “pain point”, as
> compatibility with the rest of the versions follows a more “systematic”
> approach.
>
> I think that discarding compatibility with 1.1 will clean up some parts
> of the codebase significantly.
>
> Kostas
>
> > On Jun 27, 2017, at 6:03 PM, Stephan Ewen  wrote:
> >
> > I think that this discussion is probably motivated especially by the
> > "legacy state" handling of Flink 1.1.
> > The biggest gain in codebase and productivity would come from dropping
> > 1.1 compatibility in Flink 1.4.
> >
> > My gut feeling is that this is reasonable. We support two versions back,
> > which means that users can skip one upgrade, but not two.
> >
> > From what I can tell, users are usually eager to upgrade. They don't do it
> > immediately, but as soon as the new release is a bit battle tested.
> >
> > I would expect skipping two entire versions to be rare enough to be okay
> > with a solution that requires a bit more effort from the user:
> > you can upgrade from Flink 1.1 to 1.4 by loading the 1.1 savepoint into
> > Flink 1.2, taking a savepoint (1.2 format), and resuming from that in Flink 1.4.
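[Editor's note: the two-hop upgrade path described above would look roughly like this with the Flink CLI. Job IDs, jar names, and savepoint paths are placeholders for illustration, not real values.]

```shell
# Step 1: on the running Flink 1.1 cluster, trigger a savepoint
# (written in the 1.1 format).
bin/flink savepoint <job-id>

# Step 2: start the job on a Flink 1.2 cluster from the 1.1 savepoint,
# then take a fresh savepoint, which is written in the 1.2 format.
bin/flink run -s <savepoint-path-1.1-format> my-job.jar
bin/flink savepoint <job-id>

# Step 3: resume on a Flink 1.4 cluster from the 1.2-format savepoint.
bin/flink run -s <savepoint-path-1.2-format> my-job.jar
```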
> >
> > Greetings,
> > Stephan
> >
> >
> > On Tue, Jun 27, 2017 at 12:01 PM, Stefan Richter <
> > s.rich...@data-artisans.com> wrote:
> >
> >> For many parts of the code, I would agree with Aljoscha. However, I can
> >> also see notable exceptions, such as maintaining support for the legacy
> >> state from Flink <=1.1. For example, I think dropping support for this
> >> can simplify new developments such as fast local recovery or state
> >> replication quite a bit, because this is a special case that runs through
> >> a lot of code from the backends to the JM. So besides this general
> >> discussion about a backwards compatibility policy, do you think it could
> >> make sense to start another concrete discussion about whether we still
> >> must or want backwards compatibility with Flink 1.1 in Flink 1.4?
> >>
> >>> On 29.05.2017 at 12:08, Aljoscha Krettek  wrote:
> >>>
> >>> Normally, I’m the first one to suggest removing everything that is not
> >>> absolutely necessary in order to have a clean code base. On this issue,
> >>> though, I think we should support restoring from old savepoints as far
> >>> back as possible if it does not make the code completely unmaintainable.
> >>> Some users might jump versions, and always forcing them to go through
> >>> every version from their old version to the current version doesn’t
> >>> seem feasible and might put off some users.
> >>>
> >>> So far, I think the burden of supporting restore from 1.1 is still
> >>> small enough, and with each new version the changes between versions
> >>> become smaller. The changes from 1.2 to the upcoming 1.3 are quite
> >>> minimal, I think.
> >>>
> >>> Best,
> >>> Aljoscha
>  On 24. May 2017, at 17:58, Ted Yu  wrote:
> 
>  bq. about having LTS versions once a year
> 
>  +1 to the above.
> 
>  There may be various reasons users don't want to upgrade (after new
>  releases come out). We should give such users enough flexibility on
> the
>  upgrade path.
> 
>  Cheers
> 
>  On Wed, May 24, 2017 at 8:39 AM, Kostas Kloudas <
>  k.klou...@data-artisans.com> wrote:
> 
> > Hi all,
> >
> > For the proposal of having a third-party tool, I agree with Ted.
> > Maintaining it would be a big and far from trivial effort.
> >
> > Now for the window of backwards compatibility, I would argue that even
> > if for some users 4 months (1 release) is not enough to bump their Flink
> > version, the proposed policy guarantees that there will always be a path
> > from any old version to any subsequent one.
> >
> > Finally, for the proposal about having LTS versions once a year, I am
> > not sure whether this will reduce or create more overhead. If I understand
> > the plan correctly, this would mean that the community would have to
> > maintain 2 or 3 LTS versions plus the last two major ones, right?
> >
> >> On May 22, 2017, at 7:31 PM, Ted Yu  wrote:
> >>
> >> For #2, it is difficult to achieve:
> >>
> >> a. maintaining savepoint migration is non-trivial and should be
> >> reviewed by domain experts
> >> b. how to certify such a third-party tool
> >>
> >> Cheers
> >>
> >> On Mon, May 22, 2017 at 3:04 AM, 施晓罡  wrote:
> >>
> >>> Hi all,
> >>>
> >>> Currently, we spend a lot of effort on the maintenance of compatibility.
> >>> There is much code in the runtime to support the migration of savepoints
> >>> (most of which is deprecated), making it hard to focus on the
> 

Re: FlinkML on slack

2017-06-20 Thread Sebastian Schelter
I'd also like to get an invite to this slack, my email is s...@apache.org

Best,
Sebastian

2017-06-20 8:37 GMT+02:00 Jark Wu :

> Hi Stavros,
> Could you please invite me to the FlinkML slack channel as well? My email
> is: imj...@gmail.com
>
> Thanks,
> Jark
>
> 2017-06-20 13:58 GMT+08:00 Shaoxuan Wang :
>
> > Hi Stavros,
> > Can I get an invitation for the slack channel.
> >
> > Thanks,
> > Shaoxuan
> >
> >
> > On Thu, Jun 8, 2017 at 3:56 AM, Stavros Kontopoulos <
> > st.kontopou...@gmail.com> wrote:
> >
> > > Hi all,
> > >
> > > We took the initiative to create the organization for FlinkML on Slack
> > > (thanks Eron).
> > > There is now a channel for model-serving
> > > fdEXPsPYPEywsE/edit#>.
> > > Another is coming for flink-jpmml.
> > > You are invited to join the channels and the efforts. @Gabor @Theo
> > > please consider adding channels for the other efforts there as well.
> > >
> > > FlinkML on Slack (https://flinkml.slack.com/)
> > >
> > > Details for the efforts here: Flink Roadmap doc
> > > d06MIRhahtJ6dw/edit#>
> > >
> > > GitHub (https://github.com/FlinkML)
> > >
> > >
> > > Stavros
> > >
> >
>


[jira] [Created] (FLINK-2026) Error message in count() only jobs

2015-05-15 Thread Sebastian Schelter (JIRA)
Sebastian Schelter created FLINK-2026:
-

 Summary: Error message in count() only jobs
 Key: FLINK-2026
 URL: https://issues.apache.org/jira/browse/FLINK-2026
 Project: Flink
  Issue Type: Bug
  Components: Core
Reporter: Sebastian Schelter
Priority: Minor


If I run a job that only calls count() on a dataset (which is a valid data flow 
IMHO), Flink executes the job but complains that no sinks are defined.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-2025) Support booleans in CSV reader

2015-05-15 Thread Sebastian Schelter (JIRA)
Sebastian Schelter created FLINK-2025:
-

 Summary: Support booleans in CSV reader
 Key: FLINK-2025
 URL: https://issues.apache.org/jira/browse/FLINK-2025
 Project: Flink
  Issue Type: New Feature
  Components: Core
Reporter: Sebastian Schelter


It would be great if Flink allowed reading booleans from CSV files, e.g. 1 for 
true and 0 for false.
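[Editor's note: a minimal sketch of the parsing rule this feature request implies, accepting "1"/"0" as well as "true"/"false". The class and method names are assumptions for illustration, not actual Flink parser code.]

```java
public class BooleanFieldParser {
    /**
     * Parses a CSV field into a boolean. Accepts "1" and "true"
     * (case-insensitive) as true, "0" and "false" as false, and
     * rejects anything else. Illustrative only, not Flink API.
     */
    public static boolean parseBoolean(String field) {
        String v = field.trim();
        if (v.equals("1") || v.equalsIgnoreCase("true")) {
            return true;
        }
        if (v.equals("0") || v.equalsIgnoreCase("false")) {
            return false;
        }
        throw new IllegalArgumentException("Not a boolean field: " + field);
    }

    public static void main(String[] args) {
        System.out.println(parseBoolean("1"));     // prints "true"
        System.out.println(parseBoolean("false")); // prints "false"
    }
}
```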


