[jira] [Created] (FLINK-9236) Use Apache Parent POM 19

2018-04-22 Thread Ted Yu (JIRA)
Ted Yu created FLINK-9236:
-

 Summary: Use Apache Parent POM 19
 Key: FLINK-9236
 URL: https://issues.apache.org/jira/browse/FLINK-9236
 Project: Flink
  Issue Type: Improvement
Reporter: Ted Yu


Flink is still using Apache Parent POM 18, but Apache Parent POM 19 is out.
Upgrading will also fix Javadoc generation with JDK 10+.
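
For reference, the change is a one-line version bump of the Apache parent in the root pom.xml (the coordinates below are the standard Apache parent POM; the version numbers come from this ticket):

```xml
<!-- root pom.xml: bump the Apache parent from 18 to 19 -->
<parent>
    <groupId>org.apache</groupId>
    <artifactId>apache</artifactId>
    <version>19</version>
</parent>
```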



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9235) Add Integration test for Flink-Yarn-Kerberos integration for flip-6

2018-04-22 Thread Shuyi Chen (JIRA)
Shuyi Chen created FLINK-9235:
-

 Summary: Add Integration test for Flink-Yarn-Kerberos integration 
for flip-6
 Key: FLINK-9235
 URL: https://issues.apache.org/jira/browse/FLINK-9235
 Project: Flink
  Issue Type: Test
Affects Versions: 1.5.0
Reporter: Shuyi Chen
Assignee: Shuyi Chen


We need to provide an integration test for flip-6, similar to 
YARNSessionFIFOSecuredITCase, which covers the legacy deployment mode.





Re: [Discuss] Proposing FLIP-25 - Support User State TTL Natively in Flink

2018-04-22 Thread Bowen
Hello community,

We've come up with a completely new design for Flink state TTL, documented 
here, and have run it by a few Flink PMC/committers.

What do you think? We'd love to hear your feedback.
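
To make the trade-off concrete: the timer-service route Fabian suggests below boils down to re-arming an expiration timer on every state access and evicting entries when the timer fires. A minimal self-contained model of that bookkeeping (no Flink APIs are used; all names here are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of timer-based state TTL: each write re-arms an expiration
// "timer" for the key; a firing timer evicts entries whose TTL has elapsed.
class TtlState {
    private final long ttlMillis;
    private final Map<String, String> values = new HashMap<>();
    private final Map<String, Long> expireAt = new HashMap<>();

    TtlState(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    void put(String key, String value, long nowMillis) {
        values.put(key, value);
        expireAt.put(key, nowMillis + ttlMillis); // re-arm the timer on access
    }

    // Simulates a timer firing at time 'nowMillis': evict expired entries.
    void onTimer(long nowMillis) {
        expireAt.entrySet().removeIf(e -> {
            if (e.getValue() <= nowMillis) {
                values.remove(e.getKey());
                return true;
            }
            return false;
        });
    }

    String get(String key) {
        return values.get(key);
    }
}
```

The design question in the thread is essentially whether this per-key timer bookkeeping lives in the timer service or whether the state backends filter expired entries lazily on access/compaction.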

Thanks,
Bowen


> On Wed, Feb 7, 2018 at 12:53 PM, Fabian Hueske  wrote:
> Hi Bowen,
> 
> Thanks for the proposal! I think state TTL would be a great feature!
> Actually, we have implemented this for SQL / Table API [1].
> I've added a couple of comments to the design doc.
> 
> In principle, I'm not sure if this functionality should be added to the
> state backends.
> We could also use the existing timer service which would have a few nice
> benefits (see my comments in the docs).
> 
> Best, Fabian
> 
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/table/streaming.html#idle-state-retention-time
> 
> 2018-02-06 8:26 GMT+01:00 Bowen Li :
> 
> > Hi guys,
> >
> > I want to propose a new FLIP -- FLIP-25 - Support User State TTL Natively
> > in Flink. This has been one of the most handy and most frequently requested
> > features in the Flink community. The JIRA ticket is FLINK-3089.
> >
> > I've written a rough design
> >  > uureyEr_nPAvSo/edit#>
> > doc
> >  > uureyEr_nPAvSo/edit#>,
> > and developed prototypes for both heap and rocksdb state backends.
> >
> > My question is: shall we create a FLIP page for this? Can I be granted the
> > privilege of creating pages under
> > https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals ?
> >
> > Thanks,
> > Bowen
> >



[jira] [Created] (FLINK-9234) Commons Logging is missing from shaded Flink Table library

2018-04-22 Thread Eron Wright (JIRA)
Eron Wright  created FLINK-9234:
---

 Summary: Commons Logging is missing from shaded Flink Table library
 Key: FLINK-9234
 URL: https://issues.apache.org/jira/browse/FLINK-9234
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.4.2
 Environment: jdk1.8.0_172
flink 1.4.2
Mac High Sierra
Reporter: Eron Wright 
 Attachments: repro.scala

The flink-table shaded library seems to be missing some classes from 
{{org.apache.commons.logging}} that are required by 
{{org.apache.commons.configuration}}.  Ran into the problem while using the 
external catalog support, on Flink 1.4.2.

See the attached repro, which produces:
{code}
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/flink/table/shaded/org/apache/commons/logging/Log
at 
org.apache.flink.table.catalog.ExternalTableSourceUtil$.parseScanPackagesFromConfigFile(ExternalTableSourceUtil.scala:153)
at 
org.apache.flink.table.catalog.ExternalTableSourceUtil$.(ExternalTableSourceUtil.scala:55)
at 
org.apache.flink.table.catalog.ExternalTableSourceUtil$.(ExternalTableSourceUtil.scala)
at 
org.apache.flink.table.catalog.ExternalCatalogSchema.getTable(ExternalCatalogSchema.scala:78)
at 
org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:82)
at 
org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:256)
at 
org.apache.calcite.jdbc.CalciteSchema$SchemaPlusImpl.getTable(CalciteSchema.java:561)
at 
org.apache.flink.table.api.TableEnvironment.scanInternal(TableEnvironment.scala:497)
at 
org.apache.flink.table.api.TableEnvironment.scan(TableEnvironment.scala:485)
at Repro$.main(repro.scala:17)
at Repro.main(repro.scala)
{code}

Dependencies:
{code}
compile 'org.slf4j:slf4j-api:1.7.25'
compile 'org.slf4j:slf4j-log4j12:1.7.25'
runtime 'log4j:log4j:1.2.17'

compile 'org.apache.flink:flink-scala_2.11:1.4.2'
compile 'org.apache.flink:flink-streaming-scala_2.11:1.4.2'
compile 'org.apache.flink:flink-clients_2.11:1.4.2'
compile 'org.apache.flink:flink-table_2.11:1.4.2'
{code}
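
Note that the missing class is under the {{org.apache.flink.table.shaded}} relocation, so adding a plain commons-logging dependency to the application cannot satisfy it; the fix presumably belongs in flink-table's shading configuration. A sketch, assuming the module uses the maven-shade-plugin with an explicit artifact set like other shaded Flink modules (the surrounding configuration is hypothetical):

```xml
<!-- flink-table pom.xml (sketch): include commons-logging in the shaded jar
     so the relocated ...shaded.org.apache.commons.logging.Log class exists -->
<artifactSet>
    <includes>
        <!-- existing includes, e.g. commons-configuration, stay as they are -->
        <include>commons-logging:commons-logging</include>
    </includes>
</artifactSet>
```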





[jira] [Created] (FLINK-9233) Merging state may cause runtime exception when windows trigger onMerge

2018-04-22 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-9233:
---

 Summary: Merging state may cause runtime exception when windows  
trigger onMerge
 Key: FLINK-9233
 URL: https://issues.apache.org/jira/browse/FLINK-9233
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Affects Versions: 1.4.0
Reporter: Hai Zhou


The main logic of my Flink job is as follows:
{code:java}
clickStream.coGroup(exposureStream).where(...).equalTo(...)
.window(EventTimeSessionWindows.withGap())
.trigger(new SessionMatchTrigger)
.evictor()
.apply();
{code}
{code:java}
class SessionMatchTrigger extends Trigger<..., TimeWindow> {

    private final ReducingStateDescriptor<...> stateDesc = new ReducingStateDescriptor<>(...);

    @Override
    public boolean canMerge() {
        return true;
    }

    @Override
    public void onMerge(TimeWindow window, OnMergeContext ctx) {
        ctx.mergePartitionedState(this.stateDesc);
        ctx.registerEventTimeTimer(window.maxTimestamp());
    }
}
{code}
{panel:title=detailed error logs}
java.lang.RuntimeException: Error while merging state.
 at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.mergePartitionedState(WindowOperator.java:895)
 at com.package.trigger.SessionMatchTrigger.onMerge(SessionMatchTrigger.java:56)
 at com.package.trigger.SessionMatchTrigger.onMerge(SessionMatchTrigger.java:14)
 at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onMerge(WindowOperator.java:939)
 at 
org.apache.flink.streaming.runtime.operators.windowing.EvictingWindowOperator$1.merge(EvictingWindowOperator.java:141)
 at 
org.apache.flink.streaming.runtime.operators.windowing.EvictingWindowOperator$1.merge(EvictingWindowOperator.java:120)
 at 
org.apache.flink.streaming.runtime.operators.windowing.MergingWindowSet.addWindow(MergingWindowSet.java:212)
 at 
org.apache.flink.streaming.runtime.operators.windowing.EvictingWindowOperator.processElement(EvictingWindowOperator.java:119)
 at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:207)
 at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:69)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:264)
 at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.Exception: Error while merging state in RocksDB
 at 
org.apache.flink.contrib.streaming.state.RocksDBReducingState.mergeNamespaces(RocksDBReducingState.java:186)
 at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.mergePartitionedState(WindowOperator.java:887)
 ... 12 more
 Caused by: java.lang.IllegalArgumentException: Illegal value provided for 
SubCode.
 at org.rocksdb.Status$SubCode.getSubCode(Status.java:109)
 at org.rocksdb.Status.(Status.java:30)
 at org.rocksdb.RocksDB.delete(Native Method)
 at org.rocksdb.RocksDB.delete(RocksDB.java:1110)
 at 
org.apache.flink.contrib.streaming.state.RocksDBReducingState.mergeNamespaces(RocksDBReducingState.java:143)
 ... 13 more
{panel}
 

I found the reason for this error.
Java's {{RocksDB.Status.SubCode}} was out of sync with
{{include/rocksdb/status.h:SubCode}}. When running out of disk space, this led
to an {{IllegalArgumentException}} because of an invalid status code, rather
than just returning the corresponding status code without an exception.
More details: [https://github.com/facebook/rocksdb/pull/3050]
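
The upstream fix (facebook/rocksdb#3050) makes the Java-side lookup tolerant of status bytes it does not recognize. The idea, sketched from the PR description rather than copied from RocksDB's source (the enum constants here are illustrative):

```java
// Illustrative sketch of the fix: map an unrecognized native status byte to a
// sentinel value instead of throwing IllegalArgumentException.
enum SubCode {
    None((byte) 0x0),
    MutexTimeout((byte) 0x1),
    LockTimeout((byte) 0x2),
    LockLimit((byte) 0x3),
    NoSpace((byte) 0x4),
    Unknown((byte) 0x7F);

    private final byte value;

    SubCode(byte value) {
        this.value = value;
    }

    static SubCode getSubCode(byte value) {
        for (SubCode subCode : SubCode.values()) {
            if (subCode.value == value) {
                return subCode;
            }
        }
        return Unknown; // previously: throw new IllegalArgumentException(...)
    }
}
```

With a tolerant lookup, an out-of-space delete surfaces as a status code rather than killing the task with {{IllegalArgumentException}}.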


