[jira] [Closed] (FLINK-6087) The file-filter is only applied to directories in ContinuousFileMonitoringFunction

2018-06-25 Thread Yassine Marzougui (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-6087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui closed FLINK-6087.

Resolution: Invalid

> The file-filter is only applied to directories in 
> ContinuousFileMonitoringFunction
> --
>
> Key: FLINK-6087
> URL: https://issues.apache.org/jira/browse/FLINK-6087
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.2.0
>Reporter: Yassine Marzougui
>Assignee: Yassine Marzougui
>Priority: Major
>
> The file-filter is only applied to directories in 
> ContinuousFileMonitoringFunction, therefore filtering individual files when 
> enumerateNestedFiles is true is currently not possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-6515) KafkaConsumer checkpointing fails because of ClassLoader issues

2017-05-23 Thread Yassine Marzougui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021344#comment-16021344
 ] 

Yassine Marzougui commented on FLINK-6515:
--

Yes, I used flink.version=1.4-SNAPSHOT in the user code.

In addition, it looks like the fix 
(https://github.com/apache/flink/commit/6f8022e35e0a49d5dfffa0ab7fd1c964b1c1bf0d) 
didn't modify the kafka-connector code, and the exception I encountered 
actually comes from the new code:
{code}Caused by: org.apache.flink.util.FlinkRuntimeException: Could not copy 
element via serialization{code}
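For reference, the failing code path deep-copies operator state by serializing it and reading it back through a classloader-aware ObjectInputStream. Below is a minimal pure-Java sketch of that mechanism (the class and method names are mine, not Flink's): when the supplied ClassLoader cannot see the user-code classes (here, the Kafka connector's KafkaTopicPartition), resolveClass fails with exactly the ClassNotFoundException in the stack trace.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamClass;
import java.io.Serializable;

public class ClassLoaderClone {

    // Deep-copies an object via Java serialization, resolving classes against
    // an explicit ClassLoader -- a simplified analogue of the copy-via-
    // serialization step in the stack trace above. Passing a loader that
    // cannot see the user-code classes reproduces the ClassNotFoundException.
    static Object clone(Serializable obj, ClassLoader loader)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())) {
            @Override
            protected Class<?> resolveClass(ObjectStreamClass desc)
                    throws ClassNotFoundException {
                // Resolve against the supplied loader instead of the default one.
                return Class.forName(desc.getName(), false, loader);
            }
        }) {
            return ois.readObject();
        }
    }
}
```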


> KafkaConsumer checkpointing fails because of ClassLoader issues
> ---
>
> Key: FLINK-6515
> URL: https://issues.apache.org/jira/browse/FLINK-6515
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.3.0
>Reporter: Aljoscha Krettek
>Assignee: Stephan Ewen
>Priority: Blocker
> Fix For: 1.3.0, 1.4.0
>
>
> A job with Kafka and checkpointing enabled fails with:
> {code}
> java.lang.Exception: Error while triggering checkpoint 1 for Source: Custom 
> Source -> Map -> Sink: Unnamed (1/1)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1195)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.Exception: Could not perform checkpoint 1 for operator 
> Source: Custom Source -> Map -> Sink: Unnamed (1/1).
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:526)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask.triggerCheckpoint(SourceStreamTask.java:112)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1184)
>   ... 5 more
> Caused by: java.lang.Exception: Could not complete snapshot 1 for operator 
> Source: Custom Source -> Map -> Sink: Unnamed (1/1).
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:407)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1157)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1089)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:653)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:589)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:520)
>   ... 7 more
> Caused by: java.lang.RuntimeException: Could not copy instance of 
> (KafkaTopicPartition{topic='test-input', partition=0},-1).
>   at 
> org.apache.flink.runtime.state.JavaSerializer.copy(JavaSerializer.java:54)
>   at 
> org.apache.flink.runtime.state.JavaSerializer.copy(JavaSerializer.java:32)
>   at 
> org.apache.flink.runtime.state.ArrayListSerializer.copy(ArrayListSerializer.java:71)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.(DefaultOperatorStateBackend.java:368)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.deepCopy(DefaultOperatorStateBackend.java:380)
>   at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackend.snapshot(DefaultOperatorStateBackend.java:191)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:392)
>   ... 12 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:64)
>   at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1620)
>   at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
>   at java.io.ObjectInputStream.readObject0(

[jira] [Commented] (FLINK-6515) KafkaConsumer checkpointing fails because of ClassLoader issues

2017-05-23 Thread Yassine Marzougui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16021115#comment-16021115
 ] 

Yassine Marzougui commented on FLINK-6515:
--

Hi,
I'm still bumping into this issue on the release-1.3 branch and the latest 
master (1.4-SNAPSHOT, Commit: 546e2ad).
I'm getting the following exception when a checkpoint is triggered:
{code}
java.lang.Exception: Error while triggering checkpoint 1 for Source: Custom 
Source -> Flat Map -> Map (4/8)
at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1195)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.Exception: Could not perform checkpoint 1 for operator 
Source: Custom Source -> Flat Map -> Map (4/8).
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:528)
at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask.triggerCheckpoint(SourceStreamTask.java:111)
at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1184)
... 5 more
Caused by: java.lang.Exception: Could not complete snapshot 1 for operator 
Source: Custom Source -> Flat Map -> Map (4/8).
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:409)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1158)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1090)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:655)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:591)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:522)
... 7 more
Caused by: org.apache.flink.util.FlinkRuntimeException: Could not copy element 
via serialization: (KafkaTopicPartition{topic='pre-bid-urls', partition=7},-1)
at 
org.apache.flink.runtime.state.JavaSerializer.copy(JavaSerializer.java:53)
at 
org.apache.flink.runtime.state.JavaSerializer.copy(JavaSerializer.java:33)
at 
org.apache.flink.runtime.state.ArrayListSerializer.copy(ArrayListSerializer.java:74)
at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.(DefaultOperatorStateBackend.java:384)
at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend$PartitionableListState.deepCopy(DefaultOperatorStateBackend.java:396)
at 
org.apache.flink.runtime.state.DefaultOperatorStateBackend.snapshot(DefaultOperatorStateBackend.java:192)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:394)
... 12 more
Caused by: java.lang.ClassNotFoundException: 
org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at 
org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:64)
at 
java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1819)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1986)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2231)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
at 
org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:290)
at 
org.apache.flink.util.InstantiationUtil.clone(InstantiationUtil.java:371)
at 
org.apache.flink.runtime.state.JavaSerializer.copy(JavaSerializer.java:51)
... 18 more
{code}

Any idea what's going on?

> KafkaConsumer checkpointing fails because of ClassLoader issues

[jira] [Updated] (FLINK-6087) The file-filter is only applied to directories in ContinuousFileMonitoringFunction

2017-03-16 Thread Yassine Marzougui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui updated FLINK-6087:
-
Component/s: filesystem-connector

> The file-filter is only applied to directories in 
> ContinuousFileMonitoringFunction
> --
>
> Key: FLINK-6087
> URL: https://issues.apache.org/jira/browse/FLINK-6087
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.2.0
>Reporter: Yassine Marzougui
>Assignee: Yassine Marzougui
>
> The file-filter is only applied to directories in 
> ContinuousFileMonitoringFunction, therefore filtering individual files when 
> enumerateNestedFiles is true is currently not possible.





[jira] [Created] (FLINK-6087) The file-filter is only applied to directories in ContinuousFileMonitoringFunction

2017-03-16 Thread Yassine Marzougui (JIRA)
Yassine Marzougui created FLINK-6087:


 Summary: The file-filter is only applied to directories in 
ContinuousFileMonitoringFunction
 Key: FLINK-6087
 URL: https://issues.apache.org/jira/browse/FLINK-6087
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Yassine Marzougui
Assignee: Yassine Marzougui


The file-filter is only applied to directories in 
ContinuousFileMonitoringFunction, therefore filtering individual files when 
enumerateNestedFiles is true is currently not possible.
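The desired behavior can be sketched in plain Java (no Flink types; the class and method names here are hypothetical): a recursive enumeration that consults the filter for every path it visits, files as well as directories, so individual files can be excluded even when nested enumeration is enabled.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class NestedFileEnumeration {

    // Recursively enumerates files under 'dir', applying 'accept' to every
    // path -- files and directories alike. This is the behavior the issue
    // asks for; the reported code only consulted the filter for directories.
    static List<Path> enumerate(Path dir, Predicate<Path> accept) throws IOException {
        List<Path> result = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path p : stream) {
                if (!accept.test(p)) {
                    continue; // filtered out, whether file or directory
                }
                if (Files.isDirectory(p)) {
                    result.addAll(enumerate(p, accept)); // recurse into subdirectories
                } else {
                    result.add(p);
                }
            }
        }
        return result;
    }
}
```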





[jira] [Updated] (FLINK-5320) Fix result TypeInformation in WindowedStream.fold(ACC, FoldFunction, WindowFunction)

2017-01-10 Thread Yassine Marzougui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui updated FLINK-5320:
-
Fix Version/s: 1.2.0

> Fix result TypeInformation in WindowedStream.fold(ACC, FoldFunction, 
> WindowFunction)
> 
>
> Key: FLINK-5320
> URL: https://issues.apache.org/jira/browse/FLINK-5320
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.2.0
>Reporter: Yassine Marzougui
>Assignee: Yassine Marzougui
>Priority: Blocker
> Fix For: 1.2.0
>
>
> The WindowedStream.fold(ACC, FoldFunction, WindowFunction) does not correctly 
> infer the resultType.
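The type-inference problem can be made concrete with a plain-Java analogue (hypothetical names, no Flink types): the fold accumulator type ACC and the window result type R are distinct, and the result type must be derived from the window function's output rather than from the accumulator.

```java
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Function;

public class FoldResultType {

    // Pure-Java analogue of WindowedStream.fold(ACC, FoldFunction, WindowFunction):
    // elements are folded into an accumulator of type ACC, then the window
    // function maps the accumulator to the result type R. The bug in this
    // issue was that the result TypeInformation was not correctly inferred;
    // this sketch makes explicit that the result type is R, not ACC.
    static <T, ACC, R> R foldWindow(List<T> window, ACC initial,
                                    BiFunction<ACC, T, ACC> foldFn,
                                    Function<ACC, R> windowFn) {
        ACC acc = initial;
        for (T element : window) {
            acc = foldFn.apply(acc, element); // accumulate: ACC x T -> ACC
        }
        return windowFn.apply(acc); // emit: ACC -> R
    }
}
```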



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5432) ContinuousFileMonitoringFunction is not monitoring nested files

2017-01-10 Thread Yassine Marzougui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui updated FLINK-5432:
-
Fix Version/s: 1.3.0
   1.2.0

> ContinuousFileMonitoringFunction is not monitoring nested files
> ---
>
> Key: FLINK-5432
> URL: https://issues.apache.org/jira/browse/FLINK-5432
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.2.0
>Reporter: Yassine Marzougui
>Assignee: Yassine Marzougui
> Fix For: 1.2.0, 1.3.0
>
>
> The {{ContinuousFileMonitoringFunction}} does not monitor nested files even 
> if the inputformat has NestedFileEnumeration set to true. This can be fixed 
> by enabling a recursive scan of the directories in the {{listEligibleFiles}} 
> method.





[jira] [Assigned] (FLINK-5432) ContinuousFileMonitoringFunction is not monitoring nested files

2017-01-10 Thread Yassine Marzougui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui reassigned FLINK-5432:


Assignee: Yassine Marzougui

> ContinuousFileMonitoringFunction is not monitoring nested files
> ---
>
> Key: FLINK-5432
> URL: https://issues.apache.org/jira/browse/FLINK-5432
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.2.0
>Reporter: Yassine Marzougui
>Assignee: Yassine Marzougui
>
> The {{ContinuousFileMonitoringFunction}} does not monitor nested files even 
> if the inputformat has NestedFileEnumeration set to true. This can be fixed 
> by enabling a recursive scan of the directories in the {{listEligibleFiles}} 
> method.





[jira] [Created] (FLINK-5432) ContinuousFileMonitoringFunction is not monitoring nested files

2017-01-09 Thread Yassine Marzougui (JIRA)
Yassine Marzougui created FLINK-5432:


 Summary: ContinuousFileMonitoringFunction is not monitoring nested 
files
 Key: FLINK-5432
 URL: https://issues.apache.org/jira/browse/FLINK-5432
 Project: Flink
  Issue Type: Bug
  Components: filesystem-connector
Affects Versions: 1.2.0
Reporter: Yassine Marzougui


The {{ContinuousFileMonitoringFunction}} does not monitor nested files even if 
the inputformat has NestedFileEnumeration set to true. This can be fixed by 
enabling a recursive scan of the directories in the {{listEligibleFiles}} 
method.
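As a plain-Java illustration of the suggested fix (hypothetical names, no Flink code): a single-level listing misses files in subdirectories, while a recursive scan of the directories, as proposed for {{listEligibleFiles}}, picks them up.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class NestedMonitoring {

    // Single-level scan: returns only files directly under 'root',
    // mirroring the reported behavior where nested files are not monitored.
    static List<Path> flatScan(Path root) throws IOException {
        try (Stream<Path> s = Files.list(root)) {
            return s.filter(Files::isRegularFile).collect(Collectors.toList());
        }
    }

    // Recursive scan: also visits subdirectories, so nested files are
    // found -- the fix suggested in the issue description.
    static List<Path> recursiveScan(Path root) throws IOException {
        try (Stream<Path> s = Files.walk(root)) {
            return s.filter(Files::isRegularFile).collect(Collectors.toList());
        }
    }
}
```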





[jira] [Commented] (FLINK-2662) CompilerException: "Bug: Plan generation for Unions picked a ship strategy between binary plan operators."

2017-01-05 Thread Yassine Marzougui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15802570#comment-15802570
 ] 

Yassine Marzougui commented on FLINK-2662:
--

1.3-SNAPSHOT (Commit: 6ac5794)

> CompilerException: "Bug: Plan generation for Unions picked a ship strategy 
> between binary plan operators."
> --
>
> Key: FLINK-2662
> URL: https://issues.apache.org/jira/browse/FLINK-2662
> Project: Flink
>  Issue Type: Bug
>  Components: Optimizer
>Affects Versions: 0.9.1, 0.10.0
>Reporter: Gabor Gevay
>Assignee: Fabian Hueske
>Priority: Critical
> Fix For: 1.0.0, 1.2.0, 1.1.4
>
> Attachments: Bug.java, FlinkBug.scala
>
>
> I have a Flink program which throws the exception in the jira title. Full 
> text:
> Exception in thread "main" org.apache.flink.optimizer.CompilerException: Bug: 
> Plan generation for Unions picked a ship strategy between binary plan 
> operators.
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.collect(BinaryUnionReplacer.java:113)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.postVisit(BinaryUnionReplacer.java:72)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.postVisit(BinaryUnionReplacer.java:41)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:170)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:163)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:163)
>   at 
> org.apache.flink.optimizer.plan.WorksetIterationPlanNode.acceptForStepFunction(WorksetIterationPlanNode.java:194)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.preVisit(BinaryUnionReplacer.java:49)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.preVisit(BinaryUnionReplacer.java:41)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:162)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.OptimizedPlan.accept(OptimizedPlan.java:127)
>   at org.apache.flink.optimizer.Optimizer.compile(Optimizer.java:520)
>   at org.apache.flink.optimizer.Optimizer.compile(Optimizer.java:402)
>   at 
> org.apache.flink.client.LocalExecutor.getOptimizerPlanAsJSON(LocalExecutor.java:202)
>   at 
> org.apache.flink.api.java.LocalEnvironment.getExecutionPlan(LocalEnvironment.java:63)
>   at malom.Solver.main(Solver.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
> The execution plan:
> http://compalg.inf.elte.hu/~ggevay/flink/plan_3_4_0_0_without_verif.txt
> (I obtained this by commenting out the line that throws the exception)
> The code is here:
> https://github.com/ggevay/flink/tree/plan-generation-bug
> The class to run is "Solver". It needs a command line argument, which is a 
> directory where it would write output. (On first run, it generates some 
> lookuptables for a few minutes, which are then placed to /tmp/movegen)





[jira] [Commented] (FLINK-2662) CompilerException: "Bug: Plan generation for Unions picked a ship strategy between binary plan operators."

2017-01-05 Thread Yassine Marzougui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15802548#comment-15802548
 ] 

Yassine Marzougui commented on FLINK-2662:
--

I came across this bug again with the following code:

{code}
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
env.fromElements(Tuple1.of("one"))
    .join(env.fromElements(Tuple1.of("one"))
              .union(env.fromElements(Tuple1.of("two")))
              .union(env.fromElements(Tuple1.of("three")))
              .union(env.fromElements(Tuple1.of("four"))),
          JoinOperatorBase.JoinHint.BROADCAST_HASH_SECOND)
    .where(0)
    .equalTo(0)
    .print();
{code}

> CompilerException: "Bug: Plan generation for Unions picked a ship strategy 
> between binary plan operators."
> --
>
> Key: FLINK-2662
> URL: https://issues.apache.org/jira/browse/FLINK-2662
> Project: Flink
>  Issue Type: Bug
>  Components: Optimizer
>Affects Versions: 0.9.1, 0.10.0
>Reporter: Gabor Gevay
>Assignee: Fabian Hueske
>Priority: Critical
> Fix For: 1.0.0, 1.2.0, 1.1.4
>
> Attachments: Bug.java, FlinkBug.scala
>
>
> I have a Flink program which throws the exception in the jira title. Full 
> text:
> Exception in thread "main" org.apache.flink.optimizer.CompilerException: Bug: 
> Plan generation for Unions picked a ship strategy between binary plan 
> operators.
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.collect(BinaryUnionReplacer.java:113)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.postVisit(BinaryUnionReplacer.java:72)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.postVisit(BinaryUnionReplacer.java:41)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:170)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:163)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:163)
>   at 
> org.apache.flink.optimizer.plan.WorksetIterationPlanNode.acceptForStepFunction(WorksetIterationPlanNode.java:194)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.preVisit(BinaryUnionReplacer.java:49)
>   at 
> org.apache.flink.optimizer.traversals.BinaryUnionReplacer.preVisit(BinaryUnionReplacer.java:41)
>   at 
> org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:162)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at 
> org.apache.flink.optimizer.plan.OptimizedPlan.accept(OptimizedPlan.java:127)
>   at org.apache.flink.optimizer.Optimizer.compile(Optimizer.java:520)
>   at org.apache.flink.optimizer.Optimizer.compile(Optimizer.java:402)
>   at 
> org.apache.flink.client.LocalExecutor.getOptimizerPlanAsJSON(LocalExecutor.java:202)
>   at 
> org.apache.flink.api.java.LocalEnvironment.getExecutionPlan(LocalEnvironment.java:63)
>   at malom.Solver.main(Solver.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
> The execution plan:
> http://compalg.inf.elte.hu/~ggevay/flink/plan_3_4_0_0_without_verif.txt
> (I obtained this by commenting out the line that throws the exception)
> The code is here:
> https://github.com/ggevay/flink/tree/plan-generation-bug
> The class to run is "Solver". It needs a command line argument, which is a 
> directory where it would write output. (On first run, it generates some 
> lookuptables for a few minutes, which are then placed to /tmp/movegen)





[jira] [Resolved] (FLINK-4967) RockDB state backend fails on Windows

2017-01-05 Thread Yassine Marzougui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui resolved FLINK-4967.
--
Resolution: Resolved

> RockDB state backend fails on Windows
> -
>
> Key: FLINK-4967
> URL: https://issues.apache.org/jira/browse/FLINK-4967
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.1.3
>Reporter: Yassine Marzougui
> Fix For: 1.2.0
>
>
> Using the RocksDBStateBackend on Windows leads to the following exception: 
> {{java.lang.NoClassDefFoundError: Could not initialize class 
> org.rocksdb.RocksDB}}, which is caused by {{java.lang.RuntimeException: 
> librocksdbjni-win64.dll was not found inside JAR.}}
> As mentioned in https://github.com/facebook/rocksdb/issues/1302, this can 
> be fixed by upgrading the RocksDB dependencies, since version 4.9 was the 
> first to include a Windows build of RocksDB.





[jira] [Commented] (FLINK-4967) RockDB state backend fails on Windows

2017-01-05 Thread Yassine Marzougui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15802523#comment-15802523
 ] 

Yassine Marzougui commented on FLINK-4967:
--

I just checked with 1.2 and can confirm the RocksDB backend now works on 
Windows. Closing the issue.

> RockDB state backend fails on Windows
> -
>
> Key: FLINK-4967
> URL: https://issues.apache.org/jira/browse/FLINK-4967
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.1.3
>Reporter: Yassine Marzougui
> Fix For: 1.2.0
>
>
> Using the RocksDBStateBackend on Windows leads to the following exception: 
> {{java.lang.NoClassDefFoundError: Could not initialize class 
> org.rocksdb.RocksDB}}, which is caused by {{java.lang.RuntimeException: 
> librocksdbjni-win64.dll was not found inside JAR.}}
> As mentioned in https://github.com/facebook/rocksdb/issues/1302, this can 
> be fixed by upgrading the RocksDB dependencies, since version 4.9 was the 
> first to include a Windows build of RocksDB.





[jira] [Updated] (FLINK-4967) RockDB state backend fails on Windows

2017-01-05 Thread Yassine Marzougui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui updated FLINK-4967:
-
Fix Version/s: 1.2.0

> RockDB state backend fails on Windows
> -
>
> Key: FLINK-4967
> URL: https://issues.apache.org/jira/browse/FLINK-4967
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.1.3
>Reporter: Yassine Marzougui
> Fix For: 1.2.0
>
>
> Using the RocksDBStateBackend on Windows leads to the following exception: 
> {{java.lang.NoClassDefFoundError: Could not initialize class 
> org.rocksdb.RocksDB}}, which is caused by {{java.lang.RuntimeException: 
> librocksdbjni-win64.dll was not found inside JAR.}}
> As mentioned in https://github.com/facebook/rocksdb/issues/1302, this can 
> be fixed by upgrading the RocksDB dependencies, since version 4.9 was the 
> first to include a Windows build of RocksDB.





[jira] [Updated] (FLINK-5407) Savepoint for iterative Task fails.

2017-01-05 Thread Yassine Marzougui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui updated FLINK-5407:
-
Attachment: SavepointBug.java

I attached a simplified version of the job reproducing the bug and confirmed it 
against the latest master (Commit: 6ac5794). I noticed that if I remove the 
lines:
{code}
.keyBy(0)
.flatMap(new DuplicateFilter()).setParallelism(1)
{code}
the job doesn't fail, but the savepoint is stuck in the state:
Triggering savepoint for job 4f4d0b4308aabc21a243ec34e4c193ba.
Waiting for response...

> Savepoint for iterative Task fails.
> ---
>
> Key: FLINK-5407
> URL: https://issues.apache.org/jira/browse/FLINK-5407
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0
>Reporter: Stephan Ewen
> Fix For: 1.2.0, 1.3.0
>
> Attachments: SavepointBug.java
>
>
> Flink 1.2-SNAPSHOT (Commit: 5b54009) on Windows.
> Triggering a savepoint for a streaming job, both the savepoint and the job 
> failed.
> The job failed with the following exception:
> {code}
> java.lang.RuntimeException: Error while triggering checkpoint for 
> IterationSource-7 (1/1)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1026)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.util.concurrent.FutureTask.run(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.createOperatorIdentifier(StreamTask.java:767)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.access$500(StreamTask.java:115)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.createStreamFactory(StreamTask.java:986)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:956)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:583)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:551)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:511)
>   at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1019)
>   ... 5 more
> And the savepoint failed with the following exception:
> Using address /127.0.0.1:6123 to connect to JobManager.
> Triggering savepoint for job 153310c4a836a92ce69151757c6b73f1.
> Waiting for response...
> 
>  The program finished with the following exception:
> java.lang.Exception: Failed to complete savepoint
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anon$7.apply(JobManager.scala:793)
> at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anon$7.apply(JobManager.scala:782)
> at 
> org.apache.flink.runtime.concurrent.impl.FlinkFuture$6.recover(FlinkFuture.java:263)
> at akka.dispatch.Recover.internal(Future.scala:267)
> at akka.dispatch.japi$RecoverBridge.apply(Future.scala:183)
> at akka.dispatch.japi$RecoverBridge.apply(Future.scala:181)
> at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185)
> at scala.util.Try$.apply(Try.scala:161)
> at scala.util.Failure.recover(Try.scala:185)
> at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
> at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
> at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
> at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
> at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
> at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
> at 
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>  

[jira] [Commented] (FLINK-5320) Fix result TypeInformation in WindowedStream.fold(ACC, FoldFunction, WindowFunction)

2016-12-13 Thread Yassine Marzougui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15744775#comment-15744775
 ] 

Yassine Marzougui commented on FLINK-5320:
--

[~aljoscha] I just opened a pull request for it. It looks like I missed it only in 
WindowedStream. However, I have not updated the tests yet; I will change them.

> Fix result TypeInformation in WindowedStream.fold(ACC, FoldFunction, 
> WindowFunction)
> 
>
> Key: FLINK-5320
> URL: https://issues.apache.org/jira/browse/FLINK-5320
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.2.0
>Reporter: Yassine Marzougui
>Assignee: Yassine Marzougui
>Priority: Blocker
>
> The WindowedStream.fold(ACC, FoldFunction, WindowFunction) does not correctly 
> infer the resultType.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5320) Fix result TypeInformation in WindowedStream.fold(ACC, FoldFunction, WindowFunction)

2016-12-12 Thread Yassine Marzougui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui updated FLINK-5320:
-
Affects Version/s: 1.2.0

> Fix result TypeInformation in WindowedStream.fold(ACC, FoldFunction, 
> WindowFunction)
> 
>
> Key: FLINK-5320
> URL: https://issues.apache.org/jira/browse/FLINK-5320
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.2.0
>Reporter: Yassine Marzougui
>Assignee: Yassine Marzougui
>
> The WindowedStream.fold(ACC, FoldFunction, WindowFunction) does not correctly 
> infer the resultType.





[jira] [Updated] (FLINK-5320) Fix result TypeInformation in WindowedStream.fold(ACC, FoldFunction, WindowFunction)

2016-12-12 Thread Yassine Marzougui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui updated FLINK-5320:
-
Description: The WindowedStream.fold(ACC, FoldFunction, WindowFunction) 
does not correctly infer the resultType.  (was: The WindowedStream.fold(ACC, 
FoldFunction, WindowFunction) calls fold(initialValue, foldFunction, function, 
foldAccumulatorType, inputType) instead of fold(initialValue, foldFunction, 
function, foldAccumulatorType, resultType))

> Fix result TypeInformation in WindowedStream.fold(ACC, FoldFunction, 
> WindowFunction)
> 
>
> Key: FLINK-5320
> URL: https://issues.apache.org/jira/browse/FLINK-5320
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Reporter: Yassine Marzougui
>Assignee: Yassine Marzougui
>
> The WindowedStream.fold(ACC, FoldFunction, WindowFunction) does not correctly 
> infer the resultType.





[jira] [Created] (FLINK-5320) Fix result TypeInformation in WindowedStream.fold(ACC, FoldFunction, WindowFunction)

2016-12-12 Thread Yassine Marzougui (JIRA)
Yassine Marzougui created FLINK-5320:


 Summary: Fix result TypeInformation in WindowedStream.fold(ACC, 
FoldFunction, WindowFunction)
 Key: FLINK-5320
 URL: https://issues.apache.org/jira/browse/FLINK-5320
 Project: Flink
  Issue Type: Bug
  Components: Streaming
Reporter: Yassine Marzougui
Assignee: Yassine Marzougui


The WindowedStream.fold(ACC, FoldFunction, WindowFunction) calls 
fold(initialValue, foldFunction, function, foldAccumulatorType, inputType) 
instead of fold(initialValue, foldFunction, function, foldAccumulatorType, 
resultType)
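
The distinction matters because the declared type information determines how downstream operators treat the window result; if it describes the fold accumulator/input instead of the WindowFunction's output, the wrong type is assumed. The following is a minimal plain-Java sketch of that idea, not Flink's actual API: the method name and the use of a `Class` token as a stand-in for TypeInformation are illustrative assumptions.

```java
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Function;

// Illustrative stand-in for fold(initialValue, foldFunction, function, ..., resultType):
// the type token must describe the window function's *result*, not its input.
class ResultTypeSketch {
    static <T, ACC, R> R foldWindow(ACC initial,
                                    BiFunction<ACC, T, ACC> foldFn,
                                    Function<ACC, R> windowFn,
                                    List<T> elements,
                                    Class<R> resultType) {
        ACC acc = initial;
        for (T e : elements) {
            acc = foldFn.apply(acc, e);
        }
        R result = windowFn.apply(acc);
        // Stand-in for choosing a serializer from the declared type:
        // passing a token describing the accumulator type would fail this cast.
        return resultType.cast(result);
    }

    public static void main(String[] args) {
        // The accumulator is an Integer count; the declared result type is String.
        String out = foldWindow(0, (acc, e) -> acc + 1,
                                acc -> "count=" + acc,
                                List.of("a", "b", "c"),
                                String.class);
        System.out.println(out); // prints count=3
    }
}
```

Passing `Integer.class` (the accumulator type) in place of `String.class` would fail the cast, which is the analogue of the inference bug described above.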





[jira] [Commented] (FLINK-3869) WindowedStream.apply with FoldFunction is too restrictive

2016-11-22 Thread Yassine Marzougui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15687298#comment-15687298
 ] 

Yassine Marzougui commented on FLINK-3869:
--

Thank you for the feedback [~sjwiesman].

> WindowedStream.apply with FoldFunction is too restrictive
> -
>
> Key: FLINK-3869
> URL: https://issues.apache.org/jira/browse/FLINK-3869
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Yassine Marzougui
> Fix For: 1.2.0
>
>
> Right now we have this signature:
> {code}
> public <R> SingleOutputStreamOperator<R> apply(R initialValue, FoldFunction<T, R> foldFunction, WindowFunction<R, R, K, W> function) {
> {code}
> but we should have this signature to allow users to return a type other than 
> the fold accumulator type from their window function:
> {code}
> public <ACC, R> SingleOutputStreamOperator<R> apply(ACC initialValue, FoldFunction<T, ACC> foldFunction, WindowFunction<ACC, R, K, W> function) {
> {code}
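
To make the difference concrete, here is a plain-Java sketch of the proposed shape, where the fold works on ACC and the window function maps ACC to an independent result type R. This is not Flink's API; the helper method and names are hypothetical.

```java
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Function;

// Sketch of the proposed two-type-parameter shape: the initial value and
// fold function use ACC, while the window function produces a separate R.
class ApplySignatureSketch {
    static <T, ACC, R> R apply(ACC initialValue,
                               BiFunction<ACC, T, ACC> foldFunction,
                               Function<ACC, R> windowFunction,
                               List<T> input) {
        ACC acc = initialValue;
        for (T element : input) {
            acc = foldFunction.apply(acc, element);
        }
        return windowFunction.apply(acc);
    }

    public static void main(String[] args) {
        // ACC = Integer (a running sum) but R = String: exactly the case the
        // old single-type signature (where both had to be R) cannot express.
        String summary = apply(0, Integer::sum, sum -> "sum=" + sum,
                               List.of(1, 2, 3));
        System.out.println(summary); // prints sum=6
    }
}
```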





[jira] [Comment Edited] (FLINK-3869) WindowedStream.apply with FoldFunction is too restrictive

2016-11-21 Thread Yassine Marzougui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683873#comment-15683873
 ] 

Yassine Marzougui edited comment on FLINK-3869 at 11/21/16 3:35 PM:


[~aljoscha] Yes, I could fix the Scala API in the end too.


was (Author: ymarzougui):
[~aljoscha] Yes, I could fix the Scala API en the end too.

> WindowedStream.apply with FoldFunction is too restrictive
> -
>
> Key: FLINK-3869
> URL: https://issues.apache.org/jira/browse/FLINK-3869
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Yassine Marzougui
>
> Right now we have this signature:
> {code}
> public <R> SingleOutputStreamOperator<R> apply(R initialValue, FoldFunction<T, R> foldFunction, WindowFunction<R, R, K, W> function) {
> {code}
> but we should have this signature to allow users to return a type other than 
> the fold accumulator type from their window function:
> {code}
> public <ACC, R> SingleOutputStreamOperator<R> apply(ACC initialValue, FoldFunction<T, ACC> foldFunction, WindowFunction<ACC, R, K, W> function) {
> {code}





[jira] [Commented] (FLINK-3869) WindowedStream.apply with FoldFunction is too restrictive

2016-11-21 Thread Yassine Marzougui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683873#comment-15683873
 ] 

Yassine Marzougui commented on FLINK-3869:
--

[~aljoscha] Yes, I could fix the Scala API in the end too.

> WindowedStream.apply with FoldFunction is too restrictive
> -
>
> Key: FLINK-3869
> URL: https://issues.apache.org/jira/browse/FLINK-3869
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Yassine Marzougui
>
> Right now we have this signature:
> {code}
> public <R> SingleOutputStreamOperator<R> apply(R initialValue, FoldFunction<T, R> foldFunction, WindowFunction<R, R, K, W> function) {
> {code}
> but we should have this signature to allow users to return a type other than 
> the fold accumulator type from their window function:
> {code}
> public <ACC, R> SingleOutputStreamOperator<R> apply(ACC initialValue, FoldFunction<T, ACC> foldFunction, WindowFunction<ACC, R, K, W> function) {
> {code}





[jira] [Comment Edited] (FLINK-3869) WindowedStream.apply with FoldFunction is too restrictive

2016-11-21 Thread Yassine Marzougui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669039#comment-15669039
 ] 

Yassine Marzougui edited comment on FLINK-3869 at 11/21/16 3:35 PM:


[~aljoscha] would it be OK if I incorporate the changes you proposed only to 
the Java API? The Scala compiler is complaining about too many arguments for 
method reduce and overloaded method value fold with alternatives..

EDIT : I was able to change the Scala API without problems after doing an mvn 
clean install.


was (Author: ymarzougui):
[~aljoscha] would it be OK if I incorporate the changes you proposed only to 
the Java API? The Scala compiler is complaining about too many arguments for 
method reduce and overloaded method value fold with alternatives..


> WindowedStream.apply with FoldFunction is too restrictive
> -
>
> Key: FLINK-3869
> URL: https://issues.apache.org/jira/browse/FLINK-3869
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Yassine Marzougui
>
> Right now we have this signature:
> {code}
> public <R> SingleOutputStreamOperator<R> apply(R initialValue, FoldFunction<T, R> foldFunction, WindowFunction<R, R, K, W> function) {
> {code}
> but we should have this signature to allow users to return a type other than 
> the fold accumulator type from their window function:
> {code}
> public <ACC, R> SingleOutputStreamOperator<R> apply(ACC initialValue, FoldFunction<T, ACC> foldFunction, WindowFunction<ACC, R, K, W> function) {
> {code}





[jira] [Commented] (FLINK-3869) WindowedStream.apply with FoldFunction is too restrictive

2016-11-15 Thread Yassine Marzougui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669039#comment-15669039
 ] 

Yassine Marzougui commented on FLINK-3869:
--

[~aljoscha] would it be OK if I incorporate the changes you proposed only to 
the Java API? The Scala compiler is complaining about too many arguments for 
method reduce and overloaded method value fold with alternatives..


> WindowedStream.apply with FoldFunction is too restrictive
> -
>
> Key: FLINK-3869
> URL: https://issues.apache.org/jira/browse/FLINK-3869
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Yassine Marzougui
>
> Right now we have this signature:
> {code}
> public <R> SingleOutputStreamOperator<R> apply(R initialValue, FoldFunction<T, R> foldFunction, WindowFunction<R, R, K, W> function) {
> {code}
> but we should have this signature to allow users to return a type other than 
> the fold accumulator type from their window function:
> {code}
> public <ACC, R> SingleOutputStreamOperator<R> apply(ACC initialValue, FoldFunction<T, ACC> foldFunction, WindowFunction<ACC, R, K, W> function) {
> {code}





[jira] [Updated] (FLINK-4967) RockDB state backend fails on Windows

2016-10-30 Thread Yassine Marzougui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui updated FLINK-4967:
-
Labels:   (was: easyfix)

> RockDB state backend fails on Windows
> -
>
> Key: FLINK-4967
> URL: https://issues.apache.org/jira/browse/FLINK-4967
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.1.3
>Reporter: Yassine Marzougui
>
> Using the RocksDBStateBackend on Windows leads to the following exception: 
> {{java.lang.NoClassDefFoundError: Could not initialize class 
> org.rocksdb.RocksDB}}, which is caused by: {{java.lang.RuntimeException: 
> librocksdbjni-win64.dll was not found inside JAR.}}
> As mentioned in https://github.com/facebook/rocksdb/issues/1302, this can 
> be fixed by upgrading the RocksDB dependency, since version 4.9 was the 
> first release to include a Windows build of RocksDB.





[jira] [Created] (FLINK-4967) RockDB state backend fails on Windows

2016-10-30 Thread Yassine Marzougui (JIRA)
Yassine Marzougui created FLINK-4967:


 Summary: RockDB state backend fails on Windows
 Key: FLINK-4967
 URL: https://issues.apache.org/jira/browse/FLINK-4967
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Affects Versions: 1.1.3
Reporter: Yassine Marzougui


Using the RocksDBStateBackend on Windows leads to the following exception: 
{{java.lang.NoClassDefFoundError: Could not initialize class 
org.rocksdb.RocksDB}}, which is caused by: {{java.lang.RuntimeException: 
librocksdbjni-win64.dll was not found inside JAR.}}
As mentioned in https://github.com/facebook/rocksdb/issues/1302, this can be 
fixed by upgrading the RocksDB dependency, since version 4.9 was the first 
release to include a Windows build of RocksDB.
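
For illustration, assuming a Maven build that wants to pull in a Windows-capable RocksDB, the dependency bump might look like the following. The coordinates and version shown are a hypothetical override (the `org.rocksdb:rocksdbjni` artifact as published on Maven Central), not necessarily the exact fix applied in Flink.

```xml
<!-- Hypothetical override: rocksdbjni 4.9+ ships librocksdbjni-win64.dll -->
<dependency>
  <groupId>org.rocksdb</groupId>
  <artifactId>rocksdbjni</artifactId>
  <version>4.9.0</version>
</dependency>
```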





[jira] [Updated] (FLINK-2662) CompilerException: "Bug: Plan generation for Unions picked a ship strategy between binary plan operators."

2016-10-25 Thread Yassine Marzougui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yassine Marzougui updated FLINK-2662:
-
Attachment: Bug.java

I ran into the same problem with Flink 1.1.3 when trying to perform 
partitionByRange(1).sortPartition(1, Order.DESCENDING) on the union of DataSets.
I have attached a program reproducing the bug.

> CompilerException: "Bug: Plan generation for Unions picked a ship strategy 
> between binary plan operators."
> --
>
> Key: FLINK-2662
> URL: https://issues.apache.org/jira/browse/FLINK-2662
> Project: Flink
>  Issue Type: Bug
>  Components: Optimizer
>Affects Versions: 0.9.1, 0.10.0
>Reporter: Gabor Gevay
>Assignee: Fabian Hueske
>Priority: Critical
> Fix For: 1.0.0, 1.2.0, 1.1.3
>
> Attachments: Bug.java, FlinkBug.scala
>
>
> I have a Flink program which throws the exception in the jira title. Full 
> text:
> Exception in thread "main" org.apache.flink.optimizer.CompilerException: Bug: Plan generation for Unions picked a ship strategy between binary plan operators.
>   at org.apache.flink.optimizer.traversals.BinaryUnionReplacer.collect(BinaryUnionReplacer.java:113)
>   at org.apache.flink.optimizer.traversals.BinaryUnionReplacer.postVisit(BinaryUnionReplacer.java:72)
>   at org.apache.flink.optimizer.traversals.BinaryUnionReplacer.postVisit(BinaryUnionReplacer.java:41)
>   at org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:170)
>   at org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:163)
>   at org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:163)
>   at org.apache.flink.optimizer.plan.WorksetIterationPlanNode.acceptForStepFunction(WorksetIterationPlanNode.java:194)
>   at org.apache.flink.optimizer.traversals.BinaryUnionReplacer.preVisit(BinaryUnionReplacer.java:49)
>   at org.apache.flink.optimizer.traversals.BinaryUnionReplacer.preVisit(BinaryUnionReplacer.java:41)
>   at org.apache.flink.optimizer.plan.DualInputPlanNode.accept(DualInputPlanNode.java:162)
>   at org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>   at org.apache.flink.optimizer.plan.OptimizedPlan.accept(OptimizedPlan.java:127)
>   at org.apache.flink.optimizer.Optimizer.compile(Optimizer.java:520)
>   at org.apache.flink.optimizer.Optimizer.compile(Optimizer.java:402)
>   at org.apache.flink.client.LocalExecutor.getOptimizerPlanAsJSON(LocalExecutor.java:202)
>   at org.apache.flink.api.java.LocalEnvironment.getExecutionPlan(LocalEnvironment.java:63)
>   at malom.Solver.main(Solver.java:66)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
> The execution plan:
> http://compalg.inf.elte.hu/~ggevay/flink/plan_3_4_0_0_without_verif.txt
> (I obtained this by commenting out the line that throws the exception)
> The code is here:
> https://github.com/ggevay/flink/tree/plan-generation-bug
> The class to run is "Solver". It needs a command line argument, which is a 
> directory where it would write output. (On first run, it generates some 
> lookuptables for a few minutes, which are then placed to /tmp/movegen)


