[ 
https://issues.apache.org/jira/browse/MAHOUT-1818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15234321#comment-15234321
 ] 

Andrew Palumbo commented on MAHOUT-1818:
----------------------------------------

For future reference: after tweaking some Flink vars, the problem with {{dals}} 
turns out to be a serialization error:
{code}
Process finished with exit code 137
Caused by: java.lang.Exception: Deserializing the InputFormat 
([(0,{0:0.3947476722883563,1:-0.08695028358267716,2:-1.0574297632219802,3:0.3268090996516988,4:-1.3667553319818917,5:-0.1794776700908003,6:1.078276508767426,7:-1.19520500669697,8:-0.48920817822415197,9:-0.01611590341576673,10:-0.3924584320254835,11:1.1084504280408736,12:-0.7766818602582699,13:-1.745148020967139,14:-0.30702403178017207,15:1.0870667203881104,16:0.5743916990799559,17:1.1374342122090273,18:-1.0523085600170734,19:-1.3638541557908512,20:-1.3315774874522164,21:0.13871074941128161,22:-0.1
 ...
{code}
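For context on why serializing the InputFormat blows up the heap: plain Java serialization of a dense 500 x 500 double matrix is on the order of 2 MB of payload alone, and {{ByteArrayOutputStream}} doubles its buffer as it grows, so peak heap use during serialization can be a multiple of the final size. A minimal sketch of the effect (plain JDK only, not the actual Mahout/Flink code path; the class name is made up for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class SerializationSize {

    // Serialize a dense n x n double matrix with default Java serialization
    // (the same mechanism Flink's InstantiationUtil.serializeObject uses)
    // and return the resulting byte count.
    static int serializedSize(int n) throws IOException {
        double[][] matrix = new double[n][n];
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(matrix);
        }
        return bos.size();
    }

    public static void main(String[] args) throws IOException {
        // 500 x 500 is the default matrix size in
        // DistributedDecompositionsSuiteBase; prints roughly 2 MB.
        System.out.println(serializedSize(500));
    }
}
```

This is why the test passes at 50 x 50 (about 20 KB per serialized matrix) but dies at 500 x 500 once the matrix gets embedded in the job graph configuration.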

> dals test failing in Flink-bindings
> -----------------------------------
>
>                 Key: MAHOUT-1818
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1818
>             Project: Mahout
>          Issue Type: Bug
>          Components: Flink
>    Affects Versions: 0.11.2
>            Reporter: Andrew Palumbo
>            Assignee: Andrew Palumbo
>            Priority: Blocker
>             Fix For: 0.12.0
>
>
> {{dals}} test fails in Flink bindings with an OOM.  Numerically, the test 
> passes when the matrix being decomposed in the test is lowered to 50 x 50, 
> but the default size of the matrix in the 
> {{DistributedDecompositionsSuiteBase}} is 500 x 500. 
> {code}
> java.lang.OutOfMemoryError: Java heap space
>       at java.util.Arrays.copyOf(Arrays.java:2271)
>       at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
>       at 
> java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
>       at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
>       at 
> java.io.ObjectOutputStream$BlockDataOutputStream.writeBlockHeader(ObjectOutputStream.java:1893)
>       at 
> java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1874)
>       at 
> java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1785)
>       at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1188)
>       at 
> java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
>       at 
> java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
>       at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
>       at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
>       at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
>       at 
> org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:300)
>       at 
> org.apache.flink.util.InstantiationUtil.writeObjectToConfig(InstantiationUtil.java:252)
>       at 
> org.apache.flink.runtime.operators.util.TaskConfig.setStubWrapper(TaskConfig.java:273)
>       at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.createDataSourceVertex(JobGraphGenerator.java:893)
>       at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.preVisit(JobGraphGenerator.java:286)
>       at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.preVisit(JobGraphGenerator.java:109)
>       at 
> org.apache.flink.optimizer.plan.SourcePlanNode.accept(SourcePlanNode.java:86)
>       at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>       at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>       at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>       at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>       at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>       at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>       at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>       at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>       at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>       at 
> org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
>       at 
> org.apache.flink.optimizer.plan.OptimizedPlan.accept(OptimizedPlan.java:128)
>       at 
> org.apache.flink.optimizer.plantranslate.JobGraphGenerator.compileJobGraph(JobGraphGenerator.java:188)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
