[ 
https://issues.apache.org/jira/browse/MAHOUT-1456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13937586#comment-13937586
 ] 

Suneel Marthi commented on MAHOUT-1456:
---------------------------------------

I don't think this issue is related to running on Hadoop 1.x or 2.x. From the 
exception that's been reported, it's not a Hadoop issue at all. 
This issue has been reported by other users too - see 
http://pastebin.com/rCNyTypf from another user, Jessie Wright, who reported 
this earlier.

Seems like some bad string in the Wikipedia dataset. If it's consistently 
failing while reading the same chunk of data, I would run this with debug or 
verbose logging to see what the offending character sequence in the input 
text is.
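
As one way to narrow that down outside of Mahout, a standalone pass over the 
decompressed dump can report where the first invalid UTF-8 sequence appears. 
This is only a minimal sketch; the class name, buffer sizes, and the 
assumption that the dump has already been decompressed are mine, not anything 
in the Mahout code:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CoderResult;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

/** Reports the byte offset of the first invalid UTF-8 sequence in a file. */
public class FindBadUtf8 {
  public static void main(String[] args) throws IOException {
    if (args.length != 1) {
      System.err.println("Usage: java FindBadUtf8 <decompressed-dump.xml>");
      return;
    }
    CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
        .onMalformedInput(CodingErrorAction.REPORT)
        .onUnmappableCharacter(CodingErrorAction.REPORT);

    try (InputStream in = new FileInputStream(args[0])) {
      ByteBuffer bytes = ByteBuffer.allocate(64 * 1024);
      CharBuffer chars = CharBuffer.allocate(64 * 1024);
      byte[] buf = new byte[32 * 1024];
      long decoded = 0;                       // bytes successfully decoded so far
      int read;
      while ((read = in.read(buf)) != -1) {
        bytes.put(buf, 0, read);
        bytes.flip();
        CoderResult result = decoder.decode(bytes, chars, false);
        if (result.isError()) {
          // position() now points at the start of the offending sequence
          System.err.printf("Invalid UTF-8 near byte offset %d%n",
              decoded + bytes.position());
          return;
        }
        decoded += bytes.position();
        bytes.compact();                      // keep any partial trailing sequence
        chars.clear();                        // decoded chars are not needed here
      }
      System.out.println("No invalid UTF-8 found in " + decoded + " bytes");
    }
  }
}

Whether a bad sequence is actually what exhausts the heap here is a guess; at 
minimum, the reported offset tells you which region of the input around chunk 
#571 is worth inspecting in the verbose logs.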

> The wikipediaXMLSplitter example fails with "heap size" error
> -------------------------------------------------------------
>
>                 Key: MAHOUT-1456
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1456
>             Project: Mahout
>          Issue Type: Bug
>          Components: Examples
>    Affects Versions: 0.9
>         Environment: Solaris 11.1
> Hadoop 2.3.0
> Maven 3.2.1
> JDK 1.7.0_07-b10
>            Reporter: mahmood
>              Labels: Heap, mahout, wikipediaXMLSplitter
>
> 1- The XML file is 
> http://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
> 2- When I run "mahout wikipediaXMLSplitter -d 
> enwiki-latest-pages-articles.xml -o wikipedia/chunks -c 64", it gets stuck 
> at chunk #571 and after 30 minutes fails with a Java heap size error. 
> Previous chunks are created rapidly (10 chunks per second).
> 3- Increasing the heap size via the "-Xmx4096m" option doesn't work.
> 4- No matter what the configuration is, it seems that there is a memory 
> leak that eats all the space.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
