Here are my test environments and results.

 - Hama TRUNK version, Hadoop 1.0.3
 - cluster of commodity PCs (2 GB memory each)
 - 1 Gb network
 - child opts: -Xmx512m

and,

  <property>
    <name>hama.graph.multi.step.partitioning.interval</name>
    <value>5000000</value>
  </property>
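As a rough sanity check on that value (assuming, per the multi-step partitioning in HAMA-599, that the interval is interpreted as the number of vertices loaded per partitioning pass), the web-Google graph from this thread should be partitioned in a single pass at this setting:

```java
// Sketch: estimate how many partitioning passes the multi-step partitioner
// would need at a given interval. Assumption: the interval counts vertices
// processed per pass (HAMA-599); the node count is from later in this thread.
public class PartitionPasses {
    public static void main(String[] args) {
        long vertices = 875_713L;     // web-Google.txt node count
        long interval = 5_000_000L;   // hama.graph.multi.step.partitioning.interval
        long passes = (vertices + interval - 1) / interval; // ceiling division
        System.out.println(passes + " partitioning pass(es)"); // 1 at this setting
    }
}
```

Lowering the interval, as suggested further down the thread, trades more partitioning passes for a smaller peak memory footprint.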

Both of the small datasets introduced on our Wiki run well.

----
edward@udanax:~/workspace/hama-trunk$ bin/hama jar
examples/target/hama-examples-0.6.0-SNAPSHOT.jar pagerank
/user/edward/edward/web-Google.txt edward/testout
12/09/18 11:37:00 INFO bsp.FileInputFormat: Total input paths to process : 1
12/09/18 11:37:00 INFO bsp.FileInputFormat: Total # of splits: 3
12/09/18 11:37:01 INFO bsp.BSPJobClient: Running job: job_201209181136_0001
12/09/18 11:37:04 INFO bsp.BSPJobClient: Current supersteps number: 0
12/09/18 11:37:07 INFO bsp.BSPJobClient: Current supersteps number: 2
12/09/18 11:37:10 INFO bsp.BSPJobClient: Current supersteps number: 3
12/09/18 11:37:16 INFO bsp.BSPJobClient: Current supersteps number: 5
12/09/18 11:37:19 INFO bsp.BSPJobClient: Current supersteps number: 6
12/09/18 11:37:22 INFO bsp.BSPJobClient: Current supersteps number: 7
12/09/18 11:37:25 INFO bsp.BSPJobClient: Current supersteps number: 10
12/09/18 11:37:28 INFO bsp.BSPJobClient: Current supersteps number: 16
12/09/18 11:37:31 INFO bsp.BSPJobClient: The total number of supersteps: 16
12/09/18 11:37:31 INFO bsp.BSPJobClient: Counters: 10
12/09/18 11:37:31 INFO bsp.BSPJobClient:
org.apache.hama.bsp.JobInProgress$JobCounter
12/09/18 11:37:31 INFO bsp.BSPJobClient:     LAUNCHED_TASKS=3
12/09/18 11:37:31 INFO bsp.BSPJobClient:
org.apache.hama.bsp.BSPPeerImpl$PeerCounter
12/09/18 11:37:31 INFO bsp.BSPJobClient:     SUPERSTEPS=16
12/09/18 11:37:31 INFO bsp.BSPJobClient:     SUPERSTEP_SUM=48
12/09/18 11:37:31 INFO bsp.BSPJobClient:     COMPRESSED_BYTES_SENT=59159696
12/09/18 11:37:31 INFO bsp.BSPJobClient:     TIME_IN_SYNC_MS=7788
12/09/18 11:37:31 INFO bsp.BSPJobClient:     IO_BYTES_READ=75380115
12/09/18 11:37:31 INFO bsp.BSPJobClient:     COMPRESSED_BYTES_RECEIVED=59159696
12/09/18 11:37:31 INFO bsp.BSPJobClient:     TOTAL_MESSAGES_SENT=14646888
12/09/18 11:37:31 INFO bsp.BSPJobClient:     TASK_INPUT_RECORDS=5105043
12/09/18 11:37:31 INFO bsp.BSPJobClient:     TOTAL_MESSAGES_RECEIVED=7323444
Job Finished in 30.769 seconds
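For what it's worth, the counters above imply the compressed traffic averages roughly 4 bytes per message; a minimal check, with the numbers copied straight from the log:

```java
// Sketch: derive average compressed bytes per message from the
// BSPPeerImpl$PeerCounter values printed in the log above.
public class CounterCheck {
    public static void main(String[] args) {
        long compressedBytesSent = 59_159_696L;  // COMPRESSED_BYTES_SENT
        long totalMessagesSent   = 14_646_888L;  // TOTAL_MESSAGES_SENT
        double bytesPerMessage = (double) compressedBytesSent / totalMessagesSent;
        System.out.println(Math.round(bytesPerMessage * 100) / 100.0); // 4.04
    }
}
```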

On Tue, Sep 18, 2012 at 11:11 AM, Edward J. Yoon <[email protected]> wrote:
> Then, please lower the value of "hama.graph.multi.step.partitioning.interval".
>
> On Fri, Sep 14, 2012 at 3:45 PM, 庄克琛 <[email protected]> wrote:
>> em... I have tried your configuration advice and restarted Hama.
>> I used the Google web graph ( http://wiki.apache.org/hama/WriteHamaGraphFile ),
>> Nodes: 875713 Edges: 5105039, which is about 73 MB, uploaded it to a small
>> HDFS cluster (block size is 64 MB), and tested the PageRank example as in (
>> http://wiki.apache.org/hama/WriteHamaGraphFile ), with this result:
>> ################
>> function@624-PC:~/hadoop-1.0.3/hama-0.6.0$ hama jar hama-6-P* input-google
>> ouput-google
>> 12/09/14 14:27:50 INFO bsp.FileInputFormat: Total input paths to process : 1
>> 12/09/14 14:27:50 INFO bsp.FileInputFormat: Total # of splits: 3
>> 12/09/14 14:27:50 INFO bsp.BSPJobClient: Running job: job_201008141420_0004
>> 12/09/14 14:27:53 INFO bsp.BSPJobClient: Current supersteps number: 0
>> Java HotSpot(TM) Server VM warning: Attempt to allocate stack guard pages
>> failed.
>> ###################
>>
>> Last time the superstep count could reach 1 or 2 before the same result.
>> The task attempt****.err files are empty.
>> Is the graph too large?
>> I tested on a small graph and got the correct rank results.
>>
>>
>> 2012/9/14 Edward J. Yoon <[email protected]>
>>
>>> I've added a multi-step partitioning method to save memory [1].
>>>
>>> Please try adding the property below to hama-site.xml.
>>>
>>>   <property>
>>>     <name>hama.graph.multi.step.partitioning.interval</name>
>>>     <value>10000000</value>
>>>   </property>
>>>
>>> 1. https://issues.apache.org/jira/browse/HAMA-599
>>>
>>> On Fri, Sep 14, 2012 at 3:13 PM, 庄克琛 <[email protected]> wrote:
>>> > Hi, actually I used this (
>>> > https://builds.apache.org/job/Hama-Nightly/672/artifact/.repository/org/apache/hama/hama-dist/0.6.0-SNAPSHOT/
>>> > ) to test again; I mean I used this 0.6.0-SNAPSHOT version to replace
>>> > everything, and got the same out-of-memory results. I just don't know
>>> > what causes the out-of-memory failures; only some small graph
>>> > computations can finish. Does this version include
>>> > "[HAMA-596 <https://issues.apache.org/jira/browse/HAMA-596>]: Optimize
>>> > memory usage of graph job"?
>>> > Thanks
>>> >
>>> > 2012/9/14 Thomas Jungblut <[email protected]>
>>> >
>>> >> Hey, which jar exactly did you replace?
>>> >> Am 14.09.2012 07:49 schrieb "庄克琛" <[email protected]>:
>>> >>
>>> >> > hi, everyone:
>>> >> > I use hama-0.5.0 with hadoop-1.0.3 to do some large graph analysis.
>>> >> > When I test the PageRank example, as (
>>> >> > http://wiki.apache.org/hama/WriteHamaGraphFile ) shows, I downloaded
>>> >> > the graph data and ran the PageRank job on a small distributed
>>> >> > cluster, but I only get out-of-memory failures: supersteps 0, 1, 2
>>> >> > work well, then the job fails with out of memory. (Each computer has
>>> >> > 2 GB of memory.) But when I test some small graphs, everything works
>>> >> > well.
>>> >> > I also tried the trunk version (
>>> >> > https://builds.apache.org/job/Hama-Nightly/672/changes#detail3 ),
>>> >> > replacing my hama-0.5.0 with hama-0.6.0-SNAPSHOT, and only got the
>>> >> > same results.
>>> >> > Anyone got better ideas?
>>> >> >
>>> >> > Thanks!
>>> >> >
>>> >> > --
>>> >> >
>>> >> > Zhuang Kechen
>>> >> >
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> >
>>> > Zhuang Kechen
>>> >
>>> > School of Computer Science & Technology
>>> >
>>> > Nanjing University of Science & Technology
>>> >
>>> > Lab.623, School of Computer Sci. & Tech.
>>> >
>>> > No.200, Xiaolingwei Street
>>> >
>>> > Nanjing, Jiangsu, 210094
>>> >
>>> > P.R. China
>>> >
>>> > Tel: 025-84315982
>>> >
>>> > Email: [email protected]
>>>
>>>
>>>
>>> --
>>> Best Regards, Edward J. Yoon
>>> @eddieyoon
>>>
>>
>>
>>
>> --
>>
>> Zhuang Kechen
>
>
>
> --
> Best Regards, Edward J. Yoon
> @eddieyoon



-- 
Best Regards, Edward J. Yoon
@eddieyoon
