[ https://issues.apache.org/jira/browse/HDFS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13087100#comment-13087100 ]

Tomasz Nykiel commented on HDFS-395:
------------------------------------

I am quite new to the process, and I am running into problems building trunk:

[ivy:resolve]   ==== apache-snapshot: tried
[ivy:resolve]     https://repository.apache.org/content/repositories/snapshots/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.pom
[ivy:resolve]     -- artifact commons-configuration#commons-configuration;1.6!commons-configuration.jar:
[ivy:resolve]     https://repository.apache.org/content/repositories/snapshots/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar
[ivy:resolve]   ==== maven2: tried
[ivy:resolve]     http://repo1.maven.org/maven2/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.pom
[ivy:resolve]           [FAILED     ] javax.jms#jms;1.1!jms.jar:  (0ms)
[ivy:resolve]   ==== apache-snapshot: tried
[ivy:resolve]     https://repository.apache.org/content/repositories/snapshots/javax/jms/jms/1.1/jms-1.1.jar
[ivy:resolve]   ==== maven2: tried
[ivy:resolve]     http://repo1.maven.org/maven2/javax/jms/jms/1.1/jms-1.1.jar
[ivy:resolve]           [FAILED     ] com.sun.jdmk#jmxtools;1.2.1!jmxtools.jar:  (0ms)
[ivy:resolve]   ==== apache-snapshot: tried
[ivy:resolve]     https://repository.apache.org/content/repositories/snapshots/com/sun/jdmk/jmxtools/1.2.1/jmxtools-1.2.1.jar
[ivy:resolve]   ==== maven2: tried
[ivy:resolve]     http://repo1.maven.org/maven2/com/sun/jdmk/jmxtools/1.2.1/jmxtools-1.2.1.jar
[ivy:resolve]           [FAILED     ] com.sun.jmx#jmxri;1.2.1!jmxri.jar:  (0ms)
[ivy:resolve]   ==== apache-snapshot: tried
[ivy:resolve]     https://repository.apache.org/content/repositories/snapshots/com/sun/jmx/jmxri/1.2.1/jmxri-1.2.1.jar
[ivy:resolve]   ==== maven2: tried
[ivy:resolve]     http://repo1.maven.org/maven2/com/sun/jmx/jmxri/1.2.1/jmxri-1.2.1.jar
[ivy:resolve]           ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve]           ::          UNRESOLVED DEPENDENCIES         ::
[ivy:resolve]           ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve]           :: commons-configuration#commons-configuration;1.6: not found
[ivy:resolve]           ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve]           ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve]           ::              FAILED DOWNLOADS            ::
[ivy:resolve]           :: ^ see resolution messages for details  ^ ::
[ivy:resolve]           ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve]           :: javax.jms#jms;1.1!jms.jar
[ivy:resolve]           :: com.sun.jdmk#jmxtools;1.2.1!jmxtools.jar
[ivy:resolve]           :: com.sun.jmx#jmxri;1.2.1!jmxri.jar
[ivy:resolve]           ::::::::::::::::::::::::::::::::::::::::::::::

It seems that something is wrong with the online repositories.
Does this look familiar to anyone?
I would appreciate any help.


@Suresh: I agree with the argument for having a single array of acks.
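
For illustration only, here is a rough sketch of what a single combined ack array could look like, with one entry per block marked as either received or deleted so the datanode sends one payload instead of two separate arrays. The names below are hypothetical and are not taken from the attached patches:

// Hypothetical sketch only: a single array carrying both "received" and
// "deleted" acks; names are illustrative, not from the attached patches.
public class BlockAckSketch {

  enum AckType { RECEIVED, DELETED }

  static class BlockAck {
    final long blockId;
    final AckType type;

    BlockAck(long blockId, AckType type) {
      this.blockId = blockId;
      this.type = type;
    }
  }

  public static void main(String[] args) {
    // One array mixing both kinds of acks, in the order they occurred.
    BlockAck[] acks = {
        new BlockAck(1001L, AckType.RECEIVED),
        new BlockAck(1002L, AckType.DELETED),
        new BlockAck(1003L, AckType.RECEIVED),
    };
    for (BlockAck ack : acks) {
      System.out.println(ack.blockId + " -> " + ack.type);
    }
  }
}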

> DFS Scalability: Incremental block reports
> ------------------------------------------
>
>                 Key: HDFS-395
>                 URL: https://issues.apache.org/jira/browse/HDFS-395
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: data-node, name-node
>            Reporter: dhruba borthakur
>            Assignee: Tomasz Nykiel
>         Attachments: blockReportPeriod.patch, explicitAcks.patch-3, 
> explicitDeleteAcks.patch
>
>
> I have a cluster that has 1800 datanodes. Each datanode has around 50000 
> blocks and sends a block report to the namenode once every hour. This means 
> that the namenode processes a block report once every 2 seconds. Each block 
> report contains all blocks that the datanode currently hosts. This makes the 
> namenode compare a huge number of blocks that practically remain the same 
> between two consecutive reports. This wastes CPU on the namenode.
> The problem becomes worse when the number of datanodes increases.
> One proposal is to make succeeding block reports (after a successful send of 
> a full block report) be incremental. This will make the namenode process only 
> those blocks that were added/deleted in the last period.
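
To make the proposal quoted above concrete, here is a minimal, purely illustrative sketch (not the attached patches) of the datanode-side bookkeeping it implies: changes since the last report are accumulated and then drained into an incremental report, so the namenode only processes the delta instead of all ~50000 blocks. Class and method names are hypothetical:

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: accumulate per-block changes and drain them into
// an incremental report. A plain synchronized list is assumed to be
// enough for the example.
public class IncrementalReportSketch {

  enum Change { ADDED, DELETED }

  static class BlockChange {
    final long blockId;
    final Change change;

    BlockChange(long blockId, Change change) {
      this.blockId = blockId;
      this.change = change;
    }
  }

  // Changes accumulated since the last report was sent.
  private final List<BlockChange> pending = new ArrayList<BlockChange>();

  synchronized void noteAdded(long blockId) {
    pending.add(new BlockChange(blockId, Change.ADDED));
  }

  synchronized void noteDeleted(long blockId) {
    pending.add(new BlockChange(blockId, Change.DELETED));
  }

  // Drain the pending changes into an incremental report and reset.
  synchronized List<BlockChange> buildIncrementalReport() {
    List<BlockChange> report = new ArrayList<BlockChange>(pending);
    pending.clear();
    return report;
  }

  public static void main(String[] args) {
    IncrementalReportSketch dn = new IncrementalReportSketch();
    dn.noteAdded(1001L);
    dn.noteDeleted(1002L);
    System.out.println(dn.buildIncrementalReport().size() + " changes reported");
  }
}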

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
