[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync followed by closing that file

2012-03-27 Thread J.Andreina (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240174#comment-13240174
 ] 

J.Andreina commented on HDFS-3119:
--

Hi,
  Even after the datanode has sent the block report many times, the 
overreplicated block is not deleted.
Even if I execute fsck for that particular file after some time, the block 
remains overreplicated.

> Overreplicated block is not deleted even after the replication factor is 
> reduced after sync followed by closing that file
> 
>
> Key: HDFS-3119
> URL: https://issues.apache.org/jira/browse/HDFS-3119
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.24.0
>Reporter: J.Andreina
>Priority: Minor
> Fix For: 0.24.0, 0.23.2
>
>
> cluster setup:
> --
> 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
> step 1: write a file "filewrite.txt" of size 90 bytes with sync (not closed)
> step 2: change the replication factor to 1 using the command: "./hdfs dfs 
> -setrep 1 /filewrite.txt"
> step 3: close the file
> * At the NN side the "Decreasing replication from 2 to 1 for 
> /filewrite.txt" log has occurred, but the overreplicated blocks are not 
> deleted even after the block report is sent from the DN
> * While listing the file in the console using "./hdfs dfs -ls ", the 
> replication factor for that file is shown as 1
> * The fsck report for that file displays that the file is replicated to 2 
> datanodes
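The reproduction steps quoted above can be sketched as a shell dry run. This is a sketch under assumptions: it presumes the `hdfs` script is on PATH (the report uses `./hdfs`), that step 1 is performed by a separate client program holding the stream open after hflush()/sync(), and that the fsck flags are the usual ones for showing block locations. The script only prints the command sequence; the printed commands would be run against a live cluster.

```shell
# Repro sketch for HDFS-3119 -- dry run that prints the command sequence.
# Assumptions: "hdfs" is on PATH; the file path is /filewrite.txt as in the report.
FILE=/filewrite.txt

# step 1 happens in a client program: write ~90 bytes, call hflush()/sync(),
# and keep the stream open.

# step 2: reduce the replication factor while the file is still open.
SETREP="hdfs dfs -setrep 1 $FILE"

# step 3: close the file from the client, then inspect the resulting state.
LS="hdfs dfs -ls $FILE"
FSCK="hdfs fsck $FILE -files -blocks -locations"

# Print the commands to run against the cluster.
printf '%s\n' "$SETREP" "$LS" "$FSCK"
```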

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync followed by closing that file

2012-03-21 Thread J.Andreina (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13234387#comment-13234387
 ] 

J.Andreina commented on HDFS-3119:
--

Thanks Uma, I'll also check the same.






[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync followed by closing that file

2012-03-21 Thread J.Andreina (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13234245#comment-13234245
 ] 

J.Andreina commented on HDFS-3119:
--


I understood your comments and I agree with them.

But if replication cannot be done for an unfinalized block, 
then the log message "Decreasing replication from 2 to 1 for /filewrite.txt" 
at the NN side, when we decrease the replication for an unfinalized block, can 
be avoided.






[jira] [Commented] (HDFS-2932) Under replicated block after the pipeline recovery.

2012-02-10 Thread J.Andreina (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13205454#comment-13205454
 ] 

J.Andreina commented on HDFS-2932:
--

Initially three DNs were running, but while writing the block (block-id-1005) I 
brought down the 3rd DN. The write was successful to the other two DNs, but the 
block stamp was changed to (block-id-1006), and the rest of the block was also 
written successfully to those two datanodes.

After the write was over, the 3rd DN was restarted after 5 minutes. Then, when 
the fsck command was issued, the message "block-id_1006 is under replicated. 
Target replicas is 3 but found 2 replicas" was displayed.

Even after the replication monitor period is over, the block is still under 
replicated. I checked the same after 10 hours.

> Under replicated block after the pipeline recovery.
> ---
>
> Key: HDFS-2932
> URL: https://issues.apache.org/jira/browse/HDFS-2932
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.24.0
>Reporter: J.Andreina
> Fix For: 0.24.0
>
>
> Started 1 NN, DN1, DN2, DN3 on the same machine.
> Wrote a huge file of size 2 GB.
> While the write for block-id-1005 was in progress, brought down DN3.
> After pipeline recovery happened, the block stamp changed to block_id_1006 
> on DN1, DN2.
> After the write was over, DN3 was brought up and the fsck command was issued.
> The following message was displayed:
> "block-id_1006 is under replicated. Target replicas is 3 but found 2 replicas".
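The state described above can be re-checked after the replication monitor has had time to run by grepping the fsck output for the under-replication message. The file path here is hypothetical, and the check only makes sense against a live cluster, so this sketch builds and prints the command rather than running it.

```shell
# Sketch: re-check whether a block is still flagged as under-replicated
# (HDFS-2932). FILE is a hypothetical path; run the printed command
# against a live cluster.
FILE=/user/test/bigfile
CHECK="hdfs fsck $FILE -files -blocks -locations | grep 'Under replicated'"
printf '%s\n' "$CHECK"
```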





[jira] [Commented] (HDFS-2734) Even if we configure the property fs.checkpoint.size in both core-site.xml and hdfs-site.xml, the values are not being considered

2012-01-01 Thread J.Andreina (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13178287#comment-13178287
 ] 

J.Andreina commented on HDFS-2734:
--

The property has been configured in 0.20/1.0 (core-site.xml) only, but the 
configured values are still not taken into account.

SNN_HOST:50090/conf has also been verified.

> Even if we configure the property fs.checkpoint.size in both core-site.xml 
> and hdfs-site.xml, the values are not being considered
> 
>
> Key: HDFS-2734
> URL: https://issues.apache.org/jira/browse/HDFS-2734
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.1, 0.23.0
>Reporter: J.Andreina
>Priority: Minor
>
> Even if we configure the property fs.checkpoint.size in both core-site.xml 
> and hdfs-site.xml, the values are not being considered
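For reference, the property in question would be set in core-site.xml roughly as below. The 67108864 value is only an example (it is the default): in the 0.20/1.x line, fs.checkpoint.size is the size of the NameNode edits log, in bytes, at which the SecondaryNameNode forces a checkpoint even before fs.checkpoint.period expires.

```xml
<!-- core-site.xml (0.20/1.x line); the value is an example, in bytes. -->
<!-- Edits-log size at which a checkpoint is forced, even before
     fs.checkpoint.period expires. Default: 67108864 (64 MB). -->
<property>
  <name>fs.checkpoint.size</name>
  <value>67108864</value>
</property>
```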
