[jira] [Created] (HADOOP-15898) WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage [[74.120.143.

2018-11-03 Thread Srinivas (JIRA)
Srinivas created HADOOP-15898:
-

 Summary: WARN [main] org.apache.hadoop.mapred.YarnChild: Exception 
running child  : java.io.IOException: java.io.IOException: All datanodes 
DatanodeInfoWithStorage 
[[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. 
Aborting...
 Key: HADOOP-15898
 URL: https://issues.apache.org/jira/browse/HADOOP-15898
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance
Affects Versions: 2.6.0
 Environment: Hadoop 2.6.0-cdh5.5.1

 

 
Reporter: Srinivas
 Fix For: 2.6.0


A business-critical MR job runs every day at 2:00 PM PST on roughly 1 - 1.5 TB of data (the size varies with the business day). The job normally completes in about 4 hours, but multiple mappers fail simultaneously with the error below, stretching the run to 11 or even 13 hours.

Steps taken to prevent the problem: 1. Migrated the environment to YARN. 2. Increased the ulimit. 3. Added extra nodes to the cluster. 4. Replaced disks regularly. But no luck.

WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK], DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK]

WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK]

WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. Aborting...
 at com.turn.platform.cheetah.storage.dmp.analytical_profile.merge.IncrementalProfileMergerMapper.close(IncrementalProfileMergerMapper.java:1185)
 at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
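
Nothing in this ticket confirms the cause, but one client-side knob commonly reviewed for "All datanodes ... are bad" pipeline-recovery failures is the DFS client's datanode-replacement policy. A minimal sketch, assuming a stock Hadoop 2.x client; the property names are standard HDFS client settings, while the values and job name are illustrative assumptions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class PipelineRecoveryConfig {
    public static Job configure() throws Exception {
        Configuration conf = new Configuration();
        // Allow the client to replace a failed datanode in the write pipeline
        // instead of aborting once too many pipeline nodes are marked bad.
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
        // DEFAULT requests a replacement only for wider pipelines; ALWAYS
        // requests one whenever any datanode in the pipeline fails.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");
        return Job.getInstance(conf, "incremental-profile-merge"); // job name is hypothetical
    }
}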

 






[jira] [Updated] (HADOOP-15898) WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage [[74.120.143.

2018-11-03 Thread Srinivas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas updated HADOOP-15898:
--
Hadoop Flags:   (was: Incompatible change,Reviewed)

> WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child  : 
> java.io.IOException: java.io.IOException: All datanodes 
> DatanodeInfoWithStorage 
> [[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. 
> Aborting...
> --
>
> Key: HADOOP-15898
> URL: https://issues.apache.org/jira/browse/HADOOP-15898
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.6.0
> Environment: Hadoop 2.6.0-cdh5.5.1
>  
>  
>Reporter: Srinivas
>Priority: Major
>  Labels: performance
> Fix For: 2.6.0
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> A business-critical MR job runs every day at 2:00 PM PST on roughly 1 - 1.5 TB of data (the size varies with the business day). The job normally completes in about 4 hours, but multiple mappers fail simultaneously with the error below, stretching the run to 11 or even 13 hours.
>
> Steps taken to prevent the problem: 1. Migrated the environment to YARN. 2. Increased the ulimit. 3. Added extra nodes to the cluster. 4. Replaced disks regularly. But no luck.
>
> WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK], DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK]
>
> WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK]
>
> WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. Aborting... at com.turn.platform.cheetah.storage.dmp.analytical_profile.merge.IncrementalProfileMergerMapper.close(IncrementalProfileMergerMapper.java:1185) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
>  






[jira] [Updated] (HADOOP-15898) 1 TB TeraGen fails to run with the following error

2018-11-03 Thread Srinivas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas updated HADOOP-15898:
--
Summary: 1 TB TeraGen fails to run with the following error   (was: WARN 
[main] org.apache.hadoop.mapred.YarnChild: Exception running child  : 
java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage 
[[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. 
Aborting...)

> 1 TB TeraGen fails to run with the following error 
> ---
>
> Key: HADOOP-15898
> URL: https://issues.apache.org/jira/browse/HADOOP-15898
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.6.0
> Environment: Hadoop 2.6.0-cdh5.5.1
>  
>  
>Reporter: Srinivas
>Priority: Major
>  Labels: performance
> Fix For: 2.6.0
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> A business-critical MR job runs every day at 2:00 PM PST on roughly 1 - 1.5 TB of data (the size varies with the business day). The job normally completes in about 4 hours, but multiple mappers fail simultaneously with the error below, stretching the run to 11 or even 13 hours.
>
> Steps taken to prevent the problem: 1. Migrated the environment to YARN. 2. Increased the ulimit. 3. Added extra nodes to the cluster. 4. Replaced disks regularly. But no luck.
>
> WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK], DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK]
>
> WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK]
>
> WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. Aborting... at com.turn.platform.cheetah.storage.dmp.analytical_profile.merge.IncrementalProfileMergerMapper.close(IncrementalProfileMergerMapper.java:1185) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
>  






[jira] [Updated] (HADOOP-15898) 1 - 1.5 TB Data size fails to run with the following error

2018-11-03 Thread Srinivas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas updated HADOOP-15898:
--
Summary: 1 - 1.5 TB Data size fails to run with the following error   (was: 
1 TB Data size fails to run with the following error )

> 1 - 1.5 TB Data size fails to run with the following error 
> ---
>
> Key: HADOOP-15898
> URL: https://issues.apache.org/jira/browse/HADOOP-15898
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.6.0
> Environment: Hadoop 2.6.0-cdh5.5.1
>  
>  
>Reporter: Srinivas
>Priority: Major
>  Labels: performance
> Fix For: 2.6.0
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> A business-critical MR job runs every day at 2:00 PM PST on roughly 1 - 1.5 TB of data (the size varies with the business day). The job normally completes in about 4 hours, but multiple mappers fail simultaneously with the error below, stretching the run to 11 or even 13 hours.
>
> Steps taken to prevent the problem: 1. Migrated the environment to YARN. 2. Increased the ulimit. 3. Added extra nodes to the cluster. 4. Replaced disks regularly. But no luck.
>
> WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK], DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK]
>
> WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK]
>
> WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. Aborting... at com.turn.platform.cheetah.storage.dmp.analytical_profile.merge.IncrementalProfileMergerMapper.close(IncrementalProfileMergerMapper.java:1185) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
>  






[jira] [Updated] (HADOOP-15898) 1 TB Data size fails to run with the following error

2018-11-03 Thread Srinivas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas updated HADOOP-15898:
--
Summary: 1 TB Data size fails to run with the following error   (was: 1 TB 
TeraGen fails to run with the following error )

> 1 TB Data size fails to run with the following error 
> -
>
> Key: HADOOP-15898
> URL: https://issues.apache.org/jira/browse/HADOOP-15898
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.6.0
> Environment: Hadoop 2.6.0-cdh5.5.1
>  
>  
>Reporter: Srinivas
>Priority: Major
>  Labels: performance
> Fix For: 2.6.0
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> A business-critical MR job runs every day at 2:00 PM PST on roughly 1 - 1.5 TB of data (the size varies with the business day). The job normally completes in about 4 hours, but multiple mappers fail simultaneously with the error below, stretching the run to 11 or even 13 hours.
>
> Steps taken to prevent the problem: 1. Migrated the environment to YARN. 2. Increased the ulimit. 3. Added extra nodes to the cluster. 4. Replaced disks regularly. But no luck.
>
> WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK], DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK]
>
> WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK]
>
> WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. Aborting... at com.turn.platform.cheetah.storage.dmp.analytical_profile.merge.IncrementalProfileMergerMapper.close(IncrementalProfileMergerMapper.java:1185) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
>  






[jira] [Updated] (HADOOP-15898) 1 - 1.5 TB Data size fails to run with the following error

2018-11-03 Thread Srinivas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas updated HADOOP-15898:
--
Environment: 
Hadoop 2.6.0-cdh5.5.1 Express edition.

 

 

  was:
Hadoop 2.6.0-cdh5.5.1

 

 


> 1 - 1.5 TB Data size fails to run with the following error 
> ---
>
> Key: HADOOP-15898
> URL: https://issues.apache.org/jira/browse/HADOOP-15898
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.6.0
> Environment: Hadoop 2.6.0-cdh5.5.1 Express edition.
>  
>  
>Reporter: Srinivas
>Priority: Major
>  Labels: performance
> Fix For: 2.6.0
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> A business-critical MR job runs every day at 2:00 PM PST on roughly 1 - 1.5 TB of data (the size varies with the business day). The job normally completes in about 4 hours, but multiple mappers fail simultaneously with the error below, stretching the run to 11 or even 13 hours.
>
> Steps taken to prevent the problem: 1. Migrated the environment to YARN. 2. Increased the ulimit. 3. Added extra nodes to the cluster. 4. Replaced disks regularly. 5. Monitored the cluster and terminated other jobs that impact this one. But no luck.
>
> WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK], DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK]
>
> WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK]
>
> WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. Aborting... at com.turn.platform.cheetah.storage.dmp.analytical_profile.merge.IncrementalProfileMergerMapper.close(IncrementalProfileMergerMapper.java:1185) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
>  






[jira] [Updated] (HADOOP-15898) 1 - 1.5 TB Data size fails to run with the following error

2018-11-03 Thread Srinivas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas updated HADOOP-15898:
--
Description: 
A business-critical MR job runs every day at 2:00 PM PST on roughly 1 - 1.5 TB of data (the size varies with the business day). The job normally completes in about 4 hours, but multiple mappers fail simultaneously with the error below, stretching the run to 11 or even 13 hours.

Steps taken to prevent the problem: 1. Migrated the environment to YARN. 2. Increased the ulimit. 3. Added extra nodes to the cluster. 4. Replaced disks regularly. 5. Monitored the cluster and terminated other jobs that impact this one. But no luck.

WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK], DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK]

WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK]

WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. Aborting...
 at com.turn.platform.cheetah.storage.dmp.analytical_profile.merge.IncrementalProfileMergerMapper.close(IncrementalProfileMergerMapper.java:1185)
 at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)

 

  was:
A business-critical MR job runs every day at 2:00 PM PST on roughly 1 - 1.5 TB of data (the size varies with the business day). The job normally completes in about 4 hours, but multiple mappers fail simultaneously with the error below, stretching the run to 11 or even 13 hours.

Steps taken to prevent the problem: 1. Migrated the environment to YARN. 2. Increased the ulimit. 3. Added extra nodes to the cluster. 4. Replaced disks regularly. But no luck.

WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK], DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK]

WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK]

WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. Aborting...
 at com.turn.platform.cheetah.storage.dmp.analytical_profile.merge.IncrementalProfileMergerMapper.close(IncrementalProfileMergerMapper.java:1185)
 at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)

 


> 1 - 1.5 TB Data size fails to run with

[jira] [Updated] (HADOOP-15898) 1 - 1.5 TB Data size fails to run with the following error

2018-11-03 Thread Srinivas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas updated HADOOP-15898:
--
Description: 
A business-critical MR job runs every day at 2:00 PM PST on roughly 1 - 1.5 TB of data (the size varies with the business day). The job normally completes in about 4 hours, but multiple mappers fail simultaneously with the error below, stretching the run to 11 or even 13 hours.

Steps taken to prevent the problem: 1. Migrated the environment to YARN. 2. Increased the ulimit. 3. Added extra nodes to the cluster. 4. Replaced disks regularly. 5. Monitored the cluster and terminated other jobs that impact this one. But no luck.

org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK], DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK]

org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK]

org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. Aborting... at
 

  was:
A business-critical MR job runs every day at 2:00 PM PST on roughly 1 - 1.5 TB of data (the size varies with the business day). The job normally completes in about 4 hours, but multiple mappers fail simultaneously with the error below, stretching the run to 11 or even 13 hours.

Steps taken to prevent the problem: 1. Migrated the environment to YARN. 2. Increased the ulimit. 3. Added extra nodes to the cluster. 4. Replaced disks regularly. 5. Monitored the cluster and terminated other jobs that impact this one. But no luck.

WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK], DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK]

WARN [DataStreamer for file /analytical_profile/DMP_analytical_profile/Turn/SAUP/2018_11_02_tmp/tmp/part-01357.5789 block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089] org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK]

WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. Aborting...
 at com.turn.platform.cheetah.storage.dmp.analytical_profile.merge.IncrementalProfileMergerMapper.close(IncrementalProfileMergerMapper.java:1185)
 at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)

 


> 1 - 1.5 TB Data size fails to run with the following error 
> ---
>
> Key: HADOOP-15898
> URL: https://issues.apache.org/jira/browse/HADOOP-15898
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.6.0
> Environment: Hadoop 2.6.0-cdh5.5.1 Express edition.
>  
>  
>Reporter: Srinivas
>Priority: Major
>  Labels: performance
> Fix For: 2.6.0
>
>   Origi

[jira] [Updated] (HADOOP-15898) 1 - 1.5 TB Data size fails to run with the following error

2019-01-04 Thread Srinivas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas updated HADOOP-15898:
--
Description: 
A business-critical MR job runs every day at 2:00 PM PST on roughly 1 - 1.5 TB of data (the size varies with the business day). The job normally completes in about 4 hours, but multiple mappers fail simultaneously with the error below, stretching the run to 11 or even 13 hours.

Steps taken to prevent the problem: 1. Migrated the environment to YARN. 2. Increased the ulimit. 3. Added extra nodes to the cluster. 4. Replaced disks regularly. 5. Monitored the cluster and terminated other jobs that impact this one.

A few of the values we tried increasing, without any benefit (the key names are documented in the sketch below):

1. Open file limits

2. dfs.datanode.handler.count

3. dfs.datanode.max.xcievers

4. dfs.datanode.max.transfer.threads

But no luck.
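
For reference, a sketch documenting those key names. These are datanode-side settings that normally live in hdfs-site.xml on each datanode; they are set on a Configuration object here only to show the keys in one runnable place, and the values are illustrative assumptions, not recommendations from this ticket:

import org.apache.hadoop.conf.Configuration;

public class DatanodeTuningSketch {
    public static Configuration sketch() {
        Configuration conf = new Configuration();
        // RPC handler threads on each datanode (default is 10 in this era).
        conf.setInt("dfs.datanode.handler.count", 30);
        // Upper bound on concurrent block-transfer threads per datanode.
        conf.setInt("dfs.datanode.max.transfer.threads", 8192);
        // Legacy spelling of the same limit, still honored by older configs.
        conf.setInt("dfs.datanode.max.xcievers", 8192);
        return conf;
    }
}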

org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK], DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK]

org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK]

org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. Aborting... at

 

  was:
A business-critical MR job runs every day at 2:00 PM PST on roughly 1 - 1.5 TB of data (the size varies with the business day). The job normally completes in about 4 hours, but multiple mappers fail simultaneously with the error below, stretching the run to 11 or even 13 hours.

Steps taken to prevent the problem: 1. Migrated the environment to YARN. 2. Increased the ulimit. 3. Added extra nodes to the cluster. 4. Replaced disks regularly. 5. Monitored the cluster and terminated other jobs that impact this one. But no luck.

org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK], DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[10.0.1.37:50010,DS-ed333d2e-839a-4029-a1c9-b6615c322ed2,DISK]

org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-854530680-69.194.253.58-1430267558563:blk_4683766046_1108754130089 in pipeline DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK], DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK]: bad datanode DatanodeInfoWithStorage[74.120.143.19:50010,DS-5d10576e-adc3-474f-bc9d-f0d6fb3ae4c3,DISK]

org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: java.io.IOException: All datanodes DatanodeInfoWithStorage[74.120.143.6:50010,DS-a5299d68-2858-46c3-8e37-d2559895f979,DISK] are bad. Aborting... at
 


> 1 - 1.5 TB Data size fails to run with the following error 
> ---
>
> Key: HADOOP-15898
> URL: https://issues.apache.org/jira/browse/HADOOP-15898
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.6.0
> Environment: Hadoop 2.6.0-cdh5.5.1 Express edition.
>  
>  
>Reporter: Srinivas
>Priority: Major
>  Labels: performance
> Fix For: 2.6.0
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> A business-critical MR job runs every day at 2:00 PM PST on roughly 1 - 1.5 TB of data (the size varies with the business day). The job normally completes in about 4 hours, but multiple mappers fail simultaneously with the error below, stretching the run to 11 or even 13 hours

[jira] [Commented] (HADOOP-16090) S3A Client to add explicit support for versioned stores

2020-03-25 Thread R.satish Srinivas (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17066935#comment-17066935
 ] 

R.satish Srinivas commented on HADOOP-16090:


[~ste...@apache.org] [~dchmelev] Does this issue occur only when dealing with a large number of file writes to S3? I use a Spark streaming application with Hadoop 2.8.3 that keeps adding files to S3 directories, and it is accumulating lots of directory-level delete markers, which cause XML parsing errors during S3 list operations. Also, is the fix for this available in any version of Hadoop yet?

> S3A Client to add explicit support for versioned stores
> ---
>
> Key: HADOOP-16090
> URL: https://issues.apache.org/jira/browse/HADOOP-16090
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.1
>Reporter: Dmitri Chmelev
>Assignee: Steve Loughran
>Priority: Minor
>
> The fix to avoid calls to getFileStatus() for each path component in 
> deleteUnnecessaryFakeDirectories() (HADOOP-13164) results in accumulation of 
> delete markers in versioned S3 buckets. The above patch replaced 
> getFileStatus() checks with a single batch delete request formed by 
> generating all ancestor keys formed from a given path. Since the delete 
> request is not checking for existence of fake directories, it will create a 
> delete marker for every path component that did not exist (or was previously 
> deleted). Note that issuing a DELETE request without specifying a version ID 
> will always create a new delete marker, even if one already exists ([AWS S3 
> Developer 
> Guide|https://docs.aws.amazon.com/AmazonS3/latest/dev/RemDelMarker.html])
> Since deleteUnnecessaryFakeDirectories() is called as a callback on 
> successful writes and on renames, delete markers accumulate rather quickly 
> and their rate of accumulation is inversely proportional to the depth of the 
> path. In other words, directories closer to the root will have more delete 
> markers than the leaves.
> This behavior negatively impacts performance of getFileStatus() operation 
> when it has to issue listObjects() request (especially v1) as the delete 
> markers have to be examined when the request searches for first current 
> non-deleted version of an object following a given prefix.
> I did a quick comparison against 3.x and the issue is still present: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2947|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2947]
>  
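
Not from the ticket itself, but one way to gauge the delete-marker buildup described above is to list object versions under a prefix and count the markers. A minimal sketch with the AWS SDK for Java v1; the bucket and prefix names are placeholders:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListVersionsRequest;
import com.amazonaws.services.s3.model.S3VersionSummary;
import com.amazonaws.services.s3.model.VersionListing;

public class DeleteMarkerCount {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        VersionListing listing = s3.listVersions(
            new ListVersionsRequest().withBucketName("my-bucket").withPrefix("tables/"));
        long markers = 0;
        while (true) {
            for (S3VersionSummary v : listing.getVersionSummaries()) {
                if (v.isDeleteMarker()) {
                    markers++; // these are what slow listObjects-based getFileStatus()
                }
            }
            if (!listing.isTruncated()) break;
            listing = s3.listNextBatchOfVersions(listing);
        }
        System.out.println("delete markers under prefix: " + markers);
    }
}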






[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append

2012-04-30 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265151#comment-13265151
 ] 

Suresh Srinivas commented on HADOOP-8230:
-

Eli, sorry for the late comment. I agree with the general direction of splitting the hflush/hsync feature from append. Perhaps these features should use two different flags.

I have concerns with this change:
# I thought the proposal from HDFS-3120 was to add "dfs.support.sync". I do not see that flag in this patch.
# There are installations where hsync/hflush is disabled using dfs.support.append. That option should be preserved.
# "dfs.support.broken.append" - why add this rather than delete the tests that exercise append functionality?

> Enable sync by default and disable append
> -
>
> Key: HADOOP-8230
> URL: https://issues.apache.org/jira/browse/HADOOP-8230
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 1.1.0
>
> Attachments: hadoop-8230.txt
>
>
> Per HDFS-3120 for 1.x let's:
> - Always enable the sync path, which is currently only enabled if 
> dfs.support.append is set
> - Remove the dfs.support.append configuration option. We'll keep the code 
> paths though in case we ever fix append on branch-1, in which case we can add 
> the config option back





[jira] [Commented] (HADOOP-8331) Created patch that adds oracle support to DBInputFormat and solves a splitting duplication problem introduced with my last patch.

2012-04-30 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265180#comment-13265180
 ] 

Suresh Srinivas commented on HADOOP-8331:
-

BTW, the patch looks really huge. Is this the correct patch?

> Created patch that adds oracle support to DBInputFormat and solves a 
> splitting duplication problem introduced with my last patch.
> -
>
> Key: HADOOP-8331
> URL: https://issues.apache.org/jira/browse/HADOOP-8331
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 1.0.0
> Environment: Redhat x86_64 cluster
>Reporter: Joseph Doss
>  Labels: package
> Fix For: 1.0.0, 1.0.2
>
> Attachments: 
> hadoop-1.0.0-20120426-DBInputFormat-stopDuplicatingSplits.patch
>
>
> This patch mainly resolves an overlap of records when splitting tasks in 
> DBInputFormat, thereby removing duplication of records processed. Tested on 
> 1.0.0 and 1.0.2
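
For context (not part of the patch itself), this is roughly how DBInputFormat is wired up with the old mapred API the patch targets; the driver class, connection string, table, and record class below are hypothetical:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.db.DBConfiguration;
import org.apache.hadoop.mapred.lib.db.DBInputFormat;
import org.apache.hadoop.mapred.lib.db.DBWritable;

public class DbInputSketch {
    // Hypothetical record type; DBInputFormat requires Writable + DBWritable.
    public static class EmployeeRecord implements Writable, DBWritable {
        long id; String name;
        public void readFields(ResultSet rs) throws SQLException { id = rs.getLong(1); name = rs.getString(2); }
        public void write(PreparedStatement ps) throws SQLException { ps.setLong(1, id); ps.setString(2, name); }
        public void readFields(DataInput in) throws IOException { id = in.readLong(); name = in.readUTF(); }
        public void write(DataOutput out) throws IOException { out.writeLong(id); out.writeUTF(name); }
    }

    public static JobConf configure() {
        JobConf job = new JobConf();
        job.setInputFormat(DBInputFormat.class);
        DBConfiguration.configureDB(job, "oracle.jdbc.driver.OracleDriver",
            "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "secret");
        // Each split reads a disjoint range of rows ordered by EMP_ID; the bug
        // being fixed was adjacent splits overlapping at their boundaries.
        DBInputFormat.setInput(job, EmployeeRecord.class, "EMPLOYEES", null, "EMP_ID", "EMP_ID", "NAME");
        return job;
    }
}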





[jira] [Commented] (HADOOP-8331) Created patch that adds oracle support to DBInputFormat and solves a splitting duplication problem introduced with my last patch.

2012-04-30 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265178#comment-13265178
 ] 

Suresh Srinivas commented on HADOOP-8331:
-

Which jira was "last patch" from? Can please you link that jira to this jira?

> Created patch that adds oracle support to DBInputFormat and solves a 
> splitting duplication problem introduced with my last patch.
> -
>
> Key: HADOOP-8331
> URL: https://issues.apache.org/jira/browse/HADOOP-8331
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 1.0.0
> Environment: Redhat x86_64 cluster
>Reporter: Joseph Doss
>  Labels: package
> Fix For: 1.0.0, 1.0.2
>
> Attachments: 
> hadoop-1.0.0-20120426-DBInputFormat-stopDuplicatingSplits.patch
>
>
> This patch mainly resolves an overlap of records when splitting tasks in 
> DBInputFormat, thereby removing duplication of records processed. Tested on 
> 1.0.0 and 1.0.2





[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append

2012-05-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266700#comment-13266700
 ] 

Suresh Srinivas commented on HADOOP-8230:
-

bq. Wrt #2 personally I don't think we should allow people to disable durable 
sync as that can result in data loss for people running HBase. See HADOOP-8230 
for more info. I'm open to having an option to disable durable sync if you 
think that use case is important.
There are installations where HBase is not used and sync was disabled. Now this 
patch has removed that option. When an installation upgrades to a release with 
this patch, suddenly sync is enabled and there is no way to disable it.

bq. (1) there are tests that are using append not to test append per se but for 
the side effects and we'd lose sync test coverage by removing those tests and 
(2) per the description we're keeping the append code path in case someone 
wants to fix the data loss issues in which case it makes sense to keep the test 
coverage as well.
For testing sync with this patch, since it is enabled by default, you do not need the flag, right?

> Enable sync by default and disable append
> -
>
> Key: HADOOP-8230
> URL: https://issues.apache.org/jira/browse/HADOOP-8230
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 1.1.0
>
> Attachments: hadoop-8230.txt
>
>
> Per HDFS-3120 for 1.x let's:
> - Always enable the sync path, which is currently only enabled if 
> dfs.support.append is set
> - Remove the dfs.support.append configuration option. We'll keep the code 
> paths though in case we ever fix append on branch-1, in which case we can add 
> the config option back





[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append

2012-05-03 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267584#comment-13267584
 ] 

Suresh Srinivas commented on HADOOP-8230:
-

bq. Would such an installation be using the sync call?
No, from what I know.

From what I understand, the intention of this change is to:
# Disable append, since 1.x has bugs in that implementation.
# Enable sync by default.

bq. Making sync actually work is a bug fix, it was a bug that we allowed people 
to call sync and unlike append there wasn't a flag to enable it that was 
disabled by default. Better to fix the default behavior (which allows you to 
sync).
The earlier implementation used dfs.support.append to gate both durable sync and append. When this flag is off, a whole bunch of code related to sync functionality (how blocks are stored, block reports, etc.) is turned off. With this change, that code can no longer be turned off. I agree with enabling sync by default. However, for folks who chose not to enable the related code and are not impacted by it, we need to add a flag to turn off that functionality.
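
For readers following along, the functionality under discussion is the durable-sync path a writer exercises. A minimal sketch against the branch-1 API (the path and payload are made up); in branch-1 the call is FSDataOutputStream.sync(), which later releases split into hflush() and hsync():

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DurableSyncSketch {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataOutputStream out = fs.create(new Path("/tmp/wal-000001"));
        out.writeBytes("edit-1\n");
        // Make the written edit visible and durable to readers before the
        // file is closed; this is what HBase's write-ahead log depends on.
        out.sync();
        out.close();
    }
}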

> Enable sync by default and disable append
> -
>
> Key: HADOOP-8230
> URL: https://issues.apache.org/jira/browse/HADOOP-8230
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 1.1.0
>
> Attachments: hadoop-8230.txt
>
>
> Per HDFS-3120 for 1.x let's:
> - Always enable the sync path, which is currently only enabled if 
> dfs.support.append is set
> - Remove the dfs.support.append configuration option. We'll keep the code 
> paths though in case we ever fix append on branch-1, in which case we can add 
> the config option back





[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append

2012-05-03 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267819#comment-13267819
 ] 

Suresh Srinivas commented on HADOOP-8230:
-

bq. We've had sync on by default in hundreds of our customer clusters for 
almost two years now and have yet to see a related data-loss event. The only 
bugs we've seen have been bugs where sync() wouldn't provide the correct 
semantics, but for installs which don't use sync, that doesn't matter.

That is great.

Still, I think we should retain the ability to turn it off, because I want to continue running my installation that way and this patch removes that ability.

> Enable sync by default and disable append
> -
>
> Key: HADOOP-8230
> URL: https://issues.apache.org/jira/browse/HADOOP-8230
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 1.1.0
>
> Attachments: hadoop-8230.txt
>
>
> Per HDFS-3120 for 1.x let's:
> - Always enable the sync path, which is currently only enabled if 
> dfs.support.append is set
> - Remove the dfs.support.append configuration option. We'll keep the code 
> paths though in case we ever fix append on branch-1, in which case we can add 
> the config option back





[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append

2012-05-06 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269283#comment-13269283
 ] 

Suresh Srinivas commented on HADOOP-8230:
-

Adding a dfs.support.sync flag is along the lines of my previous comments. I am reluctantly okay with enabling it by default. This should be a blocker on 1.1. It might be easiest to revert this patch and add the new flag, as a lot of the paths to be enabled by the new flag are removed in this patch.

> Enable sync by default and disable append
> -
>
> Key: HADOOP-8230
> URL: https://issues.apache.org/jira/browse/HADOOP-8230
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 1.1.0
>
> Attachments: hadoop-8230.txt
>
>
> Per HDFS-3120 for 1.x let's:
> - Always enable the sync path, which is currently only enabled if 
> dfs.support.append is set
> - Remove the dfs.support.append configuration option. We'll keep the code 
> paths though in case we ever fix append on branch-1, in which case we can add 
> the config option back





[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append

2012-05-07 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269785#comment-13269785
 ] 

Suresh Srinivas commented on HADOOP-8230:
-

bq. what is that use case?
I think I have explained it in the comments above. To repeat:

"Still, I think we should retain ability to turn it off, because I want to 
continue running my installation that way and this patch removes that ability."

> Enable sync by default and disable append
> -
>
> Key: HADOOP-8230
> URL: https://issues.apache.org/jira/browse/HADOOP-8230
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 1.1.0
>
> Attachments: hadoop-8230.txt
>
>
> Per HDFS-3120 for 1.x let's:
> - Always enable the sync path, which is currently only enabled if 
> dfs.support.append is set
> - Remove the dfs.support.append configuration option. We'll keep the code 
> paths though in case we ever fix append on branch-1, in which case we can add 
> the config option back





[jira] [Updated] (HADOOP-8365) Provide ability to disable working sync

2012-05-07 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8365:


Priority: Blocker  (was: Major)

Marking this as a blocker.

> Provide ability to disable working sync
> ---
>
> Key: HADOOP-8365
> URL: https://issues.apache.org/jira/browse/HADOOP-8365
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Eli Collins
>Priority: Blocker
>
> Per HADOOP-8230 there's a request for a flag so sync can be disabled.





[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append

2012-05-07 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269858#comment-13269858
 ] 

Suresh Srinivas commented on HADOOP-8230:
-

bq. There may be a misunderstanding: the dfs.support.append flag never 
controlled whether sync was enabled.
dfs.support.append turned off some code paths. These code paths are not just related to append; they enable durable sync. See where the patch changes "if support append then do x else do y" to do "x" without any check. That is the behavior I want a user to be able to turn off with a flag.


> Enable sync by default and disable append
> -
>
> Key: HADOOP-8230
> URL: https://issues.apache.org/jira/browse/HADOOP-8230
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 1.1.0
>
> Attachments: hadoop-8230.txt
>
>
> Per HDFS-3120 for 1.x let's:
> - Always enable the sync path, which is currently only enabled if 
> dfs.support.append is set
> - Remove the dfs.support.append configuration option. We'll keep the code 
> paths though in case we ever fix append on branch-1, in which case we can add 
> the config option back





[jira] [Updated] (HADOOP-8366) Use ProtoBuf for RpcResponseHeader

2012-05-07 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8366:


Affects Version/s: (was: HA Branch (HDFS-1623))
   0.2.0
   0.3.0

> Use ProtoBuf for RpcResponseHeader
> --
>
> Key: HADOOP-8366
> URL: https://issues.apache.org/jira/browse/HADOOP-8366
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 0.2.0, 0.3.0
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
>Priority: Blocker
> Attachments: hadoop-8366-1.patch
>
>






[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append

2012-05-07 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269920#comment-13269920
 ] 

Suresh Srinivas commented on HADOOP-8230:
-

bq. Do you think there many users who'd want to do this Suresh?
There are several clusters that I support that do not use sync, that currently 
runs with append turned off. 

bq. I'd think the number few and if there any still conscious this option even 
exists, they are probably suffering from the FUD that sync is buggy/broke. We 
should help them get over their misconception?
I agree that the code that is being enabled has been stable for some time, 
which is the main reason why it was ported to 0.20.205. However, I would like 
to retain the existing behavior and not enable a change unnecessarily on these 
clusters. This avoids having to worry about or spend time looking at any 
bugs/changed behavior that might crop up.

For these kinds of changes (see the several token-related changes that 
happened in 1.x), I have always advocated adding a flag so existing deployments 
can stay unaffected. I am asking for the same here. It is even more important 
given that this patch removed an option that existed to turn off new code.

bq. if you feel strongly that we should have a config option that lets people 
keep the previous/broken sync behavior, go for it
The need for an option is a comment on the patch committed in this jira. Sorry 
I could not comment quickly enough, as this patch was committed with a short 
turnaround time. I think it should be addressed as a subsequent patch for this 
jira and not as a separate optional item. Alternatively, we could revert this 
change and rework it to add a flag.

> Enable sync by default and disable append
> -
>
> Key: HADOOP-8230
> URL: https://issues.apache.org/jira/browse/HADOOP-8230
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 1.1.0
>
> Attachments: hadoop-8230.txt
>
>
> Per HDFS-3120 for 1.x let's:
> - Always enable the sync path, which is currently only enabled if 
> dfs.support.append is set
> - Remove the dfs.support.append configuration option. We'll keep the code 
> paths though in case we ever fix append on branch-1, in which case we can add 
> the config option back

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-6546) BloomMapFile can return false negatives

2012-05-07 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-6546:


Target Version/s: 1.1.0

> BloomMapFile can return false negatives
> ---
>
> Key: HADOOP-6546
> URL: https://issues.apache.org/jira/browse/HADOOP-6546
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.20.1
>Reporter: Clark Jefcoat
>Assignee: Clark Jefcoat
> Fix For: 0.21.0
>
> Attachments: HADOOP-6546.patch
>
>
> BloomMapFile can return false negatives when using keys of varying sizes.  If 
> the amount of data written by the write() method of your key class differs 
> between instances of your key, your BloomMapFile may return false negatives.
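A hedged sketch of one way this failure mode can arise (my reading of the 
issue, not the committed patch): if the filter hashes the reused serialization 
buffer's entire backing array instead of only its valid bytes, stale bytes 
from a previous, longer key change the hash:

{code}
// Hedged illustration only; see the committed HADOOP-6546.patch for the
// actual fix.
import java.util.Arrays;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.util.bloom.Key;

public class BloomKeySketch {
  public static void main(String[] args) throws Exception {
    DataOutputBuffer buf = new DataOutputBuffer();
    new Text("a-much-longer-key").write(buf);  // leaves bytes in the array
    buf.reset();
    new Text("short").write(buf);              // shorter key written over them

    // Buggy pattern: getData() returns the whole backing array, including
    // stale bytes from the longer key, so equal keys can hash differently.
    Key bad = new Key(buf.getData());

    // Safe pattern: hash only the valid region of the buffer.
    Key good = new Key(Arrays.copyOf(buf.getData(), buf.getLength()));
    System.out.println(bad.equals(good));      // false: the stale bytes differ
  }
}
{code}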

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8371) Hadoop 1.0.1 release - DFS rollback issues

2012-05-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8371:


Description: See the next comment for details.  (was: h1.Test Setup
All tests were done on a single-node cluster that runs the namenode, 
secondarynamenode, and datanode, all on one machine, running Ubuntu
12.04.
/usr/local/hadoop/ is a soft link to /usr/local/hadoop-0.20.203.0/
/usr/local/hadoop-1.0.1 contains the upgrade version.
h1.Version - 0.20.203.0
* Formatted name node.
* Contents of {dfs.name.dir}/current/VERSION
{quote}
Tue May 08 08:08:57 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Contents of {dfs.name.dir}/previous.checkpoint/VERSION
{quote}
Tue May 08 08:03:35 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Copied a few test files into HDFS.
* Output from "fs -lsr /" command
{quote}
hduser@ruff790:/usr/local/hadoop/bin$ ./hadoop dfs -lsr /
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /test
-rw-r--r-- 1 hduser supergroup 27574849 2012-05-08 08:04 
/test/rr_archive_1655003175_1660003165.gz
-rw-r--r-- 1 hduser supergroup 18065179 2012-05-08 08:04 
/test/twonkyportal.log.2011-12-03.rr.gz
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user/hduser
{quote}
* Executed "hadoop dfsadmin -finalizeUpgrade" (I do not think this is required, 
but i do not think it should matter either).
* Stopped DFS by executing "stop-dfs.sh"

h1. Version - 1.0.1
h2. Upgrade
* Tried starting DFS by running /usr/local/hadoop-1.0.1/bin/start-dfs.sh
* As expected, the namenode failed to start due to a version mismatch.
{quote}
2012-05-08 08:22:38,166 ERROR 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
initialization failed.
java.io.IOException:
File system image contains an old layout version -31.
An upgrade to version -32 is required.
Please restart NameNode with -upgrade option.
{quote}
* Ran /usr/local/hadoop-1.0.1/bin/stop-dfs.sh to stop datanode and 
secondarynamenode.
* Started DFS by running /usr/local/hadoop-1.0.1/bin/start-dfs.sh -upgrade
* Checked upgrade status by calling /usr/local/hadoop-1.0.1/bin/hadoop dfsadmin 
-upgradeProgress status
{quote}
Upgrade for version -32 has been completed.
Upgrade is not finalized.
{quote}
* Contents of {dfs.name.dir}/current/VERSION
{quote}
#Tue May 08 08:25:51 EDT 2012
namespaceID=350250898
cTime=1336479951669
storageType=NAME_NODE
layoutVersion=-32
{quote}
* Contents of {dfs.name.dir}/previous.checkpoint/VERSION
{quote}
Tue May 08 08:03:35 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Contents of {dfs.name.dir}/previous/VERSION
{quote}
#Tue May 08 08:08:57 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Checked to make sure I can list the contents of DFS
* Stop DFS.

h2.Rollback
* Started DFS by running /usr/local/hadoop-1.0.1/bin/start-dfs.sh -rollback
* As per contents of "hadoop-hduser-namenode-ruff790.log", rollback seems to 
have succeeded.
{quote}
2012-05-08 08:37:41,799 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Rolling back storage
directory /usr/local/app/hadoop/tmp/dfs/name.
new LV = -31; new CTime = 0
2012-05-08 08:37:41,801 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Rollback of
/usr/local/app/hadoop/tmp/dfs/name is complete.
{quote}
* Contents of {dfs.name.dir}/current/VERSION
{quote}
Tue May 08 08:37:42 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Contents of {dfs.name.dir}/previous.checkpoint/VERSION
{quote}
#Tue May 08 08:08:57 EDT 2012
namespaceID=350250898
cTime=0
storageType=NAME_NODE
layoutVersion=-31
{quote}
* Checked to make sure I can list the contents of DFS
{quote}
hduser@ruff790:/usr/local/hadoop-1.0.1/bin$ ./hadoop dfs -lsr /
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /test
-rw-r--r-- 1 hduser supergroup 27574849 2012-05-08 08:04 
/test/rr_archive_1655003175_1660003165.gz
-rw-r--r-- 1 hduser supergroup 18065179 2012-05-08 08:04 
/test/twonkyportal.log.2011-12-03.rr.gz
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user
drwxr-xr-x - hduser supergroup 0 2012-05-08 08:04 /user/hduser
{quote}
* However, at this point I could not browse the file system from the WebUI. Then 
I realized that the datanode was not really running. From the datanode log file 
it seems like it had shut down during the rollback process.
{quote}
2012-05-08 08:37:57,953 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
DataNode is shutting
down: org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Unregistered 
data node:
127.0.0.1:50010
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.verifyRequest(NameNode.java:1077)
{quote}
* So i ran "stop-dfs.sh" to shut down namnode and secondarynamenode.
* Next "start-dfs.sh" fail

[jira] [Commented] (HADOOP-8371) Hadoop 1.0.1 release - DFS rollback issues

2012-05-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270706#comment-13270706
 ] 

Suresh Srinivas commented on HADOOP-8371:
-

bq. Started DFS by running /usr/local/hadoop-1.0.1/bin/start-dfs.sh -rollback

When you upgrade from v1 to v2, you do it by running start-dfs.sh -upgrade on 
v2. After the upgrade, to roll back, you have to run start-dfs.sh -rollback on 
the *v1* version of the software, not *v2*, as you have done here. That is the 
reason why you are seeing the problem.

We should still log a bug on why rollback was allowed from 1.0.1, which rolled 
back to namenode state from 0.20.203.
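For readers of this thread, the intended ordering, sketched with the paths used 
in this report (an illustration, not an exact transcript):
{quote}
# upgrade: run the new (1.0.1) version with -upgrade
/usr/local/hadoop-1.0.1/bin/start-dfs.sh -upgrade
# to roll back: stop DFS, then run the old (0.20.203.0) version with -rollback
/usr/local/hadoop-1.0.1/bin/stop-dfs.sh
/usr/local/hadoop/bin/start-dfs.sh -rollback
{quote}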

> Hadoop 1.0.1 release - DFS rollback issues
> --
>
> Key: HADOOP-8371
> URL: https://issues.apache.org/jira/browse/HADOOP-8371
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.0.1
> Environment: All tests were done on a single-node cluster that runs the 
> namenode, secondarynamenode, and datanode, all on one machine, running Ubuntu 
> 12.04
>Reporter: Giri
>Priority: Minor
>  Labels: hdfs
>
> See the next comment for details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6546) BloomMapFile can return false negatives

2012-05-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270731#comment-13270731
 ] 

Suresh Srinivas commented on HADOOP-6546:
-

I committed this patch to branch-1. It should be available in release 1.1.

> BloomMapFile can return false negatives
> ---
>
> Key: HADOOP-6546
> URL: https://issues.apache.org/jira/browse/HADOOP-6546
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.20.1
>Reporter: Clark Jefcoat
>Assignee: Clark Jefcoat
> Fix For: 0.21.0
>
> Attachments: HADOOP-6546.patch
>
>
> BloomMapFile can return false negatives when using keys of varying sizes.  If 
> the amount of data written by the write() method of your key class differs 
> between instance of your key, your BloomMapFile may return false negatives.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-6546) BloomMapFile can return false negatives

2012-05-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-6546:


Fix Version/s: 1.1.0

> BloomMapFile can return false negatives
> ---
>
> Key: HADOOP-6546
> URL: https://issues.apache.org/jira/browse/HADOOP-6546
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 0.20.1
>Reporter: Clark Jefcoat
>Assignee: Clark Jefcoat
> Fix For: 1.1.0, 0.21.0
>
> Attachments: HADOOP-6546.patch
>
>
> BloomMapFile can return false negatives when using keys of varying sizes.  If 
> the amount of data written by the write() method of your key class differs 
> between instance of your key, your BloomMapFile may return false negatives.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character

2012-05-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8372:


Status: Patch Available  (was: Open)

> normalizeHostName() in NetUtils is not working properly in resolving a 
> hostname start with numeric character
> 
>
> Key: HADOOP-8372
> URL: https://issues.apache.org/jira/browse/HADOOP-8372
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, util
>Affects Versions: 0.23.0, 1.0.0
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-8372.patch
>
>
> A valid host name can start with a numeric character (you can refer to RFC952, 
> RFC1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible 
> that in a production environment users name their Hadoop nodes 1hosta, 2hostb, 
> etc. But normalizeHostName() will recognise such a hostname as an IP address 
> and return it directly rather than resolving the real IP address. These nodes 
> will fail to get the correct network topology if the topology 
> script/TableMapping only contains their IPs (without hostnames).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character

2012-05-09 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271540#comment-13271540
 ] 

Suresh Srinivas commented on HADOOP-8372:
-

I was concerned about the performance implications of this change. However, in 
the Sun JDK, IPAddressUtil#isIPv4LiteralAddress() is called, which does a more 
complete check for an IP address before doing a lookup.

Please fix the tests. While at it, please indent the code in the tests 
correctly. Also, optionally, you could reduce a line to {{ return 
InetAddress.getByName(name).getHostAddress(); }}
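Spelled out, the suggested one-liner amounts to something like the following 
sketch (it mirrors the suggestion above, not necessarily the committed patch):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class NormalizeSketch {
  // Resolve every name through InetAddress: IPv4 literals are validated
  // (via IPAddressUtil in the Sun JDK) without a DNS lookup, while a real
  // hostname that merely starts with a digit, such as "1hosta", resolves.
  public static String normalizeHostName(String name) {
    try {
      return InetAddress.getByName(name).getHostAddress();
    } catch (UnknownHostException e) {
      return name;  // unresolvable names are returned unchanged
    }
  }
}
{code}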

> normalizeHostName() in NetUtils is not working properly in resolving a 
> hostname start with numeric character
> 
>
> Key: HADOOP-8372
> URL: https://issues.apache.org/jira/browse/HADOOP-8372
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, util
>Affects Versions: 1.0.0, 0.23.0
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-8372.patch
>
>
> A valid host name can start with a numeric character (you can refer to RFC952, 
> RFC1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible 
> that in a production environment users name their Hadoop nodes 1hosta, 2hostb, 
> etc. But normalizeHostName() will recognise such a hostname as an IP address 
> and return it directly rather than resolving the real IP address. These nodes 
> will fail to get the correct network topology if the topology 
> script/TableMapping only contains their IPs (without hostnames).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character

2012-05-09 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271561#comment-13271561
 ] 

Suresh Srinivas commented on HADOOP-8372:
-

BTW, can you describe the test better, especially the cases 
"3w.org -> xx.xx.xx.xx" and "UnknownHost -> UnknownHost"?

> normalizeHostName() in NetUtils is not working properly in resolving a 
> hostname start with numeric character
> 
>
> Key: HADOOP-8372
> URL: https://issues.apache.org/jira/browse/HADOOP-8372
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, util
>Affects Versions: 1.0.0, 0.23.0
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-8372.patch
>
>
> A valid host name can start with a numeric character (you can refer to RFC952, 
> RFC1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible 
> that in a production environment users name their Hadoop nodes 1hosta, 2hostb, 
> etc. But normalizeHostName() will recognise such a hostname as an IP address 
> and return it directly rather than resolving the real IP address. These nodes 
> will fail to get the correct network topology if the topology 
> script/TableMapping only contains their IPs (without hostnames).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character

2012-05-09 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271592#comment-13271592
 ] 

Suresh Srinivas commented on HADOOP-8372:
-

bq. "3w.org -> xx.xx.xx.xx", it is a public website start with numeric that can 
be resolved by DNS.
This could become an issue. But we could fix it later.

Sorry I was not clear. What I meant in my previous comment was, you could add 
comments to make the test easier to understand. For example, you could method 
level comment to say {{ /** Test for {@link NetUtils#normalizeHostNames }}. 
Also you could add a comment saying, when ipaddress is normalized, same address 
is expected in return and for a resolvable hostname, ipaddress it resolved is 
expected in return.

The reason why I am suggesting this is - our tests are poorly documented. When 
adding new features, lot more time goes into understanding tests and fixing 
them than implementing the feature itself.
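A small sketch of the kind of documented test being asked for (illustrative, 
not the committed test):

{code}
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.net.NetUtils;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TestNormalizeSketch {
  /**
   * Test for {@link NetUtils#normalizeHostNames}: a literal IP address is
   * expected back unchanged; a resolvable hostname is expected to come back
   * as the IP address it resolves to.
   */
  @Test
  public void testNormalizeHostNames() {
    List<String> normalized =
        NetUtils.normalizeHostNames(Arrays.asList("1.2.3.4", "localhost"));
    assertEquals("1.2.3.4", normalized.get(0));  // IP literal returned as-is
    // "localhost" should come back as the address it resolves to, typically
    // 127.0.0.1 (environment-dependent, hence no hard assertion here).
    System.out.println("localhost -> " + normalized.get(1));
  }
}
{code}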

> normalizeHostName() in NetUtils is not working properly in resolving a 
> hostname start with numeric character
> 
>
> Key: HADOOP-8372
> URL: https://issues.apache.org/jira/browse/HADOOP-8372
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, util
>Affects Versions: 1.0.0, 0.23.0
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-8372.patch
>
>
> A valid host name can start with a numeric character (you can refer to RFC952, 
> RFC1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible 
> that in a production environment users name their Hadoop nodes 1hosta, 2hostb, 
> etc. But normalizeHostName() will recognise such a hostname as an IP address 
> and return it directly rather than resolving the real IP address. These nodes 
> will fail to get the correct network topology if the topology 
> script/TableMapping only contains their IPs (without hostnames).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character

2012-05-09 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271594#comment-13271594
 ] 

Suresh Srinivas commented on HADOOP-8372:
-

bq. This could become an issue. But we could fix it later.
By this, I mean, if resolving the address using DNS fails for some reason, we 
could fix the test. So the code that you have added seems fine to me.

> normalizeHostName() in NetUtils is not working properly in resolving a 
> hostname start with numeric character
> 
>
> Key: HADOOP-8372
> URL: https://issues.apache.org/jira/browse/HADOOP-8372
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, util
>Affects Versions: 1.0.0, 0.23.0
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-8372.patch
>
>
> A valid host name can start with a numeric character (you can refer to RFC952, 
> RFC1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible 
> that in a production environment users name their Hadoop nodes 1hosta, 2hostb, 
> etc. But normalizeHostName() will recognise such a hostname as an IP address 
> and return it directly rather than resolving the real IP address. These nodes 
> will fail to get the correct network topology if the topology 
> script/TableMapping only contains their IPs (without hostnames).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character

2012-05-09 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271632#comment-13271632
 ] 

Suresh Srinivas commented on HADOOP-8372:
-

Junping, the patch looks good. Could you please remove the TODO comment? Also, 
can you please use two spaces for indentation instead of tabs in the tests?

> normalizeHostName() in NetUtils is not working properly in resolving a 
> hostname start with numeric character
> 
>
> Key: HADOOP-8372
> URL: https://issues.apache.org/jira/browse/HADOOP-8372
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, util
>Affects Versions: 1.0.0, 0.23.0
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-8372-v2.patch, HADOOP-8372.patch
>
>
> A valid host name can start with a numeric character (you can refer to RFC952, 
> RFC1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible 
> that in a production environment users name their Hadoop nodes 1hosta, 2hostb, 
> etc. But normalizeHostName() will recognise such a hostname as an IP address 
> and return it directly rather than resolving the real IP address. These nodes 
> will fail to get the correct network topology if the topology 
> script/TableMapping only contains their IPs (without hostnames).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character

2012-05-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8372:


Attachment: HADOOP-8372.patch

Minor edit to the patch:
# Removed unused imports in TestNetUtils.java (unrelated to the change from 
this patch)
# Added missing } to {{Test for {@link NetUtils#normalizeHostNames}}

> normalizeHostName() in NetUtils is not working properly in resolving a 
> hostname start with numeric character
> 
>
> Key: HADOOP-8372
> URL: https://issues.apache.org/jira/browse/HADOOP-8372
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, util
>Affects Versions: 1.0.0, 0.23.0
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-8372-v2.patch, HADOOP-8372-v3.patch, 
> HADOOP-8372.patch, HADOOP-8372.patch
>
>
> A valid host name can start with a numeric character (you can refer to RFC952, 
> RFC1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible 
> that in a production environment users name their Hadoop nodes 1hosta, 2hostb, 
> etc. But normalizeHostName() will recognise such a hostname as an IP address 
> and return it directly rather than resolving the real IP address. These nodes 
> will fail to get the correct network topology if the topology 
> script/TableMapping only contains their IPs (without hostnames).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character

2012-05-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8372:


   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch. Thank you Junping.

> normalizeHostName() in NetUtils is not working properly in resolving a 
> hostname start with numeric character
> 
>
> Key: HADOOP-8372
> URL: https://issues.apache.org/jira/browse/HADOOP-8372
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, util
>Affects Versions: 1.0.0, 0.23.0
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HADOOP-8372-v2.patch, HADOOP-8372-v3.patch, 
> HADOOP-8372.patch, HADOOP-8372.patch
>
>
> A valid host name can start with a numeric character (you can refer to RFC952, 
> RFC1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible 
> that in a production environment users name their Hadoop nodes 1hosta, 2hostb, 
> etc. But normalizeHostName() will recognise such a hostname as an IP address 
> and return it directly rather than resolving the real IP address. These nodes 
> will fail to get the correct network topology if the topology 
> script/TableMapping only contains their IPs (without hostnames).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname start with numeric character

2012-05-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8372:


Fix Version/s: 2.0.0

I committed the patch to branch-2 as well.

> normalizeHostName() in NetUtils is not working properly in resolving a 
> hostname start with numeric character
> 
>
> Key: HADOOP-8372
> URL: https://issues.apache.org/jira/browse/HADOOP-8372
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, util
>Affects Versions: 1.0.0, 0.23.0
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HADOOP-8372-v2.patch, HADOOP-8372-v3.patch, 
> HADOOP-8372.patch, HADOOP-8372.patch
>
>
> A valid host name can start with a numeric character (you can refer to RFC952, 
> RFC1123, or http://www.zytrax.com/books/dns/apa/names.html), so it is possible 
> that in a production environment users name their Hadoop nodes 1hosta, 2hostb, 
> etc. But normalizeHostName() will recognise such a hostname as an IP address 
> and return it directly rather than resolving the real IP address. These nodes 
> will fail to get the correct network topology if the topology 
> script/TableMapping only contains their IPs (without hostnames).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-8371) Hadoop 1.0.1 release - DFS rollback issues

2012-05-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-8371.
-

Resolution: Not A Problem
  Assignee: Suresh Srinivas

Rollback is not a problem. 

However, I created a related bug HDFS-3393 to track the issue where rollback 
was allowed on the newer release.

> Hadoop 1.0.1 release - DFS rollback issues
> --
>
> Key: HADOOP-8371
> URL: https://issues.apache.org/jira/browse/HADOOP-8371
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.0.1
> Environment: All tests were done on a single-node cluster that runs the 
> namenode, secondarynamenode, and datanode, all on one machine, running Ubuntu 
> 12.04
>Reporter: Giri
>Assignee: Suresh Srinivas
>Priority: Minor
>  Labels: hdfs
>
> See the next comment for details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8366) Use ProtoBuf for RpcResponseHeader

2012-05-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13272928#comment-13272928
 ] 

Suresh Srinivas commented on HADOOP-8366:
-

Comments:
# Minor: remove the empty lines after {{int callId = response.getCallId();}}
# Minor: remove the empty line in Server#setupResponse before "if (status == 
RpcStatus.SUCCESS)"
# RpcPayloadHeader.proto 
#* Please rename RpcStatus to RpcStatusProto. Also it would be nice to delete 
unnecessary lines.
#* We should make both callId and status mandatory.
#* Rename repsonse_ to response.

Please remember to delete Status.java when you commit the code.

+1 for the patch with these changes.

> Use ProtoBuf for RpcResponseHeader
> --
>
> Key: HADOOP-8366
> URL: https://issues.apache.org/jira/browse/HADOOP-8366
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
>Priority: Blocker
> Attachments: hadoop-8366-1.patch, hadoop-8366-2.patch, 
> hadoop-8366-3.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8367) Better document the declaringClassProtocolName in the rpc headers better

2012-05-16 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276902#comment-13276902
 ] 

Suresh Srinivas commented on HADOOP-8367:
-

Minor comments:
# ProtobufRpcEngine.java
#* Typo: differnt
#* The newly added comment is not very clear. Can you please add more 
information about what you mean by metaProtocols? Also the sentence does not 
read right. It might make sense to capture the same comments from 
hadoop_rpc.proto here.
# hadoop_rpc.proto: some lines go beyond 80 characters. Also the last 
sentence in the newly added comment does not read right.


> Better document the declaringClassProtocolName in the rpc headers better
> 
>
> Key: HADOOP-8367
> URL: https://issues.apache.org/jira/browse/HADOOP-8367
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Sanjay Radia
>Assignee: Sanjay Radia
> Attachments: hadoop-8367-1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8402) Add support for generating pdf clover report to 1.1 release

2012-05-16 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HADOOP-8402:
---

 Summary: Add support for generating pdf clover report to 1.1 
release
 Key: HADOOP-8402
 URL: https://issues.apache.org/jira/browse/HADOOP-8402
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.0.0
Reporter: Suresh Srinivas
Priority: Minor


Add support for generating clover PDF report.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8402) Add support for generating pdf clover report in 1.1 release

2012-05-16 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8402:


Summary: Add support for generating pdf clover report in 1.1 release  (was: 
Add support for generating pdf clover report to 1.1 release)

> Add support for generating pdf clover report in 1.1 release
> ---
>
> Key: HADOOP-8402
> URL: https://issues.apache.org/jira/browse/HADOOP-8402
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.0
>Reporter: Suresh Srinivas
>Priority: Minor
> Attachments: HADOOP-8402.txt
>
>
> Add support for generating clover PDF report.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8402) Add support for generating pdf clover report in 1.1 release

2012-05-16 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8402:


Attachment: HADOOP-8402.txt

> Add support for generating pdf clover report in 1.1 release
> ---
>
> Key: HADOOP-8402
> URL: https://issues.apache.org/jira/browse/HADOOP-8402
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.0
>Reporter: Suresh Srinivas
>Priority: Minor
> Attachments: HADOOP-8402.txt
>
>
> Add support for generating clover PDF report.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8402) Add support for generating pdf clover report in 1.1 release

2012-05-16 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276938#comment-13276938
 ] 

Suresh Srinivas commented on HADOOP-8402:
-

I manually tested and ensure that clover_coverage.pdf is generated with clover 
report summary.

> Add support for generating pdf clover report in 1.1 release
> ---
>
> Key: HADOOP-8402
> URL: https://issues.apache.org/jira/browse/HADOOP-8402
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.0
>Reporter: Suresh Srinivas
>Priority: Minor
> Attachments: HADOOP-8402.txt
>
>
> Add support for generating clover PDF report.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Comment Edited] (HADOOP-8402) Add support for generating pdf clover report in 1.1 release

2012-05-16 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276938#comment-13276938
 ] 

Suresh Srinivas edited comment on HADOOP-8402 at 5/16/12 5:59 PM:
--

I manually tested and ensured that clover_coverage.pdf is generated with clover 
report summary.

  was (Author: sureshms):
I manually tested and ensure that clover_coverage.pdf is generated with 
clover report summary.
  
> Add support for generating pdf clover report in 1.1 release
> ---
>
> Key: HADOOP-8402
> URL: https://issues.apache.org/jira/browse/HADOOP-8402
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.0
>Reporter: Suresh Srinivas
>Priority: Minor
> Attachments: HADOOP-8402.txt
>
>
> Add support for generating clover PDF report.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8409) Address Hadoop path related issues on Windows

2012-06-01 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287428#comment-13287428
 ] 

Suresh Srinivas commented on HADOOP-8409:
-

bq. In line with this, I spent quite a bit of time thinking about the pros and 
cons of having the Path object support backslashes vs. not. Both approaches 
have legitimate pros and cons. Once I sum them up on my end, I'll reply back.

Please also look at the issues raised in HADOOP-8139 and the reasons why we 
did not support Windows paths on HDFS. 

> Address Hadoop path related issues on Windows
> -
>
> Key: HADOOP-8409
> URL: https://issues.apache.org/jira/browse/HADOOP-8409
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test, util
>Affects Versions: 1.0.0
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-8409-branch-1-win.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> There are multiple places in prod and test code where Windows paths are not 
> handled properly. From a high level this can be summarized as:
> 1. Windows paths are not necessarily valid DFS paths (while Unix paths are)
> 2. Windows paths are not necessarily valid URIs (while Unix paths are)
> #1 causes a number of tests to fail because they implicitly assume that local 
> paths are valid DFS paths (by extracting the DFS test path from, for example, 
> the "test.build.data" property)
> #2 causes issues when URIs are created directly from path strings passed in by 
> the user
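A minimal illustration of point #2 in the description, as plain Java (not 
Hadoop code):

{code}
import java.net.URI;
import java.net.URISyntaxException;

public class WindowsPathSketch {
  public static void main(String[] args) {
    try {
      // A Unix path is a valid URI...
      URI ok = new URI("/tmp/data/file.txt");
      System.out.println(ok);
      // ...but a Windows path is not: '\' is an illegal URI character,
      // so this throws URISyntaxException.
      URI bad = new URI("C:\\data\\file.txt");
      System.out.println(bad);
    } catch (URISyntaxException e) {
      System.out.println("Not a valid URI: " + e.getMessage());
    }
  }
}
{code}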

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8485) Don't hardcode "Apache Hadoop 0.23" in the docs

2012-06-06 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8485:


Summary: Don't hardcode "Apache Hadoop 0.23" in the docs  (was: Don't 
harcode "Apache Hadoop 0.23" in the docs)

> Don't hardcode "Apache Hadoop 0.23" in the docs
> ---
>
> Key: HADOOP-8485
> URL: https://issues.apache.org/jira/browse/HADOOP-8485
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Minor
> Attachments: hadoop-8485.txt
>
>
> The docs currently hardcode the string "Apache Hadoop 0.23" and 
> "hadoop-0.20.205" in the main page.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8510) Implement auto-refresh for dsfhealth.jsp and jobtracker.jsp

2012-06-14 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13295392#comment-13295392
 ] 

Suresh Srinivas commented on HADOOP-8510:
-

Good idea.

bq. I am pretty confused by which version to patch
Submit the patch against trunk - svn repo: 
https://svn.apache.org/repos/asf/hadoop/common/trunk. 

The JSPs to change are in:
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/
hadoop-mapreduce-project/src/webapps/job

Once this is committed, we can put this change into the 1.1 release.

> Implement auto-refresh for dsfhealth.jsp and jobtracker.jsp
> ---
>
> Key: HADOOP-8510
> URL: https://issues.apache.org/jira/browse/HADOOP-8510
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.0.1
>Reporter: Lewis John McGibbney
>Priority: Trivial
>
> A simple auto-refresh switch would be nice from within the webapp. I am 
> pretty confused about which version to patch; I've looked in trunk and find 
> myself even more confused. 
> If someone would be kind enough to point out where I can check out the code 
> and patch it to include this issue, then I'll happily submit the trivial patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8510) Implement auto-refresh for dsfhealth.jsp and jobtracker.jsp

2012-06-15 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13295755#comment-13295755
 ] 

Suresh Srinivas commented on HADOOP-8510:
-

bq. MAPREDUCE-3842
This is related to the new YARN MapReduce and unrelated to the files you wanted 
to change. It should be okay to change the JSPs Lewis pointed out.

> Implement auto-refresh for dsfhealth.jsp and jobtracker.jsp
> ---
>
> Key: HADOOP-8510
> URL: https://issues.apache.org/jira/browse/HADOOP-8510
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.0.1
>Reporter: Lewis John McGibbney
>Priority: Trivial
>
> A simple auto-refresh switch would be nice from within the webapp. I am 
> pretty confused about which version to patch; I've looked in trunk and find 
> myself even more confused. 
> If someone would be kind enough to point out where I can check out the code 
> and patch it to include this issue, then I'll happily submit the trivial patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8533) Remove Parallel Call in IPC

2012-06-26 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HADOOP-8533:
---

 Summary: Remove Parallel Call in IPC
 Key: HADOOP-8533
 URL: https://issues.apache.org/jira/browse/HADOOP-8533
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 3.0.0


From what I know, I do not think any one uses Parallel Call. I also think it 
is not tested very well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8530) Potential deadlock in IPC

2012-06-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401663#comment-13401663
 ] 

Suresh Srinivas commented on HADOOP-8530:
-

Parallel Call is not used, so this may not be an important problem to fix. 
Created HADOOP-8533 to address this by removing Parallel Call.


> Potential deadlock in IPC
> -
>
> Key: HADOOP-8530
> URL: https://issues.apache.org/jira/browse/HADOOP-8530
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Tom White
> Attachments: 1_jcarder_result_0.dot.png
>
>
> This cycle (see attached image, and explanation here: 
> http://www.jcarder.org/manual.html#analysis) was found with jcarder in 
> branch-1 (affects trunk too).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8059) Add javadoc to InterfaceAudience and InterfaceStability

2012-06-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13402358#comment-13402358
 ] 

Suresh Srinivas commented on HADOOP-8059:
-

Brandon, I was thinking of capturing information along the following lines. 
We should add some of the following comments:
# InterfaceAudience 
#* All public classes must have an InterfaceAudience annotation. Public classes 
that are not marked with this annotation must be considered by default as 
InterfaceAudience#Private.
#* External applications must only use classes that are marked 
InterfaceAudience#Public. Avoid using non-public classes, as these classes 
could be removed or changed in incompatible ways.
#* Internal projects must only use classes that are marked 
InterfaceAudience#LimitedPrivate or InterfaceAudience#Public.
#* Methods may have a different annotation that is more restrictive compared 
to the audience classification of the class. Example: a class might be 
InterfaceAudience#Public, but a method may be InterfaceAudience#LimitedPrivate.

# InterfaceStability
#* All classes that are annotated with InterfaceAudience#Public or 
LimitedPrivate must have an InterfaceStability annotation.
#* Classes that are InterfaceAudience#Private are to be considered unstable 
unless a different InterfaceStability annotation states otherwise.
#* Incompatible changes must not be made to classes marked as stable.
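A hedged illustration of these rules (the annotation types are the real ones 
from org.apache.hadoop.classification; the class itself is made up):

{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

@InterfaceAudience.Public
@InterfaceStability.Stable
public class ExampleService {
  /** Safe for external applications to call. */
  public void publicApi() { }

  /** More restrictive than the class-level audience, per the rules above. */
  @InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
  public void internalHook() { }
}
{code}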


> Add javadoc to InterfaceAudience and InterfaceStability
> ---
>
> Key: HADOOP-8059
> URL: https://issues.apache.org/jira/browse/HADOOP-8059
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Brandon Li
> Attachments: HADOOP-8059.patch
>
>
> InterfaceAudience and InterfaceStability javadoc is incomplete. The details 
> from HADOOP-5073.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8059) Add javadoc to InterfaceAudience and InterfaceStability

2012-06-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8059:


Affects Version/s: (was: 0.24.0)
   3.0.0
   2.0.0-alpha
   Status: Patch Available  (was: Open)

> Add javadoc to InterfaceAudience and InterfaceStability
> ---
>
> Key: HADOOP-8059
> URL: https://issues.apache.org/jira/browse/HADOOP-8059
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Brandon Li
> Attachments: HADOOP-8059.patch, HADOOP-8059.patch
>
>
> InterfaceAudience and InterfaceStability javadoc is incomplete. The details 
> from HADOOP-5073.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8059) Add javadoc to InterfaceAudience and InterfaceStability

2012-06-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8059:


   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 for the patch. I committed it. Thank you Brandon.

> Add javadoc to InterfaceAudience and InterfaceStability
> ---
>
> Key: HADOOP-8059
> URL: https://issues.apache.org/jira/browse/HADOOP-8059
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Brandon Li
> Fix For: 3.0.0
>
> Attachments: HADOOP-8059.patch, HADOOP-8059.patch
>
>
> InterfaceAudience and InterfaceStability javadoc is incomplete. The details 
> from HADOOP-5073.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8533) Remove Parallel Call in IPC

2012-07-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13405383#comment-13405383
 ] 

Suresh Srinivas commented on HADOOP-8533:
-

+1 for the patch.

> Remove Parallel Call in IPC
> ---
>
> Key: HADOOP-8533
> URL: https://issues.apache.org/jira/browse/HADOOP-8533
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Suresh Srinivas
>Assignee: Brandon Li
> Fix For: 3.0.0
>
> Attachments: HADOOP-8533.patch
>
>
> From what I know, I do not think any one uses Parallel Call. I also think it 
> is not tested very well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8533) Remove Parallel Call in IPC

2012-07-02 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8533:


  Description: From what I know, I do not think anyone uses Parallel 
Call. I also think it is not tested very well.  (was: From what I know, I do 
not think any one uses Parallel Call. I also think it is not tested very well.)
Affects Version/s: 3.0.0
   1.0.0
   2.0.0-alpha
   Issue Type: Improvement  (was: Bug)

> Remove Parallel Call in IPC
> ---
>
> Key: HADOOP-8533
> URL: https://issues.apache.org/jira/browse/HADOOP-8533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 1.0.0, 2.0.0-alpha, 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Brandon Li
> Fix For: 3.0.0
>
> Attachments: HADOOP-8533.patch
>
>
> From what I know, I do not think anyone uses Parallel Call. I also think it 
> is not tested very well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8533) Remove Parallel Call in IPC

2012-07-02 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8533:


  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I committed the patch. Thank you Brandon.

> Remove Parallel Call in IPC
> ---
>
> Key: HADOOP-8533
> URL: https://issues.apache.org/jira/browse/HADOOP-8533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 1.0.0, 2.0.0-alpha, 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Brandon Li
> Fix For: 3.0.0
>
> Attachments: HADOOP-8533.patch
>
>
> From what I know, I do not think anyone uses Parallel Call. I also think it 
> is not tested very well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8533) Remove Parallel Call in IPC

2012-07-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13405412#comment-13405412
 ] 

Suresh Srinivas commented on HADOOP-8533:
-

sounds good.

> Remove Parallel Call in IPC
> ---
>
> Key: HADOOP-8533
> URL: https://issues.apache.org/jira/browse/HADOOP-8533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 1.0.0, 2.0.0-alpha, 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Brandon Li
> Fix For: 3.0.0
>
> Attachments: HADOOP-8533.patch
>
>
> From what I know, I do not think anyone uses Parallel Call. I also think it 
> is not tested very well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8533) Remove Parallel Call in IPC

2012-07-02 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8533:


Fix Version/s: 2.0.1-alpha
 Release Note: Merged the change to branch-2

> Remove Parallel Call in IPC
> ---
>
> Key: HADOOP-8533
> URL: https://issues.apache.org/jira/browse/HADOOP-8533
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 1.0.0, 2.0.0-alpha, 3.0.0
>Reporter: Suresh Srinivas
>Assignee: Brandon Li
> Fix For: 2.0.1-alpha, 3.0.0
>
> Attachments: HADOOP-8533.patch
>
>
> From what I know, I do not think anyone uses Parallel Call. I also think it 
> is not tested very well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8434) TestConfiguration currently has no tests for direct setter methods

2012-07-03 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8434:


Attachment: HADOOP-8434-2.patch

> TestConfiguration currently has no tests for direct setter methods
> --
>
> Key: HADOOP-8434
> URL: https://issues.apache.org/jira/browse/HADOOP-8434
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: madhukara phatak
>  Labels: newbie
> Attachments: HADOOP-8434-1.patch, HADOOP-8434-2.patch, 
> HADOOP-8434.patch
>
>
> Jan van der Lugt noticed this on HADOOP-8415.
> bq. Just FYI, there are no tests for setFloat, setInt, setLong, etc. Might be 
> better to add all of those at the same time.
> Would be good to have (coverage-wise first, regression-wise second) explicit 
> tests for each of the setter methods, although other projects' tests do 
> test this extensively.
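
A minimal sketch of the kind of direct-setter tests being asked for (the 
property names and values below are illustrative, not taken from any of the 
attached patches):

{code}
import junit.framework.TestCase;
import org.apache.hadoop.conf.Configuration;

public class TestConfigurationSetters extends TestCase {
  public void testDirectSetters() {
    Configuration conf = new Configuration();
    conf.setInt("test.int", 42);                      // int round-trip
    assertEquals(42, conf.getInt("test.int", 0));
    conf.setLong("test.long", 42L);                   // long round-trip
    assertEquals(42L, conf.getLong("test.long", 0L));
    conf.setFloat("test.float", 3.5f);                // float round-trip
    assertEquals(3.5f, conf.getFloat("test.float", 0f), 0.0f);
    conf.setBoolean("test.bool", true);               // boolean round-trip
    assertTrue(conf.getBoolean("test.bool", false));
  }
}
{code}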

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8434) TestConfiguration currently has no tests for direct setter methods

2012-07-03 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13405966#comment-13405966
 ] 

Suresh Srinivas commented on HADOOP-8434:
-

Made a minor update to the patch: a line was longer than 80 columns, and I 
removed unnecessary IOException throws (they were in the existing code, not in 
the code from the patch).

> TestConfiguration currently has no tests for direct setter methods
> --
>
> Key: HADOOP-8434
> URL: https://issues.apache.org/jira/browse/HADOOP-8434
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: madhukara phatak
>  Labels: newbie
> Attachments: HADOOP-8434-1.patch, HADOOP-8434-2.patch, 
> HADOOP-8434.patch
>
>
> Jan van der Lugt noticed this on HADOOP-8415.
> bq. Just FYI, there are no tests for setFloat, setInt, setLong, etc. Might be 
> better to add all of those at the same time.
> Would be good to have (coverage-wise first, regression-wise second) explicit 
> tests for each of the setter methods, although other projects' tests do 
> test this extensively.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8434) TestConfiguration currently has no tests for direct setter methods

2012-07-03 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8434:


Issue Type: Test  (was: Bug)

> TestConfiguration currently has no tests for direct setter methods
> --
>
> Key: HADOOP-8434
> URL: https://issues.apache.org/jira/browse/HADOOP-8434
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: madhukara phatak
>  Labels: newbie
> Attachments: HADOOP-8434-1.patch, HADOOP-8434-2.patch, 
> HADOOP-8434.patch
>
>
> Jan van der Lugt noticed this on HADOOP-8415.
> bq. Just FYI, there are no tests for setFloat, setInt, setLong, etc. Might be 
> better to add all of those at the same time.
> Would be good to have (coverage-wise first, regression-wise second) explicit 
> tests for each of the setter methods, although other projects' tests do 
> test this extensively.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8434) TestConfiguration currently has no tests for direct setter methods

2012-07-03 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8434:


   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch. Thank you, Madhukara, for providing it.

> TestConfiguration currently has no tests for direct setter methods
> --
>
> Key: HADOOP-8434
> URL: https://issues.apache.org/jira/browse/HADOOP-8434
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: madhukara phatak
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-8434-1.patch, HADOOP-8434-2.patch, 
> HADOOP-8434.patch
>
>
> Jan van der Lugt noticed this on HADOOP-8415.
> bq. Just FYI, there are no tests for setFloat, setInt, setLong, etc. Might be 
> better to add all of those at the same time.
> Would be good to have (coverage-wise first, regression-wise second) explicit 
> tests for each of the setter methods, although other projects' tests do 
> test this extensively.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7818) DiskChecker#checkDir should fail if the directory is not executable

2012-07-03 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406027#comment-13406027
 ] 

Suresh Srinivas commented on HADOOP-7818:
-

Eli, since you have the context on this JIRA, could you please review and 
commit the patch if you have time? If you are busy, I will commit it.

> DiskChecker#checkDir should fail if the directory is not executable
> ---
>
> Key: HADOOP-7818
> URL: https://issues.apache.org/jira/browse/HADOOP-7818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 0.20.205.0, 0.23.0, 0.24.0
>Reporter: Eli Collins
>Assignee: madhukara phatak
>Priority: Minor
> Attachments: HADOOP-7818-1.patch, HADOOP-7818-2.patch, 
> HADOOP-7818.patch
>
>
> DiskChecker#checkDir fails if a directory can't be created, read, or written, 
> but does not fail if the directory exists and is not executable. This causes 
> subsequent code to think the directory is OK, only to fail later due to an 
> inability to access the directory (e.g. see MAPREDUCE-2921). I propose that 
> checkDir fail if the directory is not executable. Looking at the uses, this 
> should be fine; I think the check was omitted because checkDir is often used 
> to create directories, and the directories it creates are executable.
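
A minimal sketch of where the proposed check would sit inside 
DiskChecker#checkDir (this is not the attached patch; mkdirsWithExistsCheck 
is the existing private helper, and File.canExecute() requires Java 6+):

{code}
public static void checkDir(File dir) throws DiskErrorException {
  if (!mkdirsWithExistsCheck(dir)) {
    throw new DiskErrorException("can not create directory: " + dir);
  }
  if (!dir.canRead()) {
    throw new DiskErrorException("directory is not readable: " + dir);
  }
  if (!dir.canWrite()) {
    throw new DiskErrorException("directory is not writable: " + dir);
  }
  // The new check proposed by this JIRA: a non-executable directory
  // cannot be traversed, so fail fast here.
  if (!dir.canExecute()) {
    throw new DiskErrorException("directory is not executable: " + dir);
  }
}
{code}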

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8362) Improve exception message when Configuration.set() is called with a null key or value

2012-07-03 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8362:


Attachment: HADOOP-8362.9.patch

Minor changes to fix indentation issues.

Madhukara, in the future, please follow the coding guidelines, as Colin had 
suggested.

> Improve exception message when Configuration.set() is called with a null key 
> or value
> -
>
> Key: HADOOP-8362
> URL: https://issues.apache.org/jira/browse/HADOOP-8362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: madhukara phatak
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-8362-1.patch, HADOOP-8362-2.patch, 
> HADOOP-8362-3.patch, HADOOP-8362-4.patch, HADOOP-8362-5.patch, 
> HADOOP-8362-6.patch, HADOOP-8362-7.patch, HADOOP-8362-8.patch, 
> HADOOP-8362.9.patch, HADOOP-8362.patch
>
>
> Currently, calling Configuration.set(...) with a null value results in a 
> NullPointerException within Properties.setProperty. We should check for null 
> key/value and throw a better exception.
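
A minimal sketch of the proposed check (assuming Configuration keeps its 
values in a java.util.Properties via the existing getProps() helper; this is 
not the committed patch):

{code}
public void set(String name, String value) {
  if (name == null) {
    throw new IllegalArgumentException("Property name must not be null");
  }
  if (value == null) {
    throw new IllegalArgumentException(
        "The value of property " + name + " must not be null");
  }
  getProps().setProperty(name, value);
  // ... existing bookkeeping (overlays, update tracking) unchanged ...
}
{code}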

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira





[jira] [Updated] (HADOOP-8362) Improve exception message when Configuration.set() is called with a null key or value

2012-07-03 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8362:


   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed the patch. Thank you Madhukara.

> Improve exception message when Configuration.set() is called with a null key 
> or value
> -
>
> Key: HADOOP-8362
> URL: https://issues.apache.org/jira/browse/HADOOP-8362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: madhukara phatak
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-8362-1.patch, HADOOP-8362-2.patch, 
> HADOOP-8362-3.patch, HADOOP-8362-4.patch, HADOOP-8362-5.patch, 
> HADOOP-8362-6.patch, HADOOP-8362-7.patch, HADOOP-8362-8.patch, 
> HADOOP-8362.9.patch, HADOOP-8362.patch
>
>
> Currently, calling Configuration.set(...) with a null value results in a 
> NullPointerException within Properties.setProperty. We should check for null 
> key/value and throw a better exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7818) DiskChecker#checkDir should fail if the directory is not executable

2012-07-03 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406118#comment-13406118
 ] 

Suresh Srinivas commented on HADOOP-7818:
-

Please follow the coding conventions - 
http://wiki.apache.org/hadoop/CodeReviewChecklist . I fixed these in the other 
patches. Some examples (see the sketch below):
# Please fix the indentation - {{if (!dir.canExecute)}} has an extra white 
space preceding it
# Please rename _checkDirs to checkDirs
# Please use a space before and after "+"
# {{catch}} should be on the same line as the try block's closing brace
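
For instance (illustrative only, not lines from the patch):

{code}
// Before:
try {
  _checkDirs(dir);
}
catch (DiskErrorException e) {
  throw new IOException("bad dir: "+dir);
}

// After: method renamed, spaces around "+", and catch on the same
// line as the closing brace of the try block:
try {
  checkDirs(dir);
} catch (DiskErrorException e) {
  throw new IOException("bad dir: " + dir);
}
{code}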



> DiskChecker#checkDir should fail if the directory is not executable
> ---
>
> Key: HADOOP-7818
> URL: https://issues.apache.org/jira/browse/HADOOP-7818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 0.20.205.0, 0.23.0, 0.24.0
>Reporter: Eli Collins
>Assignee: madhukara phatak
>Priority: Minor
> Attachments: HADOOP-7818-1.patch, HADOOP-7818-2.patch, 
> HADOOP-7818.patch
>
>
> DiskChecker#checkDir fails if a directory can't be created, read, or written, 
> but does not fail if the directory exists and is not executable. This causes 
> subsequent code to think the directory is OK, only to fail later due to an 
> inability to access the directory (e.g. see MAPREDUCE-2921). I propose that 
> checkDir fail if the directory is not executable. Looking at the uses, this 
> should be fine; I think the check was omitted because checkDir is often used 
> to create directories, and the directories it creates are executable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8552) Conflict: Same security.log.file for multiple users.

2012-07-03 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406123#comment-13406123
 ] 

Suresh Srinivas commented on HADOOP-8552:
-

The username is in the log entries, right? Can you describe the problem in more detail?

> Conflict: Same security.log.file for multiple users. 
> -
>
> Key: HADOOP-8552
> URL: https://issues.apache.org/jira/browse/HADOOP-8552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, security
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Karthik Kambatla
>
> In log4j.properties, hadoop.security.log.file is set to SecurityAuth.audit. 
> In the presence of multiple users, this can lead to a potential conflict.
> Adding the username to the log file name would avoid this scenario.
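
For example, a per-user audit file name could be derived in log4j.properties 
along these lines (a sketch, not the attached patch; DRFAS is the existing 
security appender, and ${user.name} is expanded from the JVM system property):

{code}
hadoop.security.log.file=SecurityAuth-${user.name}.audit
log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
{code}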

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8564) Create a Windows native InputStream class to address datanode concurrent reading and writing issue

2012-07-05 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13407491#comment-13407491
 ] 

Suresh Srinivas commented on HADOOP-8564:
-

+1 for the second option. This will also allow adding future optimizations at 
the stream level on Windows, similar to the ones done for Linux.

> Create a Windows native InputStream class to address datanode concurrent 
> reading and writing issue
> --
>
> Key: HADOOP-8564
> URL: https://issues.apache.org/jira/browse/HADOOP-8564
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1-win
>Reporter: Chuan Liu
>Assignee: Chuan Liu
>
> HDFS files are made up of blocks. First, let’s look at writing. When the data 
> is written to a datanode, an active or temporary file is created to receive 
> packets. After the last packet for the block is received, we will finalize 
> the block. One step during finalization is to rename the block file to a new 
> directory. The relevant code can be found via the call sequence: 
> FSDataSet.finalizeBlockInternal -> FSDir.addBlock.
> {code} 
> if ( ! metaData.renameTo( newmeta ) ||
> ! src.renameTo( dest ) ) {
>   throw new IOException( "could not move files for " + b +
>  " from tmp to " + 
>  dest.getAbsolutePath() );
> }
> {code}
> Let’s then switch to reading. On HDFS, it is expected that clients can also 
> read these unfinished blocks. So when read calls from a client reach the 
> datanode, the datanode will open an input stream on the unfinished block file.
> The problem comes in when the file is opened for reading while the datanode 
> receives the last packet from the client and tries to rename the finished 
> block file. 
> This operation will succeed on Linux, but not on Windows. The behavior can 
> be modified on Windows to open the file with the FILE_SHARE_DELETE flag on, i.e. 
> sharing the delete (including renaming) permission with other processes while 
> opening the file. There is also a Java bug ([id 
> 6357433|http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6357433]) reported 
> a while back on this. However, since this behavior exists for Java on Windows 
> since JDK 1.0, the Java developers do not want to break the backward 
> compatibility on this behavior. Instead, a new file system API is proposed in 
> JDK 7.
> As outlined in the [Java forum|http://www.java.net/node/645421] by the Java 
> developer (kbr), there are three ways to fix the problem:
> # Use different mechanism in the application in dealing with files.
> # Create a new implementation of InputStream abstract class using Windows 
> native code.
> # Patch JDK with a private patch that alters FileInputStream behavior.
> The third option cannot fix the problem for users using the Oracle JDK.
> We discussed some options for the first approach. For example, one option is 
> to use two-phase renaming, i.e. first create a hardlink, then remove the old 
> hardlink when the read is finished. This option was thought to be rather 
> pervasive.  
> Another option discussed is to change the HDFS behavior on Windows by not 
> allowing clients to read unfinished blocks. However, this behavior change is 
> thought to be problematic and may affect other applications built on top of 
> HDFS.
> For all the reasons discussed above, we will use the second approach to 
> address the problem.
> If there are better options to fix the problem, we would also like to hear 
> about them.
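
As a side note on the JDK 7 API mentioned above: streams opened through 
java.nio.file share the delete/rename permission on Windows, unlike 
FileInputStream. A small sketch, independent of the proposed native class:

{code}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SharedReadExample {
  public static void main(String[] args) throws Exception {
    Path blockFile = Paths.get(args[0]);
    // On Windows, NIO opens the file with FILE_SHARE_DELETE, so a
    // concurrent rename of blockFile by another thread/process succeeds.
    try (InputStream in = Files.newInputStream(blockFile)) {
      int b = in.read();
      System.out.println("first byte: " + b);
    }
  }
}
{code}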

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append

2012-07-06 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13408193#comment-13408193
 ] 

Suresh Srinivas commented on HADOOP-8230:
-

I had marked HADOOP-8365 as a blocker for 1.1.0. 

Since HADOOP-8365 has not been fixed yet for 1.1.0, I am -1 on this patch. If 
HADOOP-8365 gets fixed, I will remove my -1.


> Enable sync by default and disable append
> -
>
> Key: HADOOP-8230
> URL: https://issues.apache.org/jira/browse/HADOOP-8230
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 1.1.0
>
> Attachments: hadoop-8230.txt
>
>
> Per HDFS-3120 for 1.x let's:
> - Always enable the sync path, which is currently only enabled if 
> dfs.support.append is set
> - Remove the dfs.support.append configuration option. We'll keep the code 
> paths though in case we ever fix append on branch-1, in which case we can add 
> the config option back

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8567) Backport conf servlet with dump running configuration to branch 1.x

2012-07-06 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13408394#comment-13408394
 ] 

Suresh Srinivas commented on HADOOP-8567:
-

+1 for the backport. This will be a very useful feature on the stable release.

> Backport conf servlet with dump running configuration to branch 1.x
> ---
>
> Key: HADOOP-8567
> URL: https://issues.apache.org/jira/browse/HADOOP-8567
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: conf
>Affects Versions: 1.0.3
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: 0.21.1, 2.0.1-alpha
>
>
> HADOOP-6408 provided a conf servlet that can dump the running configuration, 
> which greatly helps admins troubleshoot configuration issues. However, that 
> patch works only on branches after 0.21 and should be backported to branch 1.x.
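
Once the servlet is available, the live configuration can be fetched from a 
daemon's HTTP port, e.g. (host and port here are illustrative):

{code}
curl http://namenode.example.com:50070/conf
{code}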

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8365) Provide ability to disable working sync

2012-07-09 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13409766#comment-13409766
 ] 

Suresh Srinivas commented on HADOOP-8365:
-

Comments:
# The check added to FSNamesystem.java also needs to be added to DataNode.java 
and FSDataset.java, where the append support check was done earlier.
# Not sure why you are calling it *broken* sync. Can you remove "broken" from 
the variable and configuration names?


> Provide ability to disable working sync
> ---
>
> Key: HADOOP-8365
> URL: https://issues.apache.org/jira/browse/HADOOP-8365
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Blocker
> Attachments: hadoop-8365.txt
>
>
> Per HADOOP-8230 there's a request for a flag to disable the sync code paths 
> that dfs.support.append used to enable. The sync method itself will still be 
> available and have a broken implementation as that was the behavior before 
> HADOOP-8230. This config flag should default to false as the primary 
> motivation for HADOOP-8230 is so HBase works out-of-the-box with Hadoop 1.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8579) Websites for HDFS and MapReduce both send users to video training resource which is non-public

2012-07-09 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13409968#comment-13409968
 ] 

Suresh Srinivas commented on HADOOP-8579:
-

Harsh, please ensure the link (if you are retaining it) follows the directions 
given in HADOOP-5754.

> Websites for HDFS and MapReduce both send users to video training resource 
> which is non-public
> --
>
> Key: HADOOP-8579
> URL: https://issues.apache.org/jira/browse/HADOOP-8579
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: website
>Reporter: David L. Willson
>Assignee: Harsh J
>Priority: Minor
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The main pages for HDFS and MapReduce send new users to an unavailable 
> training resource.
> These two pages:
> http://hadoop.apache.org/mapreduce/
> http://hadoop.apache.org/hdfs/
> Link to this page:
> http://vimeo.com/3584536
> That page is not public, and not shared to all registered Vimeo users, and I 
> see nothing indicating how to ask for access to the resource.
> Please make the vids public, or remove the link of disappointment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8579) Websites for HDFS and MapReduce both send users to video training resource which is non-public

2012-07-09 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13409971#comment-13409971
 ] 

Suresh Srinivas commented on HADOOP-8579:
-

BTW my vote is to remove that link altogether, since it is hard to make sure 
that the video adheres to the guidelines from HADOOP-5754.

> Websites for HDFS and MapReduce both send users to video training resource 
> which is non-public
> --
>
> Key: HADOOP-8579
> URL: https://issues.apache.org/jira/browse/HADOOP-8579
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: website
>Reporter: David L. Willson
>Assignee: Harsh J
>Priority: Minor
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The main pages for HDFS and MapReduce send new users to an unavailable 
> training resource.
> These two pages:
> http://hadoop.apache.org/mapreduce/
> http://hadoop.apache.org/hdfs/
> Link to this page:
> http://vimeo.com/3584536
> That page is not public, and not shared to all registered Vimeo users, and I 
> see nothing indicating how to ask for access to the resource.
> Please make the vids public, or remove the link of disappointment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8579) Websites for HDFS and MapReduce both send users to video training resource which is non-public

2012-07-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13410544#comment-13410544
 ] 

Suresh Srinivas commented on HADOOP-8579:
-

Sounds good. Do we have such a page?

> Websites for HDFS and MapReduce both send users to video training resource 
> which is non-public
> --
>
> Key: HADOOP-8579
> URL: https://issues.apache.org/jira/browse/HADOOP-8579
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: website
>Reporter: David L. Willson
>Assignee: Harsh J
>Priority: Minor
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The main pages for HDFS and MapReduce send new users to an unavailable 
> training resource.
> These two pages:
> http://hadoop.apache.org/mapreduce/
> http://hadoop.apache.org/hdfs/
> Link to this page:
> http://vimeo.com/3584536
> That page is not public, and not shared to all registered Vimeo users, and I 
> see nothing indicating how to ask for access to the resource.
> Please make the vids public, or remove the link of disappointment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8365) Provide ability to disable working sync

2012-07-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13411248#comment-13411248
 ] 

Suresh Srinivas commented on HADOOP-8365:
-

+1 for the patch.

> Provide ability to disable working sync
> ---
>
> Key: HADOOP-8365
> URL: https://issues.apache.org/jira/browse/HADOOP-8365
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Blocker
> Attachments: hadoop-8365.txt, hadoop-8365.txt
>
>
> Per HADOOP-8230 there's a request for a flag to disable the sync code paths 
> that dfs.support.append used to enable. The sync method itself will still be 
> available and have a broken implementation as that was the behavior before 
> HADOOP-8230. This config flag should default to false as the primary 
> motivation for HADOOP-8230 is so HBase works out-of-the-box with Hadoop 1.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7753) Support fadvise and sync_data_range in NativeIO, add ReadaheadPool class

2012-07-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13412275#comment-13412275
 ] 

Suresh Srinivas commented on HADOOP-7753:
-

Brandon, +1 for the change. What tests did you run?

> Support fadvise and sync_data_range in NativeIO, add ReadaheadPool class
> 
>
> Key: HADOOP-7753
> URL: https://issues.apache.org/jira/browse/HADOOP-7753
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io, native, performance
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 0.23.0
>
> Attachments: HADOOP-7753.branch-1.patch, HADOOP-7753.branch-1.patch, 
> hadoop-7753.txt, hadoop-7753.txt, hadoop-7753.txt, hadoop-7753.txt, 
> hadoop-7753.txt
>
>
> This JIRA adds JNI wrappers for sync_data_range and posix_fadvise. It also 
> implements a ReadaheadPool class for future use from HDFS and MapReduce.
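
A rough usage sketch of the wrapper this JIRA adds (method and constant 
names as in the trunk patch; treat the details as illustrative):

{code}
import java.io.File;
import java.io.FileInputStream;
import org.apache.hadoop.io.nativeio.NativeIO;

public class FadviseExample {
  public static void main(String[] args) throws Exception {
    File f = new File(args[0]);
    FileInputStream in = new FileInputStream(f);
    byte[] buf = new byte[64 * 1024];
    while (in.read(buf) != -1) {
      // ... process the data sequentially ...
    }
    // Hint the OS that this file's cached pages are no longer needed;
    // a no-op when the native library is unavailable.
    NativeIO.posixFadviseIfPossible(in.getFD(), 0, f.length(),
        NativeIO.POSIX_FADV_DONTNEED);
    in.close();
  }
}
{code}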

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-8365) Add flag to disable durable sync

2012-07-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-8365.
-

  Resolution: Fixed
Release Note: This patch enables durable sync by default. Installations that 
did not use HBase and used to run without setting {{dfs.support.append}}, or 
with it set to false in the configuration, must set {{dfs.durable.sync}} to 
false to preserve the previous semantics.
Hadoop Flags: Incompatible change,Reviewed  (was: Reviewed)

> Add flag to disable durable sync
> 
>
> Key: HADOOP-8365
> URL: https://issues.apache.org/jira/browse/HADOOP-8365
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Blocker
> Fix For: 1.1.0
>
> Attachments: hadoop-8365.txt, hadoop-8365.txt
>
>
> Per HADOOP-8230 there's a request for a flag to disable the sync code paths 
> that dfs.support.append used to enable. The sync method itself will still be 
> available and have a broken implementation as that was the behavior before 
> HADOOP-8230. This config flag should default to false as the primary 
> motivation for HADOOP-8230 is so HBase works out-of-the-box with Hadoop 1.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8365) Add flag to disable durable sync

2012-07-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8365:


Release Note: This patch enables durable sync by default. Installations that 
did not use HBase and used to run without setting "dfs.support.append", or with 
it set to false explicitly in the configuration, must add the new flag 
"dfs.durable.sync" and set it to false to preserve the previous semantics.  
(was: This patch enables durable sync by default. Installations that did not 
use HBase and used to run without setting {{dfs.support.append}}, or with it 
set to false in the configuration, must set {{dfs.durable.sync}} to false to 
preserve the previous semantics.)
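
In hdfs-site.xml terms, preserving the old semantics would look like this 
(a sketch derived from the release note above):

{code}
<property>
  <name>dfs.durable.sync</name>
  <value>false</value>
</property>
{code}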

> Add flag to disable durable sync
> 
>
> Key: HADOOP-8365
> URL: https://issues.apache.org/jira/browse/HADOOP-8365
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.1.0
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Blocker
> Fix For: 1.1.0
>
> Attachments: hadoop-8365.txt, hadoop-8365.txt
>
>
> Per HADOOP-8230 there's a request for a flag to disable the sync code paths 
> that dfs.support.append used to enable. The sync method itself will still be 
> available and have a broken implementation as that was the behavior before 
> HADOOP-8230. This config flag should default to false as the primary 
> motivation for HADOOP-8230 is so HBase works out-of-the-box with Hadoop 1.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7753) Support fadvise and sync_data_range in NativeIO, add ReadaheadPool class

2012-07-12 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-7753:


Fix Version/s: 1.2.0

I committed this patch to branch-1.

> Support fadvise and sync_data_range in NativeIO, add ReadaheadPool class
> 
>
> Key: HADOOP-7753
> URL: https://issues.apache.org/jira/browse/HADOOP-7753
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io, native, performance
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 1.2.0, 0.23.0
>
> Attachments: HADOOP-7753.branch-1.patch, HADOOP-7753.branch-1.patch, 
> hadoop-7753.txt, hadoop-7753.txt, hadoop-7753.txt, hadoop-7753.txt, 
> hadoop-7753.txt
>
>
> This JIRA adds JNI wrappers for sync_data_range and posix_fadvise. It also 
> implements a ReadaheadPool class for future use from HDFS and MapReduce.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8593) add the missed @Override to methods in Metric/Metric2 package

2012-07-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13414115#comment-13414115
 ] 

Suresh Srinivas commented on HADOOP-8593:
-

+1. I also think some of the code does not follow the coding guidelines. We 
should fix that too.

> add  the missed @Override to methods in Metric/Metric2 package
> --
>
> Key: HADOOP-8593
> URL: https://issues.apache.org/jira/browse/HADOOP-8593
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-8593.patch
>
>
> Adding @Override to the proper methods to take advantage of the compiler 
> checking and make the code more readable. 
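
A quick illustration of the compiler check this buys (the interface and 
method are from the metrics2 API; the class name is made up):

{code}
import org.apache.hadoop.metrics2.MetricsCollector;
import org.apache.hadoop.metrics2.MetricsSource;

public class ExampleSource implements MetricsSource {
  // With @Override, a signature typo becomes a compile error instead
  // of silently introducing a new, never-called method.
  @Override
  public void getMetrics(MetricsCollector collector, boolean all) {
    // ... build and emit metrics records ...
  }
}
{code}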

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8593) add the missed @Override to methods in Metric/Metric2 package

2012-07-13 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8593:


Fix Version/s: 3.0.0
   Status: Patch Available  (was: Open)

> add  the missed @Override to methods in Metric/Metric2 package
> --
>
> Key: HADOOP-8593
> URL: https://issues.apache.org/jira/browse/HADOOP-8593
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-8593.patch
>
>
> Adding @Override to the proper methods to take advantage of the compiler 
> checking and make the code more readable. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8593) add the missed @Override to methods in Metric/Metric2 package

2012-07-16 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8593:


Affects Version/s: 3.0.0
   1.0.0

> add  the missed @Override to methods in Metric/Metric2 package
> --
>
> Key: HADOOP-8593
> URL: https://issues.apache.org/jira/browse/HADOOP-8593
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 1.0.0, 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-8593.patch
>
>
> Adding @Override to the proper methods to take advantage of the compiler 
> checking and make the code more readable. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8593) add the missed @Override to methods in Metric/Metric2 package

2012-07-16 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8593:


  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I committed the patch. Thank you Brandon.

> add  the missed @Override to methods in Metric/Metric2 package
> --
>
> Key: HADOOP-8593
> URL: https://issues.apache.org/jira/browse/HADOOP-8593
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 1.0.0, 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-8593.patch
>
>
> Adding @Override to the proper methods to take advantage of the compiler 
> checking and make the code more readable. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8552) Conflict: Same security.log.file for multiple users.

2012-07-17 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13416598#comment-13416598
 ] 

Suresh Srinivas commented on HADOOP-8552:
-

Alejandro, when committing incompatible changes, could you please add the 
change description to CHANGES.txt under the INCOMPATIBLE CHANGES section? Also, 
could you please add release notes describing what is incompatible here and how 
to work around it?

> Conflict: Same security.log.file for multiple users. 
> -
>
> Key: HADOOP-8552
> URL: https://issues.apache.org/jira/browse/HADOOP-8552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, security
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 1.1.0, 2.0.1-alpha
>
> Attachments: HADOOP-8552_branch1.patch, HADOOP-8552_branch2.patch
>
>
> In log4j.properties, hadoop.security.log.file is set to SecurityAuth.audit. 
> In the presence of multiple users, this can lead to a potential conflict.
> Adding the username to the log file name would avoid this scenario.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8269) Fix some javadoc warnings on branch-1

2012-07-17 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8269:


Priority: Trivial  (was: Major)

> Fix some javadoc warnings on branch-1
> -
>
> Key: HADOOP-8269
> URL: https://issues.apache.org/jira/browse/HADOOP-8269
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Trivial
> Fix For: 1.1.0
>
> Attachments: hadoop-8269.txt
>
>
> There are some javadoc warnings on branch-1, let's fix them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8552) Conflict: Same security.log.file for multiple users.

2012-07-17 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13416620#comment-13416620
 ] 

Suresh Srinivas commented on HADOOP-8552:
-

I also added this change in CHANGES.txt in branch 1.1.

> Conflict: Same security.log.file for multiple users. 
> -
>
> Key: HADOOP-8552
> URL: https://issues.apache.org/jira/browse/HADOOP-8552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, security
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 1.1.0, 2.0.1-alpha
>
> Attachments: HADOOP-8552_branch1.patch, HADOOP-8552_branch2.patch
>
>
> In log4j.properties, hadoop.security.log.file is set to SecurityAuth.audit. 
> In the presence of multiple users, this can lead to a potential conflict.
> Adding the username to the log file name would avoid this scenario.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7753) Support fadvise and sync_data_range in NativeIO, add ReadaheadPool class

2012-07-17 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-7753:


Fix Version/s: (was: 1.2.0)
   1.1.0

> Support fadvise and sync_data_range in NativeIO, add ReadaheadPool class
> 
>
> Key: HADOOP-7753
> URL: https://issues.apache.org/jira/browse/HADOOP-7753
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io, native, performance
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 1.1.0, 0.23.0
>
> Attachments: HADOOP-7753.branch-1.patch, HADOOP-7753.branch-1.patch, 
> hadoop-7753.txt, hadoop-7753.txt, hadoop-7753.txt, hadoop-7753.txt, 
> hadoop-7753.txt
>
>
> This JIRA adds JNI wrappers for sync_data_range and posix_fadvise. It also 
> implements a ReadaheadPool class for future use from HDFS and MapReduce.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-10350) BUILDING.txt should mention openssl dependency required for hadoop-pipes

2014-04-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965610#comment-13965610
 ] 

Suresh Srinivas commented on HADOOP-10350:
--

bq. Will commit this soon.
[~vinayrpet], you need a +1 from a committer to commit the patch. That said, I 
am +1 for this change.

> BUILDING.txt should mention openssl dependency required for hadoop-pipes
> 
>
> Key: HADOOP-10350
> URL: https://issues.apache.org/jira/browse/HADOOP-10350
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HADOOP-10350.patch, HADOOP-10350.patch
>
>
> BUILDING.txt should mention openssl dependency required for hadoop-pipes



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9919) Rewrite hadoop-metrics2.properties

2014-04-21 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975775#comment-13975775
 ] 

Suresh Srinivas commented on HADOOP-9919:
-

+1 for this change. [~ajisakaa], I will commit this shortly.

> Rewrite hadoop-metrics2.properties
> --
>
> Key: HADOOP-9919
> URL: https://issues.apache.org/jira/browse/HADOOP-9919
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.1.0-beta
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HADOOP-9919.2.patch, HADOOP-9919.3.patch, 
> HADOOP-9919.4.patch, HADOOP-9919.patch
>
>
> The config for JobTracker and TaskTracker (commented out) still exists in 
> hadoop-metrics2.properties as follows:
> {code}
> #jobtracker.sink.file_jvm.context=jvm
> #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
> #jobtracker.sink.file_mapred.context=mapred
> #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
> #tasktracker.sink.file.filename=tasktracker-metrics.out
> {code}
> These lines should be removed and a config for NodeManager should be added 
> instead.
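
For instance, the YARN-era equivalents might look like the following (a 
sketch; the committed examples may differ):

{code}
#nodemanager.sink.file.filename=nodemanager-metrics.out
#resourcemanager.sink.file.filename=resourcemanager-metrics.out
{code}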



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9919) Update hadoop-metrics2.properties to Yarn

2014-04-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9919:


Summary: Update hadoop-metrics2.properties to Yarn  (was: Rewrite 
hadoop-metrics2.properties)

> Update hadoop-metrics2.properties to Yarn
> -
>
> Key: HADOOP-9919
> URL: https://issues.apache.org/jira/browse/HADOOP-9919
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.1.0-beta
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HADOOP-9919.2.patch, HADOOP-9919.3.patch, 
> HADOOP-9919.4.patch, HADOOP-9919.patch
>
>
> The config for JobTracker and TaskTracker (commented out) still exists in 
> hadoop-metrics2.properties as follows:
> {code}
> #jobtracker.sink.file_jvm.context=jvm
> #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
> #jobtracker.sink.file_mapred.context=mapred
> #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
> #tasktracker.sink.file.filename=tasktracker-metrics.out
> {code}
> These lines should be removed and a config for NodeManager should be added 
> instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9919) Update hadoop-metrics2.properties examples to Yarn

2014-04-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9919:


Summary: Update hadoop-metrics2.properties examples to Yarn  (was: Update 
hadoop-metrics2.properties to Yarn)

> Update hadoop-metrics2.properties examples to Yarn
> --
>
> Key: HADOOP-9919
> URL: https://issues.apache.org/jira/browse/HADOOP-9919
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.1.0-beta
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HADOOP-9919.2.patch, HADOOP-9919.3.patch, 
> HADOOP-9919.4.patch, HADOOP-9919.patch
>
>
> The config for JobTracker and TaskTracker (commented out) still exists in 
> hadoop-metrics2.properties as follows:
> {code}
> #jobtracker.sink.file_jvm.context=jvm
> #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
> #jobtracker.sink.file_mapred.context=mapred
> #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
> #tasktracker.sink.file.filename=tasktracker-metrics.out
> {code}
> These lines should be removed and a config for NodeManager should be added 
> instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9919) Update hadoop-metrics2.properties examples to Yarn

2014-04-21 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9919:


   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~ajisakaa] for the 
contribution.

> Update hadoop-metrics2.properties examples to Yarn
> --
>
> Key: HADOOP-9919
> URL: https://issues.apache.org/jira/browse/HADOOP-9919
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.1.0-beta
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 2.5.0
>
> Attachments: HADOOP-9919.2.patch, HADOOP-9919.3.patch, 
> HADOOP-9919.4.patch, HADOOP-9919.patch
>
>
> The config for JobTracker and TaskTracker (commented out) still exists in 
> hadoop-metrics2.properties as follows:
> {code}
> #jobtracker.sink.file_jvm.context=jvm
> #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
> #jobtracker.sink.file_mapred.context=mapred
> #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out
> #tasktracker.sink.file.filename=tasktracker-metrics.out
> {code}
> These lines should be removed and a config for NodeManager should be added 
> instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Moved] (HADOOP-10562) Namenode exits on exception without printing stack trace in AbstractDelegationTokenSecretManager

2014-05-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas moved HDFS-6320 to HADOOP-10562:


  Component/s: (was: namenode)
Affects Version/s: (was: 1.2.1)
   (was: 2.0.0-alpha)
   2.0.0-alpha
   1.2.1
  Key: HADOOP-10562  (was: HDFS-6320)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Namenode exits on exception without printing stack trace in 
> AbstractDelegationTokenSecretManager
> 
>
> Key: HADOOP-10562
> URL: https://issues.apache.org/jira/browse/HADOOP-10562
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.2.1, 2.0.0-alpha
>Reporter: Suresh Srinivas
>
> Not printing the stack trace makes debugging harder.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10562) Namenode exits on exception without printing stack trace in AbstractDelegationTokenSecretManager

2014-05-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10562:
-

Priority: Critical  (was: Major)

> Namenode exits on exception without printing stack trace in 
> AbstractDelegationTokenSecretManager
> 
>
> Key: HADOOP-10562
> URL: https://issues.apache.org/jira/browse/HADOOP-10562
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha, 1.2.1
>Reporter: Suresh Srinivas
>Priority: Critical
>
> Not printing the stack trace makes debugging harder.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10562) Namenode exits on exception without printing stack trace in AbstractDelegationTokenSecretManager

2014-05-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10562:
-

Attachment: HADOOP-10562.patch

Patch to print the exception stack trace

> Namenode exits on exception without printing stack trace in 
> AbstractDelegationTokenSecretManager
> 
>
> Key: HADOOP-10562
> URL: https://issues.apache.org/jira/browse/HADOOP-10562
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha, 1.2.1
>Reporter: Suresh Srinivas
>Priority: Critical
> Attachments: HADOOP-10562.patch
>
>
> Not printing the stack trace makes debugging harder.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10562) Namenode exits on exception without printing stack trace in AbstractDelegationTokenSecretManager

2014-05-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10562:
-

Status: Patch Available  (was: Open)

> Namenode exits on exception without printing stack trace in 
> AbstractDelegationTokenSecretManager
> 
>
> Key: HADOOP-10562
> URL: https://issues.apache.org/jira/browse/HADOOP-10562
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.2.1, 2.0.0-alpha
>Reporter: Suresh Srinivas
>Priority: Critical
> Attachments: HADOOP-10562.patch
>
>
> Not printing the stack trace makes debugging harder.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10562) Namenode exits on exception without printing stack trace in AbstractDelegationTokenSecretManager

2014-05-01 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-10562:
-

Attachment: HADOOP-10562.1.patch

Slightly updated patch with two additions:
# Print the current number of tokens (we saw the namenode run out of memory 
while creating an array in this part of the code; this will help debug that)
# Some code cleanup

> Namenode exits on exception without printing stack trace in 
> AbstractDelegationTokenSecretManager
> 
>
> Key: HADOOP-10562
> URL: https://issues.apache.org/jira/browse/HADOOP-10562
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha, 1.2.1
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Critical
> Attachments: HADOOP-10562.1.patch, HADOOP-10562.branch-1.patch, 
> HADOOP-10562.patch
>
>
> Not printing the stack trace makes debugging harder.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2014-05-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13988257#comment-13988257
 ] 

Suresh Srinivas commented on HADOOP-10571:
--

There are some places where we specifically avoid printing the stack trace. 
When making this change, we need to be careful to keep the exception printing 
terse where necessary.

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>
> When logging an exception, we often convert the exception to string or call 
> {{.getMessage}}. Instead we can use the log method overloads which take 
> {{Throwable}} as a parameter.
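
A minimal before/after sketch (commons-logging; the message text and class 
are made up):

{code}
import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class LoggingExample {
  private static final Log LOG = LogFactory.getLog(LoggingExample.class);

  void renew() {
    try {
      throw new IOException("token expired");  // stand-in for real work
    } catch (IOException e) {
      // Before: flattens the exception, losing the stack trace.
      LOG.warn("Failed to renew token: " + e.getMessage());
      // After: the (Object, Throwable) overload logs the full trace.
      LOG.warn("Failed to renew token", e);
    }
  }
}
{code}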



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2014-05-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13988280#comment-13988280
 ] 

Suresh Srinivas commented on HADOOP-10571:
--

This might be an opportunity to add a comment that says the exception is kept 
terse in the log message by design, to avoid someone changing it to a verbose 
stack trace.

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-10571.01.patch
>
>
> When logging an exception, we often convert the exception to string or call 
> {{.getMessage}}. Instead we can use the log method overloads which take 
> {{Throwable}} as a parameter.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2014-05-05 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990036#comment-13990036
 ] 

Suresh Srinivas commented on HADOOP-10571:
--

I agree with [~arpitagarwal]. Let's decouple the current set of improvements 
from SLF4J.

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-10571.01.patch
>
>
> When logging an exception, we often convert the exception to string or call 
> {{.getMessage}}. Instead we can use the log method overloads which take 
> {{Throwable}} as a parameter.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

