[jira] [Created] (HDFS-8610) NN calculates capacity wrongly when several dirs in dfs.datanode.data.dir belong to one disk

2015-06-16 Thread tongshiquan (JIRA)
tongshiquan created HDFS-8610:
-

 Summary: NN calculates capacity wrongly when several dirs in 
dfs.datanode.data.dir belong to one disk
 Key: HDFS-8610
 URL: https://issues.apache.org/jira/browse/HDFS-8610
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Assignee: Ajith S
Priority: Minor


On my machine, the disk info is as below:
/dev/sdc1   8.1T  2.0T  5.7T  27% /export2
/dev/sdd1   8.1T  2.0T  5.7T  27% /export3
/dev/sde1   8.1T  2.8T  5.0T  36% /export4

dfs.datanode.data.dir is then set as below, with the same set of dirs on each disk:
/export2/BigData/hadoop/data/dn,/export2/BigData/hadoop/data/dn1,/export2/BigData/hadoop/data/dn2,/export2/BigData/hadoop/data/dn3,/export2/BigData/hadoop/data/dn4,/export2/BigData/hadoop/data/dn5,/export2/BigData/hadoop/data/dn6,/export2/BigData/hadoop/data/dn7,/export2/BigData/hadoop/data/dn8,/export2/BigData/hadoop/data/dn9,/export2/BigData/hadoop/data/dn10,
/export3/BigData/hadoop/data/dn,/export3/BigData/hadoop/data/dn1,/export3/BigData/hadoop/data/dn2,/export3/BigData/hadoop/data/dn3,/export3/BigData/hadoop/data/dn4,/export3/BigData/hadoop/data/dn5,/export3/BigData/hadoop/data/dn6,/export3/BigData/hadoop/data/dn7,/export3/BigData/hadoop/data/dn8,/export3/BigData/hadoop/data/dn9,/export3/BigData/hadoop/data/dn10,
/export4/BigData/hadoop/data/dn,/export4/BigData/hadoop/data/dn1,/export4/BigData/hadoop/data/dn2,/export4/BigData/hadoop/data/dn3,/export4/BigData/hadoop/data/dn4,/export4/BigData/hadoop/data/dn5,/export4/BigData/hadoop/data/dn6,/export4/BigData/hadoop/data/dn7,/export4/BigData/hadoop/data/dn8,/export4/BigData/hadoop/data/dn9,/export4/BigData/hadoop/data/dn10

then the NN will think this DN has 8.1T * 30 = 243 TB of capacity, but it 
actually has only 24.3 TB
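The double counting can be sketched in a few lines of Python (an illustrative model, not DataNode code; the disk size and the 30-dir figure are taken from the report, and a real fix could key on something like os.stat(dir).st_dev to identify the underlying device):

```python
# Toy model of DataNode capacity reporting: each configured data dir sits on
# some physical device, and summing per-directory capacity multiply-counts a
# disk that hosts several dirs. (Illustrative: 3 disks of 8100 GB, 10 dirs
# per disk, matching the 8.1T * 30 arithmetic in the report.)
DISK_GB = 8100

# (directory, device-id, device-capacity-GB) for 3 disks x 10 dirs
volumes = [(f"/export{disk}/BigData/hadoop/data/dn{i}", disk, DISK_GB)
           for disk in (2, 3, 4) for i in range(10)]

# Buggy accounting: every dir contributes the full capacity of its disk.
naive_capacity = sum(cap for _, _, cap in volumes)

# Correct accounting: count each underlying device only once.
per_device = {dev: cap for _, dev, cap in volumes}
dedup_capacity = sum(per_device.values())

print(naive_capacity)  # 243000 GB reported (243 TB)
print(dedup_capacity)  # 24300 GB actual (24.3 TB)
```

The model makes the 10x inflation explicit: every extra dir on a disk adds the whole disk's capacity again.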



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8581) count cmd gives wrong results when a huge number of files exist in one folder

2015-06-11 Thread tongshiquan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tongshiquan updated HDFS-8581:
--
Description: 
If one directory such as /result contains about 20 files, then when executing 
hdfs dfs -count /, the result goes wrong: for all directories whose names sort 
after /result, the file counts are not included.

My cluster is shown in the snapshot: /result_1433858936 is the directory that 
contains the huge number of files, and the files in /sparkJobHistory, /tmp, 
and /user are not included

  was:
If one directory such as /result contains about 20 files, then when executing 
hdfs dfs -count /, the result goes wrong: for all directories whose names sort 
after /result, the file counts are not included.

Here is my cluster:


 count cmd gives wrong results when a huge number of files exist in one folder
 -

 Key: HDFS-8581
 URL: https://issues.apache.org/jira/browse/HDFS-8581
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: tongshiquan
Assignee: J.Andreina
Priority: Minor

 If one directory such as /result contains about 20 files, then when executing 
 hdfs dfs -count /, the result goes wrong: for all directories whose names 
 sort after /result, the file counts are not included.
 My cluster is shown in the snapshot: /result_1433858936 is the directory that 
 contains the huge number of files, and the files in /sparkJobHistory, /tmp, 
 and /user are not included





[jira] [Updated] (HDFS-8581) count cmd gives wrong results when a huge number of files exist in one folder

2015-06-11 Thread tongshiquan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tongshiquan updated HDFS-8581:
--
Description: 
If one directory such as /result contains about 20 files, then when executing 
hdfs dfs -count /, the result goes wrong: for all directories whose names sort 
after /result, the file counts are not included.

Here is my cluster:

  was:If one directory such as /result contains about 20 files, then when 
executing hdfs dfs -count /, the result goes wrong: for all directories whose 
names sort after /result, the file counts are not included.


 count cmd gives wrong results when a huge number of files exist in one folder
 -

 Key: HDFS-8581
 URL: https://issues.apache.org/jira/browse/HDFS-8581
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: tongshiquan
Assignee: J.Andreina
Priority: Minor

 If one directory such as /result contains about 20 files, then when executing 
 hdfs dfs -count /, the result goes wrong: for all directories whose names 
 sort after /result, the file counts are not included.
 Here is my cluster:





[jira] [Created] (HDFS-8581) count cmd gives wrong results when a huge number of files exist in one folder

2015-06-11 Thread tongshiquan (JIRA)
tongshiquan created HDFS-8581:
-

 Summary: count cmd gives wrong results when a huge number of files exist 
in one folder
 Key: HDFS-8581
 URL: https://issues.apache.org/jira/browse/HDFS-8581
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: tongshiquan
Assignee: J.Andreina
Priority: Minor


If one directory such as /result contains about 20 files, then when executing 
hdfs dfs -count /, the result goes wrong: for all directories whose names sort 
after /result, the file counts are not included.





[jira] [Updated] (HDFS-8581) count cmd gives wrong results when a huge number of files exist in one folder

2015-06-11 Thread tongshiquan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tongshiquan updated HDFS-8581:
--
Description: 
If one directory such as /result contains about 20 files, then when executing 
hdfs dfs -count /, the result goes wrong: for all directories whose names sort 
after /result, the file counts are not included.

My cluster is shown below: /result_1433858936 is the directory that contains 
the huge number of files, and the files in /sparkJobHistory, /tmp, and /user 
are not included

vm-221:/export1/BigData/current # hdfs dfs -ls /
15/06/11 11:00:17 INFO hdfs.PeerCache: SocketCache disabled.
Found 9 items
-rw-r--r--   3 hdfs   supergroup  0 2015-06-08 12:10 
/PRE_CREATE_DIR.SUCCESS
drwxr-x---   - flume  hadoop  0 2015-06-08 12:08 /flume
drwx------   - hbase  hadoop  0 2015-06-10 15:25 /hbase
drwxr-xr-x   - hdfs   supergroup  0 2015-06-10 17:19 /hyt
drwxrwxrwx   - mapred hadoop  0 2015-06-08 12:08 /mr-history
drwxr-xr-x   - hdfs   supergroup  0 2015-06-09 22:10 /result_1433858936
drwxrwxrwx   - spark  supergroup  0 2015-06-10 19:15 /sparkJobHistory
drwxrwxrwx   - hdfs   hadoop  0 2015-06-08 12:14 /tmp
drwxrwxrwx   - hdfs   hadoop  0 2015-06-09 21:57 /user
vm-221:/export1/BigData/current # 
vm-221:/export1/BigData/current # hdfs dfs -count /
15/06/11 11:00:24 INFO hdfs.PeerCache: SocketCache disabled.
1043   171536 1756375688 /
vm-221:/export1/BigData/current # 
vm-221:/export1/BigData/current # hdfs dfs -count /PRE_CREATE_DIR.SUCCESS
15/06/11 11:00:30 INFO hdfs.PeerCache: SocketCache disabled.
        0        1          0 /PRE_CREATE_DIR.SUCCESS
vm-221:/export1/BigData/current # 
vm-221:/export1/BigData/current # hdfs dfs -count /flume
15/06/11 11:00:41 INFO hdfs.PeerCache: SocketCache disabled.
        1        0          0 /flume
vm-221:/export1/BigData/current # 
vm-221:/export1/BigData/current # hdfs dfs -count /hbase
15/06/11 11:00:49 INFO hdfs.PeerCache: SocketCache disabled.
  36   18  14807 /hbase
vm-221:/export1/BigData/current # 
vm-221:/export1/BigData/current # hdfs dfs -count /hyt
15/06/11 11:01:09 INFO hdfs.PeerCache: SocketCache disabled.
        1        0          0 /hyt
vm-221:/export1/BigData/current # 
vm-221:/export1/BigData/current # hdfs dfs -count /mr-history
15/06/11 11:01:18 INFO hdfs.PeerCache: SocketCache disabled.
        3        0          0 /mr-history
vm-221:/export1/BigData/current # 
vm-221:/export1/BigData/current # hdfs dfs -count /result_1433858936
15/06/11 11:01:29 INFO hdfs.PeerCache: SocketCache disabled.
1001   171517 1756360881 /result_1433858936
vm-221:/export1/BigData/current # 
vm-221:/export1/BigData/current # hdfs dfs -count /sparkJobHistory
15/06/11 11:01:41 INFO hdfs.PeerCache: SocketCache disabled.
        1        3      21785 /sparkJobHistory
vm-221:/export1/BigData/current # 
vm-221:/export1/BigData/current # hdfs dfs -count /tmp
15/06/11 11:01:48 INFO hdfs.PeerCache: SocketCache disabled.
       17        6      35958 /tmp
vm-221:/export1/BigData/current # 
vm-221:/export1/BigData/current # hdfs dfs -count /user
15/06/11 11:01:55 INFO hdfs.PeerCache: SocketCache disabled.
       12        1      19077 /user
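The pasted numbers bear the pattern out; a quick Python check over the counts above (a throwaway script, assuming each count line reads as directories, files, bytes, with some collapsed columns split accordingly) shows the root totals miss exactly the three directories sorting after /result_1433858936:

```python
# Per-directory results of `hdfs dfs -count`, copied from the transcript:
# path -> (dirs, files, bytes)
children = {
    "/PRE_CREATE_DIR.SUCCESS": (0, 1, 0),
    "/flume":                  (1, 0, 0),
    "/hbase":                  (36, 18, 14807),
    "/hyt":                    (1, 0, 0),
    "/mr-history":             (3, 0, 0),
    "/result_1433858936":      (1001, 171517, 1756360881),
    "/sparkJobHistory":        (1, 3, 21785),
    "/tmp":                    (17, 6, 35958),
    "/user":                   (12, 1, 19077),
}
root_reported = (1043, 171536, 1756375688)  # `hdfs dfs -count /` output

# What the root count should be: the sum over children, plus 1 dir for /.
expected = [sum(c[i] for c in children.values()) for i in range(3)]
expected[0] += 1  # the root directory itself

# The shortfall is exactly /sparkJobHistory + /tmp + /user.
missing = [e - r for e, r in zip(expected, root_reported)]
print(missing)  # [30, 10, 76820]
```

The missing 30 dirs, 10 files, and 76820 bytes are precisely the totals of the three directories whose names sort after /result_1433858936, consistent with the bug description.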

  was:
If one directory such as /result contains about 20 files, then when executing 
hdfs dfs -count /, the result goes wrong: for all directories whose names sort 
after /result, the file counts are not included.

My cluster is shown in the snapshot: /result_1433858936 is the directory that 
contains the huge number of files, and the files in /sparkJobHistory, /tmp, 
and /user are not included


 count cmd gives wrong results when a huge number of files exist in one folder
 -

 Key: HDFS-8581
 URL: https://issues.apache.org/jira/browse/HDFS-8581
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: tongshiquan
Assignee: J.Andreina
Priority: Minor

 If one directory such as /result contains about 20 files, then when executing 
 hdfs dfs -count /, the result goes wrong: for all directories whose names 
 sort after /result, the file counts are not included.
 My cluster is shown below: /result_1433858936 is the directory that contains 
 the huge number of files, and the files in /sparkJobHistory, /tmp, and /user 
 are not included
 vm-221:/export1/BigData/current # hdfs dfs -ls /
 15/06/11 11:00:17 INFO hdfs.PeerCache: SocketCache disabled.
 Found 9 items
 -rw-r--r--   3 hdfs   supergroup  0 2015-06-08 12:10 
 /PRE_CREATE_DIR.SUCCESS
 drwxr-x---   - flume  hadoop  0 2015-06-08 12:08 /flume
 drwx------   - hbase  hadoop  0 2015-06-10 15:25 /hbase
 drwxr-xr-x   - hdfs   supergroup  0 2015-06-10 17:19 /hyt
 drwxrwxrwx   - 

[jira] [Updated] (HDFS-8525) API getUsed() returns the file length only from root /

2015-06-03 Thread tongshiquan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tongshiquan updated HDFS-8525:
--
Assignee: J.Andreina

 API getUsed() returns the file length only from root / 
 

 Key: HDFS-8525
 URL: https://issues.apache.org/jira/browse/HDFS-8525
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Assignee: J.Andreina
Priority: Minor

 getUsed() should return the total HDFS space used, like getStatus().getUsed() 
 does, rather than only the total file length under root /
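As a rough illustration of the two notions of "used" space (a hypothetical toy model, not Hadoop code; the file sizes and replication factor here are made up), summing plain file lengths under / ignores replication, which is one way it diverges from the raw used figure that getStatus().getUsed() reflects:

```python
# Toy model: each file is (length_in_bytes, replication_factor).
files = [
    (1 * 1024**3, 3),  # 1 GiB file, 3 replicas
    (2 * 1024**3, 3),  # 2 GiB file, 3 replicas
]

# What summing file lengths from / reports: replication is ignored.
length_only = sum(length for length, _ in files)

# Raw bytes actually consumed on the DataNodes: every replica counts.
raw_used = sum(length * reps for length, reps in files)

print(length_only // 1024**3)  # 3 (GiB)
print(raw_used // 1024**3)     # 9 (GiB)
```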





[jira] [Created] (HDFS-8525) API getUsed() returns the file length only from root /

2015-06-03 Thread tongshiquan (JIRA)
tongshiquan created HDFS-8525:
-

 Summary: API getUsed() returns the file length only from root / 
 Key: HDFS-8525
 URL: https://issues.apache.org/jira/browse/HDFS-8525
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Priority: Minor


getUsed() should return the total HDFS space used, like getStatus().getUsed() 
does, rather than only the total file length under root /





[jira] [Updated] (HDFS-8476) quota can't limit files put before the storage policy is set

2015-06-03 Thread tongshiquan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tongshiquan updated HDFS-8476:
--
Assignee: (was: kanaka kumar avvaru)

 quota can't limit files put before the storage policy is set
 --

 Key: HDFS-8476
 URL: https://issues.apache.org/jira/browse/HDFS-8476
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Priority: Minor
  Labels: QBST
 Attachments: screenshot-1.png


 test steps:
 1. hdfs dfs -mkdir /HOT
 2. hdfs dfs -put 1G.txt /HOT/file1
 3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
 4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
 5. hdfs dfs -put 1G.txt /HOT/file2
 6. hdfs dfs -put 1G.txt /HOT/file3
 7. hdfs dfs -count -q -h -v -t DISK /HOT
 In step 6 the put should fail, because /HOT/file1 and /HOT/file2 have already 
 reached the /HOT space quota of 6 GB (1G * 3 replicas + 1G * 3 replicas), but 
 it succeeds, and in step 7 count shows a remaining quota of -3 GB.
 FYI, if the order of step 3 and step 4 is swapped, everything works normally.





[jira] [Updated] (HDFS-8476) quota can't limit files put before the storage policy is set

2015-06-03 Thread tongshiquan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tongshiquan updated HDFS-8476:
--
Assignee: kanaka kumar avvaru

 quota can't limit files put before the storage policy is set
 --

 Key: HDFS-8476
 URL: https://issues.apache.org/jira/browse/HDFS-8476
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: QBST
 Attachments: screenshot-1.png


 test steps:
 1. hdfs dfs -mkdir /HOT
 2. hdfs dfs -put 1G.txt /HOT/file1
 3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
 4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
 5. hdfs dfs -put 1G.txt /HOT/file2
 6. hdfs dfs -put 1G.txt /HOT/file3
 7. hdfs dfs -count -q -h -v -t DISK /HOT
 In step 6 the put should fail, because /HOT/file1 and /HOT/file2 have already 
 reached the /HOT space quota of 6 GB (1G * 3 replicas + 1G * 3 replicas), but 
 it succeeds, and in step 7 count shows a remaining quota of -3 GB.
 FYI, if the order of step 3 and step 4 is swapped, everything works normally.





[jira] [Commented] (HDFS-8476) quota can't limit files put before the storage policy is set

2015-06-03 Thread tongshiquan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571942#comment-14571942
 ] 

tongshiquan commented on HDFS-8476:
---

Removed kanaka kumar avvaru by mistake; assigning again.

 quota can't limit files put before the storage policy is set
 --

 Key: HDFS-8476
 URL: https://issues.apache.org/jira/browse/HDFS-8476
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: QBST
 Attachments: screenshot-1.png


 test steps:
 1. hdfs dfs -mkdir /HOT
 2. hdfs dfs -put 1G.txt /HOT/file1
 3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
 4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
 5. hdfs dfs -put 1G.txt /HOT/file2
 6. hdfs dfs -put 1G.txt /HOT/file3
 7. hdfs dfs -count -q -h -v -t DISK /HOT
 In step 6 the put should fail, because /HOT/file1 and /HOT/file2 have already 
 reached the /HOT space quota of 6 GB (1G * 3 replicas + 1G * 3 replicas), but 
 it succeeds, and in step 7 count shows a remaining quota of -3 GB.
 FYI, if the order of step 3 and step 4 is swapped, everything works normally.





[jira] [Updated] (HDFS-8476) quota can't limit files put before the storage policy is set

2015-05-28 Thread tongshiquan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tongshiquan updated HDFS-8476:
--
Attachment: screenshot-1.png

 quota can't limit files put before the storage policy is set
 --

 Key: HDFS-8476
 URL: https://issues.apache.org/jira/browse/HDFS-8476
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: QBST
 Attachments: screenshot-1.png


 test steps:
 1. hdfs dfs -mkdir /HOT
 2. hdfs dfs -put 1G.txt /HOT/file1
 3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
 4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
 5. hdfs dfs -put 1G.txt /HOT/file2
 6. hdfs dfs -put 1G.txt /HOT/file3
 7. hdfs dfs -count -q -h -v -t DISK /HOT
 In step 6 the put should fail, because /HOT/file1 and /HOT/file2 have already 
 reached the /HOT space quota of 6 GB (1G * 3 replicas + 1G * 3 replicas), but 
 it succeeds, and in step 7 count shows a remaining quota of -3 GB.
 FYI, if the order of step 3 and step 4 is swapped, everything works normally.





[jira] [Commented] (HDFS-8476) quota can't limit files put before the storage policy is set

2015-05-28 Thread tongshiquan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562441#comment-14562441
 ] 

tongshiquan commented on HDFS-8476:
---

Xiaoyu Yao, my cluster has 3 nodes, with 2 NNs and 3 DNs, in HA mode. Each 
file has 3 replicas; maybe that is one of the reasons.

I have added a screenshot.

 quota can't limit files put before the storage policy is set
 --

 Key: HDFS-8476
 URL: https://issues.apache.org/jira/browse/HDFS-8476
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Assignee: kanaka kumar avvaru
Priority: Minor
  Labels: QBST
 Attachments: screenshot-1.png


 test steps:
 1. hdfs dfs -mkdir /HOT
 2. hdfs dfs -put 1G.txt /HOT/file1
 3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
 4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
 5. hdfs dfs -put 1G.txt /HOT/file2
 6. hdfs dfs -put 1G.txt /HOT/file3
 7. hdfs dfs -count -q -h -v -t DISK /HOT
 In step 6 the put should fail, because /HOT/file1 and /HOT/file2 have already 
 reached the /HOT space quota of 6 GB (1G * 3 replicas + 1G * 3 replicas), but 
 it succeeds, and in step 7 count shows a remaining quota of -3 GB.
 FYI, if the order of step 3 and step 4 is swapped, everything works normally.





[jira] [Created] (HDFS-8476) quota can't limit files put before the storage policy is set

2015-05-26 Thread tongshiquan (JIRA)
tongshiquan created HDFS-8476:
-

 Summary: quota can't limit files put before the storage policy is set
 Key: HDFS-8476
 URL: https://issues.apache.org/jira/browse/HDFS-8476
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Priority: Minor


1. hdfs dfs -mkdir /HOT
2. hdfs dfs -put 1G.txt /HOT/file1
3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
5. hdfs dfs -put 1G.txt /HOT/file2
6. hdfs dfs -put 1G.txt /HOT/file3
7. hdfs dfs -count -q -h -v -t DISK /HOT






[jira] [Updated] (HDFS-8476) quota can't limit files put before the storage policy is set

2015-05-26 Thread tongshiquan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tongshiquan updated HDFS-8476:
--
Description: 
test steps:
1. hdfs dfs -mkdir /HOT
2. hdfs dfs -put 1G.txt /HOT/file1
3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
5. hdfs dfs -put 1G.txt /HOT/file2
6. hdfs dfs -put 1G.txt /HOT/file3
7. hdfs dfs -count -q -h -v -t DISK /HOT

In step 6 the put should fail, because /HOT/file1 and /HOT/file2 have already 
reached the /HOT space quota of 6 GB (1G * 3 replicas + 1G * 3 replicas), but 
it succeeds, and in step 7 count shows a remaining quota of -3 GB.

FYI, if the order of step 3 and step 4 is swapped, everything works normally.
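The reported behaviour can be modeled in a few lines of Python (a hypothetical sketch of the quota accounting, not HDFS code; sizes in GiB, replication 3): if the put-time check only charges files written after the policy was set, file1 escapes the check, the third put slips through, and the later count sees all three files:

```python
GB = 1024**3
REPLICATION = 3
QUOTA = 6442450944  # the 6 GiB DISK space quota set on /HOT


def run_puts(policy_set_before_file1):
    """Put three 1 GiB files; return (accepted puts, final remaining quota).

    Modeled bug: a file is charged against the storage-type quota at put time
    only if the storage policy was already set when it was written, while
    `count -q` afterwards charges every file.
    """
    charged = 0   # usage seen by the quota check at put time
    accepted = 0
    for name in ("file1", "file2", "file3"):
        cost = 1 * GB * REPLICATION
        counted = policy_set_before_file1 or name != "file1"
        if counted and charged + cost > QUOTA:
            break  # put rejected: it would exceed the quota
        accepted += 1
        if counted:
            charged += cost
    # `hdfs dfs -count -q` afterwards sees all accepted files.
    remaining = QUOTA - accepted * 1 * GB * REPLICATION
    return accepted, remaining


print(run_puts(policy_set_before_file1=False))  # (3, -3221225472): -3 GiB left
print(run_puts(policy_set_before_file1=True))   # (2, 0): third put rejected
```

Swapping the order (policy set before file1, as in swapping steps 3 and 4) makes file1 count at put time, so the third put is rejected and the remaining quota never goes negative, matching the "FYI" note above.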


  was:
1. hdfs dfs -mkdir /HOT
2. hdfs dfs -put 1G.txt /HOT/file1
3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
5. hdfs dfs -put 1G.txt /HOT/file2
6. hdfs dfs -put 1G.txt /HOT/file3
7. hdfs dfs -count -q -h -v -t DISK /HOT



 quota can't limit files put before the storage policy is set
 --

 Key: HDFS-8476
 URL: https://issues.apache.org/jira/browse/HDFS-8476
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Priority: Minor

 test steps:
 1. hdfs dfs -mkdir /HOT
 2. hdfs dfs -put 1G.txt /HOT/file1
 3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
 4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
 5. hdfs dfs -put 1G.txt /HOT/file2
 6. hdfs dfs -put 1G.txt /HOT/file3
 7. hdfs dfs -count -q -h -v -t DISK /HOT
 In step 6 the put should fail, because /HOT/file1 and /HOT/file2 have already 
 reached the /HOT space quota of 6 GB (1G * 3 replicas + 1G * 3 replicas), but 
 it succeeds, and in step 7 count shows a remaining quota of -3 GB.
 FYI, if the order of step 3 and step 4 is swapped, everything works normally.





[jira] [Updated] (HDFS-8470) fsimage loading progress always shows 0

2015-05-24 Thread tongshiquan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tongshiquan updated HDFS-8470:
--
Attachment: screenshot-1.png

 fsimage loading progress always shows 0
 --

 Key: HDFS-8470
 URL: https://issues.apache.org/jira/browse/HDFS-8470
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Priority: Minor
 Attachments: screenshot-1.png








[jira] [Created] (HDFS-8470) fsimage loading progress always shows 0

2015-05-24 Thread tongshiquan (JIRA)
tongshiquan created HDFS-8470:
-

 Summary: fsimage loading progress always shows 0
 Key: HDFS-8470
 URL: https://issues.apache.org/jira/browse/HDFS-8470
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: tongshiquan
Priority: Minor








[jira] [Created] (HDFS-8464) hdfs namenode UI shows Max Non Heap Memory as -1 B

2015-05-22 Thread tongshiquan (JIRA)
tongshiquan created HDFS-8464:
-

 Summary: hdfs namenode UI shows Max Non Heap Memory as -1 B
 Key: HDFS-8464
 URL: https://issues.apache.org/jira/browse/HDFS-8464
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
 Environment: suse11.3
Reporter: tongshiquan
Priority: Minor








[jira] [Updated] (HDFS-8464) hdfs namenode UI shows Max Non Heap Memory as -1 B

2015-05-22 Thread tongshiquan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tongshiquan updated HDFS-8464:
--
Attachment: screenshot-1.png

 hdfs namenode UI shows Max Non Heap Memory as -1 B
 

 Key: HDFS-8464
 URL: https://issues.apache.org/jira/browse/HDFS-8464
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
 Environment: suse11.3
Reporter: tongshiquan
Priority: Minor
 Attachments: screenshot-1.png





