[jira] [Updated] (HDFS-119) logSync() may block NameNode forever.

2012-04-13 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-119:
-

  Component/s: name-node
Fix Version/s: 1.1.0

> logSync() may block NameNode forever.
> -
>
> Key: HDFS-119
> URL: https://issues.apache.org/jira/browse/HDFS-119
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Reporter: Konstantin Shvachko
>Assignee: Suresh Srinivas
> Fix For: 0.21.0, 1.1.0
>
> Attachments: HDFS-119-branch-1.0.patch, HDFS-119-branch-1.0.patch, 
> HDFS-119.patch, HDFS-119.patch
>
>
> # {{FSEditLog.logSync()}} first waits until {{isSyncRunning}} is false and 
> then performs syncing to file streams by calling 
> {{EditLogOutputStream.flush()}}.
> If an exception is thrown after {{isSyncRunning}} is set to {{true}}, all 
> threads will wait on this condition forever.
> An {{IOException}} may be thrown by {{EditLogOutputStream.setReadyToFlush()}}, 
> or a {{RuntimeException}} may be thrown by {{EditLogOutputStream.flush()}} or 
> by {{processIOError()}}.
> # The loop that calls {{eStream.flush()}} for multiple 
> {{EditLogOutputStream}}-s is not synchronized, which means that another 
> thread may encounter an error and modify {{editStreams}} by, say, calling 
> {{processIOError()}}. Then the iteration in {{logSync()}} will fail 
> with an {{IndexOutOfBoundsException}}.
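For illustration only, here is a minimal sketch (simplified types, not the actual FSEditLog code) of a pattern that avoids both failure modes: reset {{isSyncRunning}} in a finally block so a failed flush cannot strand the waiting threads, and iterate over a snapshot of {{editStreams}} so a concurrent {{processIOError()}} cannot invalidate the loop.

{code:java}
// Minimal sketch, not the actual FSEditLog: shows how to keep an exception
// thrown during syncing from leaving isSyncRunning stuck at true, and how to
// avoid iterating a list that another thread may modify.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

class EditLogSketch {

  interface StreamSketch {
    void setReadyToFlush() throws IOException;
    void flush() throws IOException;
  }

  private final List<StreamSketch> editStreams = new ArrayList<>();
  private boolean isSyncRunning = false;

  void logSync() throws IOException {
    List<StreamSketch> toFlush;
    synchronized (this) {
      while (isSyncRunning) {
        try {
          wait();                        // other callers park here
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          return;
        }
      }
      isSyncRunning = true;
      // snapshot the list so a concurrent processIOError() cannot change
      // what we iterate over (the IndexOutOfBoundsException case above)
      toFlush = new ArrayList<>(editStreams);
    }
    try {
      for (StreamSketch s : toFlush) {
        s.setReadyToFlush();             // may throw an IOException
        s.flush();                       // may throw a RuntimeException
      }
    } finally {
      synchronized (this) {
        isSyncRunning = false;           // always reset, even on failure
        notifyAll();                     // wake the blocked waiters
      }
    }
  }
}
{code}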

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-119) logSync() may block NameNode forever.

2012-04-13 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-119:
-

Attachment: HDFS-119-branch-1.0.patch

Updated patch to reflect current branch state. No code changes, just line 
numbers.
Plus the code style change (indentation), suggested by Brandon.

> logSync() may block NameNode forever.
> -
>
> Key: HDFS-119
> URL: https://issues.apache.org/jira/browse/HDFS-119
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Konstantin Shvachko
>Assignee: Suresh Srinivas
> Fix For: 0.21.0
>
> Attachments: HDFS-119-branch-1.0.patch, HDFS-119-branch-1.0.patch, 
> HDFS-119.patch, HDFS-119.patch
>
>
> # {{FSEditLog.logSync()}} first waits until {{isSyncRunning}} is false and 
> then performs syncing to file streams by calling 
> {{EditLogOutputStream.flush()}}.
> If an exception is thrown after {{isSyncRunning}} is set to {{true}}, all 
> threads will wait on this condition forever.
> An {{IOException}} may be thrown by {{EditLogOutputStream.setReadyToFlush()}}, 
> or a {{RuntimeException}} may be thrown by {{EditLogOutputStream.flush()}} or 
> by {{processIOError()}}.
> # The loop that calls {{eStream.flush()}} for multiple 
> {{EditLogOutputStream}}-s is not synchronized, which means that another 
> thread may encounter an error and modify {{editStreams}} by, say, calling 
> {{processIOError()}}. Then the iteration in {{logSync()}} will fail 
> with an {{IndexOutOfBoundsException}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-119) logSync() may block NameNode forever.

2012-04-06 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-119:
-

Attachment: HDFS-119-branch-1.0.patch

Here is the patch for branch-1.0

> logSync() may block NameNode forever.
> -
>
> Key: HDFS-119
> URL: https://issues.apache.org/jira/browse/HDFS-119
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Konstantin Shvachko
>Assignee: Suresh Srinivas
> Fix For: 0.21.0
>
> Attachments: HDFS-119-branch-1.0.patch, HDFS-119.patch, HDFS-119.patch
>
>
> # {{FSEditLog.logSync()}} first waits until {{isSyncRunning}} is false and 
> then performs syncing to file streams by calling 
> {{EditLogOutputStream.flush()}}.
> If an exception is thrown after {{isSyncRunning}} is set to {{true}}, all 
> threads will wait on this condition forever.
> An {{IOException}} may be thrown by {{EditLogOutputStream.setReadyToFlush()}}, 
> or a {{RuntimeException}} may be thrown by {{EditLogOutputStream.flush()}} or 
> by {{processIOError()}}.
> # The loop that calls {{eStream.flush()}} for multiple 
> {{EditLogOutputStream}}-s is not synchronized, which means that another 
> thread may encounter an error and modify {{editStreams}} by, say, calling 
> {{processIOError()}}. Then the iteration in {{logSync()}} will fail 
> with an {{IndexOutOfBoundsException}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2991) failure to load edits: ClassCastException

2012-04-01 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2991:
--

Target Version/s: 0.23.1, 0.24.0, 0.22.1  (was: 0.22.1, 0.23.1, 0.24.0)
   Fix Version/s: (was: 0.23.2)
  0.22.1

Committed to branch 0.22.1. It is highly recommended to run -saveNamespace if 
append was used on the cluster running 0.22.1 prior to this patch.

> failure to load edits: ClassCastException
> -
>
> Key: HDFS-2991
> URL: https://issues.apache.org/jira/browse/HDFS-2991
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.24.0, 0.23.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.22.1
>
> Attachments: hdfs-2991-0.22.txt, hdfs-2991.txt, hdfs-2991.txt, 
> image-with-buggy-append.tgz
>
>
> In doing scale testing of trunk at r1291606, I hit the following:
> java.io.IOException: Error replaying edit log at offset 1354251
> Recent opcode offsets: 1350014 1350176 1350312 1354251
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:418)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:93)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:79)
> ...
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.INodeFile cannot be cast to 
> org.apache.hadoop.hdfs.server.namenode.INodeFileUnderConstruction
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:213)
> ... 13 more

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2991) failure to load edits: ClassCastException

2012-03-30 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2991:
--

Attachment: hdfs-2991-0.22.txt

Here is a patch for the 0.22 branch.
The test is a bit simpler than Todd's, but it fails without the patch and 
succeeds with it.

> failure to load edits: ClassCastException
> -
>
> Key: HDFS-2991
> URL: https://issues.apache.org/jira/browse/HDFS-2991
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.24.0, 0.23.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.23.2
>
> Attachments: hdfs-2991-0.22.txt, hdfs-2991.txt, hdfs-2991.txt, 
> image-with-buggy-append.tgz
>
>
> In doing scale testing of trunk at r1291606, I hit the following:
> java.io.IOException: Error replaying edit log at offset 1354251
> Recent opcode offsets: 1350014 1350176 1350312 1354251
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:418)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:93)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:79)
> ...
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.INodeFile cannot be cast to 
> org.apache.hadoop.hdfs.server.namenode.INodeFileUnderConstruction
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:213)
> ... 13 more

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1601) Pipeline ACKs are sent as lots of tiny TCP packets

2012-03-15 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1601:
--

Fix Version/s: 0.22.1

> Pipeline ACKs are sent as lots of tiny TCP packets
> --
>
> Key: HDFS-1601
> URL: https://issues.apache.org/jira/browse/HDFS-1601
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 0.23.0, 0.22.1
>
> Attachments: hdfs-1601-22.txt, hdfs-1601.txt, hdfs-1601.txt
>
>
> I noticed in an HBase benchmark that the packet counts in my network 
> monitoring seemed high, so I took a short pcap trace and found that each 
> pipeline ACK was being sent as five packets, the first four of which 
> contained only one byte. We should buffer these bytes and send the 
> PipelineAck as one TCP packet.
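As a rough illustration of the buffering idea (the names here are assumptions, not the DataNode's own code), wrapping the socket stream in a {{BufferedOutputStream}} lets the handful of small writes that make up an ACK leave the host as a single TCP packet:

{code:java}
// Sketch: coalesce the small writes of a pipeline ACK into one TCP packet by
// buffering them and flushing once. Illustrative only.
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class AckSenderSketch {
  private final DataOutputStream out;

  AckSenderSketch(OutputStream socketOut) {
    // a small buffer is enough; an ACK is only a handful of bytes
    this.out = new DataOutputStream(new BufferedOutputStream(socketOut, 256));
  }

  void sendAck(long seqno, short[] replies) throws IOException {
    out.writeLong(seqno);               // previously: one tiny packet
    out.writeShort(replies.length);     // previously: another tiny packet
    for (short r : replies) {
      out.writeShort(r);
    }
    out.flush();                        // now everything goes out together
  }
}
{code}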

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2991) failure to load edits: ClassCastException

2012-02-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2991:
--

Target Version/s: 0.24.0, 0.23.1, 0.22.1  (was: 0.23.1, 0.24.0)

Seems like 0.22 has the same problem.
{{startFileInternal()}} journals OP_ADD for the new file via {{dir.addFile()}}, 
but never does it when opening a file for append (see the sketch below).
On the patch:
# Todd, you should first logOpenFile() then 
convertLastBlockToUnderConstruction(), because the latter can throw 
IOException, and we will end up with the hanging OP_ADD in edits, leading to 
fake recovery on startup.
# You do not need to logOpenFile() in case of file creation. It is already done 
in dir.addFile(). Would be very confusing to journal the same transaction twice.
# Not wild about changing LAYOUT_VERSION to overcome the bug. Should we rather 
come up with a repair tool? Should be easy to implement with OIV. 
Changing LAYOUT_VERSION will not be as simple as just incrementing. We will 
have to do a simultaneous jump for all versions, as we did in the past.
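For illustration, a minimal sketch of the journaling distinction discussed above (stub types; everything except the method names already mentioned is an assumption): the append path has to journal its own OP_ADD, while the create path already gets one from {{dir.addFile()}} and must not journal it a second time.

{code:java}
// Sketch of where OP_ADD gets journaled in startFileInternal(). Simplified
// stub types; only the decisions matter.
import java.io.IOException;

class StartFileSketch {
  interface EditLog { void logOpenFile(String path) throws IOException; }
  interface Directory { void addFile(String path) throws IOException; } // journals OP_ADD itself

  private final EditLog editLog;
  private final Directory dir;

  StartFileSketch(EditLog editLog, Directory dir) {
    this.editLog = editLog;
    this.dir = dir;
  }

  void startFileInternal(String path, boolean append) throws IOException {
    if (append) {
      // append path: OP_ADD is not journaled anywhere else, so log it here.
      // Its ordering relative to convertLastBlockToUnderConstruction(), which
      // can throw IOException, is the concern raised in point 1 above.
      editLog.logOpenFile(path);
    } else {
      // create path: dir.addFile() already journals OP_ADD; logging it here
      // as well would record the same transaction twice (point 2 above).
      dir.addFile(path);
    }
  }
}
{code}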

> failure to load edits: ClassCastException
> -
>
> Key: HDFS-2991
> URL: https://issues.apache.org/jira/browse/HDFS-2991
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.24.0, 0.23.1
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Attachments: hdfs-2991.txt, image-with-buggy-append.tgz
>
>
> In doing scale testing of trunk at r1291606, I hit the following:
> java.io.IOException: Error replaying edit log at offset 1354251
> Recent opcode offsets: 1350014 1350176 1350312 1354251
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:418)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:93)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:79)
> ...
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hdfs.server.namenode.INodeFile cannot be cast to 
> org.apache.hadoop.hdfs.server.namenode.INodeFileUnderConstruction
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:213)
> ... 13 more

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2886) CreateEditLogs should generate a realistic edit log.

2012-02-06 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2886:
--

  Resolution: Fixed
   Fix Version/s: 0.22.1
  0.23.1
  0.24.0
Target Version/s: 0.24.0, 0.23.1, 0.22.1  (was: 0.22.1, 0.23.1, 0.24.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I just committed this.

> CreateEditLogs should generate a realistic edit log.
> 
>
> Key: HDFS-2886
> URL: https://issues.apache.org/jira/browse/HDFS-2886
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: 0.24.0, 0.23.1, 0.22.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: 0.24.0, 0.23.1, 0.22.1
>
> Attachments: createLog-0.22.patch, createLog-trunk.patch
>
>
> CreateEditsLog generates non-standard transactions. In real life, the first 
> transaction that creates a file does not contain blocks, while CreateEditsLog 
> adds blocks to this transaction. Change CreateEditsLog to produce real-life 
> transactions. 
> Also cleanup unused parameters for {{FSDirectory.updateFile()}}.
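As a rough sketch of the sequence the description calls real-life (stub types and names are assumptions, not the actual CreateEditsLog code): the creating transaction carries no blocks, and the blocks only show up in the closing transaction (OP_CLOSE) written when the file is finished.

{code:java}
// Sketch of a realistic per-file edit sequence: OP_ADD without blocks at
// create time, OP_CLOSE with the blocks at close time. Illustrative only.
import java.util.Collections;
import java.util.List;

class EditsGeneratorSketch {
  record Block(long id, long numBytes) {}
  record Op(String opCode, String path, List<Block> blocks) {}

  // one file contributes two ops, as a real client run would produce
  List<Op> opsForFile(String path, List<Block> blocks) {
    Op create = new Op("OP_ADD", path, Collections.emptyList()); // no blocks yet
    Op close  = new Op("OP_CLOSE", path, blocks);                // blocks on close
    return List.of(create, close);
  }
}
{code}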

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2886) CreateEditLogs should generate a realistic edit log.

2012-02-03 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2886:
--

Attachment: createLog-trunk.patch
createLog-0.22.patch

Fixed CreateEditLog. Cleaned up parameters. Added () in TestEditLog.

> CreateEditLogs should generate a realistic edit log.
> 
>
> Key: HDFS-2886
> URL: https://issues.apache.org/jira/browse/HDFS-2886
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: 0.24.0, 0.23.1, 0.22.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: createLog-0.22.patch, createLog-trunk.patch
>
>
> CreateEditsLog generates non-standard transactions. In real life, the first 
> transaction that creates a file does not contain blocks, while CreateEditsLog 
> adds blocks to this transaction. Change CreateEditsLog to produce real-life 
> transactions. 
> Also cleanup unused parameters for {{FSDirectory.updateFile()}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2886) CreateEditLogs should generate a realistic edit log.

2012-02-03 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2886:
--

Target Version/s: 0.24.0, 0.23.1, 0.22.1  (was: 0.22.1, 0.23.1, 0.24.0)
  Status: Patch Available  (was: Open)

> CreateEditLogs should generate a realistic edit log.
> 
>
> Key: HDFS-2886
> URL: https://issues.apache.org/jira/browse/HDFS-2886
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: 0.24.0, 0.23.1, 0.22.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: createLog-0.22.patch, createLog-trunk.patch
>
>
> CreateEditsLog generates non-standard transactions. In real life, the first 
> transaction that creates a file does not contain blocks, while CreateEditsLog 
> adds blocks to this transaction. Change CreateEditsLog to produce real-life 
> transactions. 
> Also cleanup unused parameters for {{FSDirectory.updateFile()}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2718) Optimize OP_ADD in edits loading

2012-02-02 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2718:
--

  Resolution: Fixed
   Fix Version/s: 0.22.1
  0.23.1
  0.24.0
Target Version/s: 0.24.0, 0.23.1, 0.22.1  (was: 0.22.1, 0.23.1, 0.24.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I just committed this.

> Optimize OP_ADD in edits loading
> 
>
> Key: HDFS-2718
> URL: https://issues.apache.org/jira/browse/HDFS-2718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0, 0.24.0, 1.0.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: 0.24.0, 0.23.1, 0.22.1
>
> Attachments: editsLoader-0.22.patch, editsLoader-0.22.patch, 
> editsLoader-0.22.patch, editsLoader-trunk.patch, editsLoader-trunk.patch, 
> editsLoader-trunk.patch, editsLoader-trunk.patch
>
>
> While loading the edits journal, FSEditLog.loadEditRecords() processes OP_ADD 
> inefficiently. It first removes the existing INodeFile from the directory 
> tree, then adds it back as a regular INodeFile, and then replaces it with 
> INodeFileUnderConstruction if the file is not closed. This slows down edits 
> loading. OP_ADD should be done in one shot and retain the previously existing 
> data.
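For illustration only, a sketch contrasting the two loading strategies (stub types and signatures are assumptions; {{updateFile()}} echoes the FSDirectory method mentioned under HDFS-2886): the old path removes and rebuilds the inode, while the optimized path updates it in place and keeps the data that is already there.

{code:java}
// Sketch of the old versus the one-shot handling of OP_ADD during edits
// loading. Stub types, illustrative signatures.
import java.util.List;

class OpAddLoadingSketch {
  interface INodeFile {}
  interface Directory {
    void delete(String path);
    INodeFile addFile(String path, List<Long> blocks);
    void replaceNode(String path, INodeFile underConstruction);
    void updateFile(String path, List<Long> blocks, boolean underConstruction);
  }

  // old behaviour: remove the inode, re-add it, then swap in an
  // under-construction copy if the file is still open
  void loadOpAddOld(Directory dir, String path, List<Long> blocks, boolean closed) {
    dir.delete(path);
    INodeFile file = dir.addFile(path, blocks);
    if (!closed) {
      dir.replaceNode(path, toUnderConstruction(file));
    }
  }

  // optimized behaviour: one in-place update that retains existing data
  void loadOpAddNew(Directory dir, String path, List<Long> blocks, boolean closed) {
    dir.updateFile(path, blocks, !closed);
  }

  INodeFile toUnderConstruction(INodeFile f) { return f; } // placeholder
}
{code}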

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2877) If locking of a storage dir fails, it will remove the other NN's lock file on exit

2012-02-02 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2877:
--

Target Version/s: 0.23.1, 1.1.0, 0.22.1  (was: 1.1.0, 0.23.1)

Good catch. The fix looks good. Adding 0.22 to the targets.

> If locking of a storage dir fails, it will remove the other NN's lock file on 
> exit
> --
>
> Key: HDFS-2877
> URL: https://issues.apache.org/jira/browse/HDFS-2877
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.0, 0.24.0, 1.0.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-2877.txt
>
>
> In {{Storage.tryLock()}}, we call {{lockF.deleteOnExit()}} regardless of 
> whether we successfully lock the directory. So, if another NN has the 
> directory locked, then we'll fail to lock it the first time we start another 
> NN. But our failed start attempt will still remove the other NN's lockfile, 
> and a second attempt will erroneously start.
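A minimal sketch of the fix direction (illustrative, not the actual Storage code; the lock file name is an assumption): schedule the lock file for deletion only after the lock has actually been acquired, so a failed attempt leaves the owning NN's file alone.

{code:java}
// Sketch: only the process that actually holds the lock registers the lock
// file for deletion on exit.
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;

class StorageLockSketch {
  FileLock tryLock(File storageDir) throws IOException {
    File lockF = new File(storageDir, "in_use.lock");
    RandomAccessFile raf = new RandomAccessFile(lockF, "rws");
    FileLock lock;
    try {
      lock = raf.getChannel().tryLock();
    } catch (OverlappingFileLockException e) {
      lock = null;                 // already locked within this JVM
    }
    if (lock == null) {
      raf.close();                 // another NN owns the lock; leave its file alone
      return null;
    }
    lockF.deleteOnExit();          // only the holder schedules the cleanup
    return lock;                   // keep raf open: closing it would drop the lock
  }
}
{code}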

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2718) Optimize OP_ADD in edits loading

2012-02-01 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2718:
--

Attachment: editsLoader-trunk.patch
editsLoader-0.22.patch

I tried to move updateFiles() into EditLogLoader as Aaron suggested. It is 
possible, but I didn't like how it looked. Yes this method is called only in 
EditLogLoader, but by functionality it clearly belongs to FSDirectory, as it 
updates the file in the directory tree, same as addFile() or delete(). And 
methods should be assigned to classes based on the functionality rather than 
usage. So I left it in FSDirectory.

"Unprotected" means as I understand it that the method does not hold the lock, 
so the naming was consistent, but I agree we have too many "unprotected" 
methods for no apparent reason, so I renamed it as suggested.

The if statements go half one way and half the other. I am probably guilty of a 
lot of them, but I don't see the point of fighting it now.

> Optimize OP_ADD in edits loading
> 
>
> Key: HDFS-2718
> URL: https://issues.apache.org/jira/browse/HDFS-2718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0, 0.24.0, 1.0.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: editsLoader-0.22.patch, editsLoader-0.22.patch, 
> editsLoader-0.22.patch, editsLoader-trunk.patch, editsLoader-trunk.patch, 
> editsLoader-trunk.patch, editsLoader-trunk.patch
>
>
> While loading the edits journal, FSEditLog.loadEditRecords() processes OP_ADD 
> inefficiently. It first removes the existing INodeFile from the directory 
> tree, then adds it back as a regular INodeFile, and then replaces it with 
> INodeFileUnderConstruction if the file is not closed. This slows down edits 
> loading. OP_ADD should be done in one shot and retain the previously existing 
> data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2718) Optimize OP_ADD in edits loading

2012-01-30 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2718:
--

Target Version/s: 0.24.0, 0.23.1, 0.22.1  (was: 0.22.1, 0.23.1, 0.24.0)
  Status: Patch Available  (was: Open)

> Optimize OP_ADD in edits loading
> 
>
> Key: HDFS-2718
> URL: https://issues.apache.org/jira/browse/HDFS-2718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.0, 0.22.0, 0.24.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: editsLoader-0.22.patch, editsLoader-0.22.patch, 
> editsLoader-trunk.patch, editsLoader-trunk.patch, editsLoader-trunk.patch
>
>
> While loading the edits journal, FSEditLog.loadEditRecords() processes OP_ADD 
> inefficiently. It first removes the existing INodeFile from the directory 
> tree, then adds it back as a regular INodeFile, and then replaces it with 
> INodeFileUnderConstruction if the file is not closed. This slows down edits 
> loading. OP_ADD should be done in one shot and retain the previously existing 
> data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2718) Optimize OP_ADD in edits loading

2012-01-30 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2718:
--

Attachment: editsLoader-trunk.patch

> Optimize OP_ADD in edits loading
> 
>
> Key: HDFS-2718
> URL: https://issues.apache.org/jira/browse/HDFS-2718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0, 0.24.0, 1.0.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: editsLoader-0.22.patch, editsLoader-0.22.patch, 
> editsLoader-trunk.patch, editsLoader-trunk.patch, editsLoader-trunk.patch
>
>
> While loading the edits journal, FSEditLog.loadEditRecords() processes OP_ADD 
> inefficiently. It first removes the existing INodeFile from the directory 
> tree, then adds it back as a regular INodeFile, and then replaces it with 
> INodeFileUnderConstruction if the file is not closed. This slows down edits 
> loading. OP_ADD should be done in one shot and retain the previously existing 
> data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2718) Optimize OP_ADD in edits loading

2012-01-30 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2718:
--

Attachment: editsLoader-0.22.patch

> Optimize OP_ADD in edits loading
> 
>
> Key: HDFS-2718
> URL: https://issues.apache.org/jira/browse/HDFS-2718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0, 0.24.0, 1.0.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: editsLoader-0.22.patch, editsLoader-0.22.patch, 
> editsLoader-trunk.patch, editsLoader-trunk.patch
>
>
> While loading the edits journal, FSEditLog.loadEditRecords() processes OP_ADD 
> inefficiently. It first removes the existing INodeFile from the directory 
> tree, then adds it back as a regular INodeFile, and then replaces it with 
> INodeFileUnderConstruction if the file is not closed. This slows down edits 
> loading. OP_ADD should be done in one shot and retain the previously existing 
> data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2718) Optimize OP_ADD in edits loading

2012-01-30 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2718:
--

Target Version/s: 0.24.0, 0.23.1, 0.22.1  (was: 0.22.1, 0.23.1, 0.24.0)
  Status: Open  (was: Patch Available)

> Optimize OP_ADD in edits loading
> 
>
> Key: HDFS-2718
> URL: https://issues.apache.org/jira/browse/HDFS-2718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.0, 0.22.0, 0.24.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: editsLoader-0.22.patch, editsLoader-0.22.patch, 
> editsLoader-trunk.patch, editsLoader-trunk.patch
>
>
> While loading the edits journal, FSEditLog.loadEditRecords() processes OP_ADD 
> inefficiently. It first removes the existing INodeFile from the directory 
> tree, then adds it back as a regular INodeFile, and then replaces it with 
> INodeFileUnderConstruction if the file is not closed. This slows down edits 
> loading. OP_ADD should be done in one shot and retain the previously existing 
> data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2718) Optimize OP_ADD in edits loading

2012-01-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2718:
--

Attachment: editsLoader-trunk.patch

> Optimize OP_ADD in edits loading
> 
>
> Key: HDFS-2718
> URL: https://issues.apache.org/jira/browse/HDFS-2718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0, 0.24.0, 1.0.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: editsLoader-0.22.patch, editsLoader-trunk.patch, 
> editsLoader-trunk.patch
>
>
> While loading the edits journal, FSEditLog.loadEditRecords() processes OP_ADD 
> inefficiently. It first removes the existing INodeFile from the directory 
> tree, then adds it back as a regular INodeFile, and then replaces it with 
> INodeFileUnderConstruction if the file is not closed. This slows down edits 
> loading. OP_ADD should be done in one shot and retain the previously existing 
> data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2718) Optimize OP_ADD in edits loading

2012-01-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2718:
--

Target Version/s: 0.24.0, 0.23.1, 0.22.1  (was: 0.22.1, 0.23.1, 0.24.0)
  Status: Patch Available  (was: Open)

> Optimize OP_ADD in edits loading
> 
>
> Key: HDFS-2718
> URL: https://issues.apache.org/jira/browse/HDFS-2718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.0, 0.22.0, 0.24.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: editsLoader-0.22.patch, editsLoader-trunk.patch
>
>
> While loading the edits journal, FSEditLog.loadEditRecords() processes OP_ADD 
> inefficiently. It first removes the existing INodeFile from the directory 
> tree, then adds it back as a regular INodeFile, and then replaces it with 
> INodeFileUnderConstruction if the file is not closed. This slows down edits 
> loading. OP_ADD should be done in one shot and retain the previously existing 
> data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2718) Optimize OP_ADD in edits loading

2012-01-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2718:
--

Target Version/s: 0.24.0, 0.23.1, 0.22.1  (was: 0.22.1)
Assignee: Konstantin Shvachko

> Optimize OP_ADD in edits loading
> 
>
> Key: HDFS-2718
> URL: https://issues.apache.org/jira/browse/HDFS-2718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0, 0.24.0, 1.0.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: editsLoader-0.22.patch, editsLoader-trunk.patch
>
>
> While loading the edits journal, FSEditLog.loadEditRecords() processes OP_ADD 
> inefficiently. It first removes the existing INodeFile from the directory 
> tree, then adds it back as a regular INodeFile, and then replaces it with 
> INodeFileUnderConstruction if the file is not closed. This slows down edits 
> loading. OP_ADD should be done in one shot and retain the previously existing 
> data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2718) Optimize OP_ADD in edits loading

2012-01-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2718:
--

Attachment: editsLoader-trunk.patch

Here are the patches for trunk and 0.22 branch. The patch for trunk is 
applicable to 0.23 branch as well.
I tried to make the two patches as close to one another as possible.
No new tests included, but I modified existing tests so that they tested the 
new functionality.
I ran all tests for 0.22; everything passed. For trunk I ran some key tests and 
will let Jenkins validate everything else.
Merging with the HA branch shouldn't be hard. In 0.22 I tested this patch with 
block synchronization turned on - it also works fine.
Can somebody please review this?

> Optimize OP_ADD in edits loading
> 
>
> Key: HDFS-2718
> URL: https://issues.apache.org/jira/browse/HDFS-2718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0, 0.24.0, 1.0.0
>Reporter: Konstantin Shvachko
> Attachments: editsLoader-0.22.patch, editsLoader-trunk.patch
>
>
> While loading the edits journal, FSEditLog.loadEditRecords() processes OP_ADD 
> inefficiently. It first removes the existing INodeFile from the directory 
> tree, then adds it back as a regular INodeFile, and then replaces it with 
> INodeFileUnderConstruction if the file is not closed. This slows down edits 
> loading. OP_ADD should be done in one shot and retain the previously existing 
> data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2718) Optimize OP_ADD in edits loading

2012-01-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2718:
--

Attachment: editsLoader-0.22.patch

> Optimize OP_ADD in edits loading
> 
>
> Key: HDFS-2718
> URL: https://issues.apache.org/jira/browse/HDFS-2718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0, 0.24.0, 1.0.0
>Reporter: Konstantin Shvachko
> Attachments: editsLoader-0.22.patch
>
>
> While loading the edits journal, FSEditLog.loadEditRecords() processes OP_ADD 
> inefficiently. It first removes the existing INodeFile from the directory 
> tree, then adds it back as a regular INodeFile, and then replaces it with 
> INodeFileUnderConstruction if the file is not closed. This slows down edits 
> loading. OP_ADD should be done in one shot and retain the previously existing 
> data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1910) when dfs.name.dir and dfs.name.edits.dir are same fsimage will be saved twice every time

2012-01-09 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1910:
--

Attachment: saveImageOnce-v1.1.patch

> when dfs.name.dir and dfs.name.edits.dir are same fsimage will be saved twice 
> every time
> 
>
> Key: HDFS-1910
> URL: https://issues.apache.org/jira/browse/HDFS-1910
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Gokul
>Priority: Minor
>  Labels: critical-0.22.0
> Fix For: 0.22.1
>
> Attachments: saveImageOnce-v0.22.patch, saveImageOnce-v1.1.patch
>
>
> When the image and edits dirs are configured to be the same, the fsimage is 
> flushed from memory to disk twice whenever saveNamespace is done. This may 
> impact the performance of the BackupNode/SNN, which does a saveNamespace at 
> every checkpoint.
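A minimal sketch of the idea behind the fix (illustrative names, not the actual FSImage code): collect the image and edits directories into one de-duplicated set before saving, so a directory configured for both roles is written only once.

{code:java}
// Sketch: de-duplicate storage directories by canonical path so saveNamespace
// flushes each physical directory once, even if it serves both roles.
import java.io.File;
import java.io.IOException;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

class SaveNamespaceSketch {
  void saveNamespace(List<File> imageDirs, List<File> editsDirs) throws IOException {
    Set<String> dirs = new LinkedHashSet<>();
    for (File dir : imageDirs) {
      dirs.add(dir.getCanonicalPath());
    }
    for (File dir : editsDirs) {
      dirs.add(dir.getCanonicalPath());  // duplicates collapse here
    }
    for (String dir : dirs) {
      saveImageTo(new File(dir));        // each directory is flushed once
    }
  }

  void saveImageTo(File dir) {
    // writing of the fsimage and resetting of the edits log elided
  }
}
{code}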

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1910) when dfs.name.dir and dfs.name.edits.dir are same fsimage will be saved twice every time

2011-12-28 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1910:
--

Target Version/s: 1.1.0, 0.22.1  (was: 0.22.1, 1.1.0)
   Fix Version/s: 0.22.1
Hadoop Flags: Reviewed

I just committed this to 0.22 branch. Leaving open pending decision on 1.1.0

> when dfs.name.dir and dfs.name.edits.dir are same fsimage will be saved twice 
> every time
> 
>
> Key: HDFS-1910
> URL: https://issues.apache.org/jira/browse/HDFS-1910
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Gokul
>Priority: Minor
>  Labels: critical-0.22.0
> Fix For: 0.22.1
>
> Attachments: saveImageOnce-v0.22.patch
>
>
> When the image and edits dirs are configured to be the same, the fsimage is 
> flushed from memory to disk twice whenever saveNamespace is done. This may 
> impact the performance of the BackupNode/SNN, which does a saveNamespace at 
> every checkpoint.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2718) Optimize OP_ADD in edits loading

2011-12-22 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2718:
--

Target Version/s: 0.22.1

I remember this was discussed on several occasions but couldn't find a jira for 
that, let me know if you know one. This exists in all HDFS versions.

> Optimize OP_ADD in edits loading
> 
>
> Key: HDFS-2718
> URL: https://issues.apache.org/jira/browse/HDFS-2718
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0, 0.24.0, 1.0.0
>Reporter: Konstantin Shvachko
>
> While loading the edits journal, FSEditLog.loadEditRecords() processes OP_ADD 
> inefficiently. It first removes the existing INodeFile from the directory 
> tree, then adds it back as a regular INodeFile, and then replaces it with 
> INodeFileUnderConstruction if the file is not closed. This slows down edits 
> loading. OP_ADD should be done in one shot and retain the previously existing 
> data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1910) when dfs.name.dir and dfs.name.edits.dir are same fsimage will be saved twice every time

2011-12-19 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1910:
--

Attachment: saveImageOnce-v0.22.patch

Here is a simple fix for branch 0.22.

> when dfs.name.dir and dfs.name.edits.dir are same fsimage will be saved twice 
> every time
> 
>
> Key: HDFS-1910
> URL: https://issues.apache.org/jira/browse/HDFS-1910
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Gokul
>Priority: Minor
>  Labels: critical-0.22.0
> Attachments: saveImageOnce-v0.22.patch
>
>
> When the image and edits dirs are configured to be the same, the fsimage is 
> flushed from memory to disk twice whenever saveNamespace is done. This may 
> impact the performance of the BackupNode/SNN, which does a saveNamespace at 
> every checkpoint.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2698) BackupNode is downloading image from NameNode for every checkpoint

2011-12-16 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2698:
--

Attachment: rollFSImage.patch

Same fix, plus a modification to TestBackupNode that tests the condition.
The test succeeds with the patch and fails without it.

> BackupNode is downloading image from NameNode for every checkpoint
> --
>
> Key: HDFS-2698
> URL: https://issues.apache.org/jira/browse/HDFS-2698
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: rollFSImage.patch, rollFSImage.patch
>
>
> BackupNode can make periodic checkpoints without downloading image and edits 
> files from the NameNode, but with just saving the namespace to local disks. 
> This is not happening because NN renews checkpoint time after every 
> checkpoint, thus making its image ahead of the BN's even though they are in 
> sync.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2698) BackupNode is downloading image from NameNode for every checkpoint

2011-12-16 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2698:
--

Attachment: rollFSImage.patch

A one-line fix, but important for checkpoint performance.
This is also mentioned in this 
[comment|https://issues.apache.org/jira/browse/HDFS-903?focusedCommentId=13036705&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13036705]
 on HDFS-903. 

> BackupNode is downloading image from NameNode for every checkpoint
> --
>
> Key: HDFS-2698
> URL: https://issues.apache.org/jira/browse/HDFS-2698
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: rollFSImage.patch
>
>
> BackupNode can make periodic checkpoints without downloading image and edits 
> files from the NameNode, but with just saving the namespace to local disks. 
> This is not happening because NN renews checkpoint time after every 
> checkpoint, thus making its image ahead of the BN's even though they are in 
> sync.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1910) when dfs.name.dir and dfs.name.edits.dir are same fsimage will be saved twice every time

2011-12-12 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1910:
--

Fix Version/s: 0.22.1
   1.0.0

> when dfs.name.dir and dfs.name.edits.dir are same fsimage will be saved twice 
> every time
> 
>
> Key: HDFS-1910
> URL: https://issues.apache.org/jira/browse/HDFS-1910
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Gokul
>Priority: Minor
>  Labels: critical-0.22.0
> Fix For: 1.0.0, 0.22.1
>
>
> When the image and edits dirs are configured to be the same, the fsimage is 
> flushed from memory to disk twice whenever saveNamespace is done. This may 
> impact the performance of the BackupNode/SNN, which does a saveNamespace at 
> every checkpoint.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1108) Log newly allocated blocks

2011-12-07 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1108:
--

Status: Open  (was: Patch Available)

Why is it Patch Available if it is not for trunk?

> Log newly allocated blocks
> --
>
> Key: HDFS-1108
> URL: https://issues.apache.org/jira/browse/HDFS-1108
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Reporter: dhruba borthakur
>Assignee: Todd Lipcon
> Fix For: HA branch (HDFS-1623)
>
> Attachments: HDFS-1108.patch, hdfs-1108-habranch.txt, 
> hdfs-1108-habranch.txt, hdfs-1108-habranch.txt, hdfs-1108-habranch.txt, 
> hdfs-1108-habranch.txt, hdfs-1108.txt
>
>
> The current HDFS design says that newly allocated blocks for a file are not 
> persisted in the NN transaction log when the block is allocated. Instead, a 
> hflush() or a close() on the file persists the blocks into the transaction 
> log. It would be nice if we can immediately persist newly allocated blocks 
> (as soon as they are allocated) for specific files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1108) Log newly allocated blocks

2011-12-07 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1108:
--

Attachment: hdfs-1108-habranch.txt

Two suggestions.
# {{dfs.persist.blocks}} should control whether the block(s) are {{logSync()}}-ed, 
while {{persistBlocks()}} should be called unconditionally.
# When a file is being closed, the blocks should be {{logSync()}}-ed unconditionally. 
This is the current behavior, which should be retained.

For the BackupNode approach it is important that blocks are persisted, that is, sent 
to the backup stream. The backup stream can then take care of delivering 
the transaction to the BackupNode. If persistBlocks() is not called, the BackupNode 
will have no way to know about the blocks being created.
{{logSync()}} is not required for the BackupNode, but is required for your shared 
storage solution.

I think the patch should work for both approaches; a sketch of this control flow 
follows below.
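For illustration only, a sketch of that control flow (stub types; only {{dfs.persist.blocks}}, {{persistBlocks()}} and {{logSync()}} come from the discussion above, the rest is assumed): the journal record is always written so the BackupNode sees it, and the configuration flag only decides whether the caller also forces a sync.

{code:java}
// Sketch: persistBlocks() is unconditional, logSync() is governed by
// dfs.persist.blocks, and closing a file always syncs.
import java.io.IOException;

class BlockAllocationSketch {
  interface EditLog {
    void persistBlocks(String path) throws IOException; // journal the new block list
    void logSync() throws IOException;                  // force it to stable storage
  }

  private final EditLog editLog;
  private final boolean persistBlocks;                  // value of dfs.persist.blocks

  BlockAllocationSketch(EditLog editLog, boolean persistBlocks) {
    this.editLog = editLog;
    this.persistBlocks = persistBlocks;
  }

  void onBlockAllocated(String path) throws IOException {
    editLog.persistBlocks(path);   // always: the BackupNode needs the record
    if (persistBlocks) {
      editLog.logSync();           // optional durability for shared storage
    }
  }

  void onFileClose(String path) throws IOException {
    editLog.persistBlocks(path);
    editLog.logSync();             // closing always syncs, as today
  }
}
{code}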

> Log newly allocated blocks
> --
>
> Key: HDFS-1108
> URL: https://issues.apache.org/jira/browse/HDFS-1108
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Reporter: dhruba borthakur
>Assignee: Todd Lipcon
> Fix For: HA branch (HDFS-1623)
>
> Attachments: HDFS-1108.patch, hdfs-1108-habranch.txt, 
> hdfs-1108-habranch.txt, hdfs-1108-habranch.txt, hdfs-1108-habranch.txt, 
> hdfs-1108-habranch.txt, hdfs-1108.txt
>
>
> The current HDFS design says that newly allocated blocks for a file are not 
> persisted in the NN transaction log when the block is allocated. Instead, a 
> hflush() or a close() on the file persists the blocks into the transaction 
> log. It would be nice if we can immediately persist newly allocated blocks 
> (as soon as they are allocated) for specific files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2064) Warm HA NameNode going Hot

2011-12-02 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2064:
--

Attachment: failover-v-0.22.patch

Updated patch for current branch state.

> Warm HA NameNode going Hot
> --
>
> Key: HDFS-2064
> URL: https://issues.apache.org/jira/browse/HDFS-2064
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: name-node
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Blocker
> Attachments: WarmHA-GoingHot.pdf, failover-v-0.22.patch, 
> failover-v-0.22.patch
>
>
> This is the design for automatic hot HA for the HDFS NameNode. It involves the use 
> of HA software and a LoadReplicator - components external to Hadoop - which 
> substantially simplify the architecture by separating HA-specific from 
> Hadoop-specific problems. Without the external components it provides a warm 
> standby with manual failover.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2287) TestParallelRead has a small off-by-one bug

2011-11-28 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2287:
--

Attachment: hdfs-2287-v-0.22.patch

> TestParallelRead has a small off-by-one bug
> ---
>
> Key: HDFS-2287
> URL: https://issues.apache.org/jira/browse/HDFS-2287
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Trivial
> Fix For: 0.24.0
>
> Attachments: hdfs-2287-v-0.22.patch, hdfs-2287.txt
>
>
> Noticed this bug when I was running TestParallelRead - a simple off-by-one 
> error in some internal bounds checking.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1408) Herriot NN and DN clients should vend statistics

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1408:
--

Fix Version/s: 0.22.0

> Herriot NN and DN clients should vend statistics
> 
>
> Key: HDFS-1408
> URL: https://issues.apache.org/jira/browse/HDFS-1408
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Al Thompson
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: HADOOP-6927.y20s.patch, HDFS-1408.patch, 
> HDFS-1408.patch, HDFS-1408.patch, HDFS-1408.patch, add_JMXlisteners.patch, 
> jmx.patch, jmx.patch, jmx.patch, jmx.patch
>
>
> The HDFS web user interface serves useful information through dfshealth.jsp 
> and dfsnodelist.jsp.
> The Herriot interface to the namenode and datanode (as implemented in 
> NNClient and DNClient, respectively) would benefit from the addition of some 
> way to channel this information. In the case of DNClient this can be an 
> injected method that returns a DatanodeDescriptor relevant to the underlying 
> datanode.
> There seems to be no analogous NamenodeDescriptor. It may be useful to add 
> this as a facade to a visitor that aggregates values across the filesystem 
> datanodes. These values are (from dfshealth JSP):
> Configured Capacity
> DFS Used
> Non DFS Used
> DFS Remaining
> DFS Used%
> DFS Remaining%
> Live Nodes
> Dead Nodes
> Decommissioning Nodes
> Number of Under-Replicated Blocks
> Attributes reflecting the web user interface header may also be useful such 
> as When-Started, Version, When-Compiled, and Upgrade-Status.
> A NamenodeDescriptor would essentially "push down" the code in dfshealth web 
> UI behind a more general abstraction. If it is objectionable to make this 
> class available in HDFS, perhaps this could be packaged in a Herriot specific 
> way.
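Purely as a hypothetical sketch of what such a facade could look like (this type does not exist in HDFS; every name below is an assumption mirroring the dfshealth.jsp values listed above):

{code:java}
// Hypothetical NamenodeDescriptor facade aggregating the dfshealth values.
interface NamenodeDescriptor {
  long getConfiguredCapacity();
  long getDfsUsed();
  long getNonDfsUsed();
  long getDfsRemaining();
  float getDfsUsedPercent();
  float getDfsRemainingPercent();
  int getLiveNodes();
  int getDeadNodes();
  int getDecommissioningNodes();
  long getUnderReplicatedBlocks();

  // header-style attributes mentioned in the description
  long getStartedTime();
  String getVersion();
  String getCompileInfo();
  String getUpgradeStatus();
}
{code}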

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1387) Update HDFS permissions guide for security

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1387:
--

Fix Version/s: 0.22.0

> Update HDFS permissions guide for security
> --
>
> Key: HDFS-1387
> URL: https://issues.apache.org/jira/browse/HDFS-1387
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 0.22.0
>
> Attachments: hdfs-1387.txt, hdfs_permissions_guide.pdf
>
>
> The HDFS permissions guide currently makes several statements that are no 
> longer correct now that we provide security.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1003) authorization checks for inter-server protocol (based on HADOOP-6600)

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1003:
--

Fix Version/s: 0.22.0

> authorization checks for inter-server protocol (based on HADOOP-6600)
> -
>
> Key: HDFS-1003
> URL: https://issues.apache.org/jira/browse/HDFS-1003
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Boris Shkolnik
>Assignee: Boris Shkolnik
> Fix For: 0.22.0
>
> Attachments: HDFS-1003.patch
>
>
> authorization checks for inter-server protocol (based on HADOOP-6600)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1157) Modifications introduced by HDFS-1150 are breaking aspect's bindings

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1157:
--

Fix Version/s: 0.22.0

> Modifications introduced by HDFS-1150 are breaking aspect's bindings
> 
>
> Key: HDFS-1157
> URL: https://issues.apache.org/jira/browse/HDFS-1157
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: hdfs-1157.patch
>
>
> Modifications introduced by HDFS-1150 to the DataNode class break some of the 
> Herriot (test framework) bindings. This JIRA tracks the fix.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1190) Remove unused getNamenode() method from DataNode.

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1190:
--

Fix Version/s: 0.22.0

> Remove unused getNamenode() method from DataNode.
> -
>
> Key: HDFS-1190
> URL: https://issues.apache.org/jira/browse/HDFS-1190
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Jeff Ames
>Assignee: Jeff Ames
>Priority: Minor
> Fix For: 0.22.0
>
> Attachments: HDFS-1190.patch
>
>
> This patch removes the getNamenode() method from DataNode, which appears to 
> be unused.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1562) Add rack policy tests

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1562:
--

Fix Version/s: 0.22.0

> Add rack policy tests
> -
>
> Key: HDFS-1562
> URL: https://issues.apache.org/jira/browse/HDFS-1562
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: name-node, test
>Affects Versions: 0.22.0
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 0.22.0
>
> Attachments: hdfs-1562-1.patch, hdfs-1562-2.patch, hdfs-1562-3.patch, 
> hdfs-1562-4.patch, hdfs-1562-5.patch
>
>
> The existing replication tests (TestBlocksWithNotEnoughRacks, 
> TestPendingReplication, TestOverReplicatedBlocks, TestReplicationPolicy, 
> TestUnderReplicatedBlocks, and TestReplication) are missing tests for rack 
> policy violations.  This jira adds the following tests which I created when 
> generating a new patch for HDFS-15.
> * Test that blocks that have a sufficient number of total replicas, but are 
> not replicated cross rack, get replicated cross rack when a rack becomes 
> available.
> * Test that new blocks for an underreplicated file will get replicated cross 
> rack. 
> * Mark a block as corrupt, test that when it is re-replicated that it is 
> still replicated across racks.
> * Reduce the replication factor of a file, making sure that the only block 
> that is across racks is not removed when deleting replicas.
> * Test that when a block is replicated because a replica is lost due to host 
> failure, the rack policy is preserved.
> * Test that when the excess replicas of a block are reduced due to a node 
> re-joining the cluster, the rack policy is not violated.
> * Test that rack policy is still respected when blocks are replicated due to 
> node decommissioning.
> * Test that rack policy is still respected when blocks are replicated due to 
> node decommissioning, even when the blocks are over-replicated.
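
For reference, the invariant all of these tests exercise can be stated as a standalone sketch (the helper below is hypothetical, not part of the test framework): a block with more than one replica should have its replicas spread over at least two racks.

{code}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class RackPolicySketch {
  /** True if the replica placement satisfies the cross-rack policy. */
  static boolean satisfiesRackPolicy(List<String> replicaRacks) {
    Set<String> racks = new HashSet<String>(replicaRacks);
    return replicaRacks.size() <= 1 || racks.size() >= 2;
  }
}
{code}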

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1205) FSDatasetAsyncDiskService should name its threads

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1205:
--

Fix Version/s: 0.22.0

> FSDatasetAsyncDiskService should name its threads
> -
>
> Key: HDFS-1205
> URL: https://issues.apache.org/jira/browse/HDFS-1205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 0.22.0
>
> Attachments: hdfs-1205-0.20.txt, hdfs-1205.txt, hdfs-1205.txt
>
>
> FSDatasetAsyncDiskService creates threads but doesn't name them. The 
> ThreadFactory should name them with the volume they work on as well as a 
> thread index.
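
Roughly the kind of change being asked for, as a sketch rather than the committed patch (class name, pool sizes, and the name pattern below are illustrative):

{code}
import java.io.File;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class NamedAsyncDiskThreads {
  /** Build a per-volume executor whose threads carry the volume and an index. */
  static ThreadPoolExecutor newExecutorFor(final File volume) {
    ThreadFactory factory = new ThreadFactory() {
      private int counter = 0;
      public synchronized Thread newThread(Runnable r) {
        Thread t = new Thread(r);
        t.setName("Async disk worker #" + (counter++) + " for volume " + volume);
        return t;
      }
    };
    return new ThreadPoolExecutor(1, 4, 60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>(), factory);
  }
}
{code}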

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1308) job conf key for the services name of DelegationToken for HFTP url is constructed incorrectly in HFTPFileSystem (part of MR-1718)

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1308:
--

Fix Version/s: 0.22.0

>  job conf key for the services name of DelegationToken for HFTP url is 
> constructed incorrectly in HFTPFileSystem (part of MR-1718)
> --
>
> Key: HDFS-1308
> URL: https://issues.apache.org/jira/browse/HDFS-1308
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Boris Shkolnik
>Assignee: Boris Shkolnik
> Fix For: 0.22.0
>
> Attachments: HDFS-1308-1.patch, HDFS-1308.patch
>
>
> Change the HFTP init code that checks for existing delegation tokens.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1044) Cannot submit mapreduce job from secure client to unsecure server

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1044:
--

Fix Version/s: 0.22.0

> Cannot submit mapreduce job from secure client to unsecure server
> 
>
> Key: HDFS-1044
> URL: https://issues.apache.org/jira/browse/HDFS-1044
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Boris Shkolnik
>Assignee: Boris Shkolnik
> Fix For: 0.22.0
>
> Attachments: HDFS-1044-1.patch, HDFS-1044-1.patch, 
> HDFS-1044-BP20-2.patch, HDFS-1044-BP20-3.patch, HDFS-1044-BP20-5.patch, 
> HDFS-1044-BP20-6.patch, HDFS-1044-BP20.patch
>
>
> Looks like it tries to get a DelegationToken and fails because the SecureManager 
> on the server doesn't start in a non-secure environment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1301) TestHDFSProxy need to use server side conf for ProxyUser stuff.

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1301:
--

Fix Version/s: 0.22.0

> TestHDFSProxy need to use server side conf for ProxyUser stuff.
> ---
>
> Key: HDFS-1301
> URL: https://issues.apache.org/jira/browse/HDFS-1301
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Boris Shkolnik
>Assignee: Boris Shkolnik
> Fix For: 0.22.0
>
> Attachments: HDFS-1301-1.patch, HDFS-1301-BP20-1.patch, 
> HDFS-1301-BP20.patch, HDFS-1301.patch
>
>
> Currently TestHdfsProxy sets hadoop.proxyuser.USER.groups in a local copy of 
> the configuration, but ProxyUsers only looks at the server-side config.
> For the test we can use a static method in ProxyUsers to load the config.
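
A sketch of that suggestion, assuming ProxyUsers exposes a static refresh method along these lines (the method name and the test values are my assumptions, not verified against the branch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.authorize.ProxyUsers;

class ProxyUserTestSetupSketch {
  /** Push the test's proxy-user settings into the server-side ProxyUsers cache. */
  static void loadProxyUserConf(Configuration conf) {
    conf.set("hadoop.proxyuser.USER.groups", "users");
    conf.set("hadoop.proxyuser.USER.hosts", "localhost");
    ProxyUsers.refreshSuperUserGroupsConfiguration(conf);
  }
}
{code}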

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1021) specify correct server principal for RefreshAuthorizationPolicyProtocol and RefreshUserToGroupMappingsProtocol protocols in DFSAdmin (for HADOOP-6612)

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1021:
--

Fix Version/s: 0.22.0

> specify correct server principal for RefreshAuthorizationPolicyProtocol and 
> RefreshUserToGroupMappingsProtocol protocols in DFSAdmin (for HADOOP-6612)
> --
>
> Key: HDFS-1021
> URL: https://issues.apache.org/jira/browse/HDFS-1021
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Reporter: Boris Shkolnik
>Assignee: Boris Shkolnik
> Fix For: 0.22.0
>
> Attachments: HDFS-1021.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1146) Javadoc for getDelegationTokenSecretManager in FSNamesystem

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1146:
--

Fix Version/s: 0.22.0

> Javadoc for getDelegationTokenSecretManager in FSNamesystem
> ---
>
> Key: HDFS-1146
> URL: https://issues.apache.org/jira/browse/HDFS-1146
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Fix For: 0.22.0
>
> Attachments: HDFS-1146-y20.1.patch, HDFS-1146.1.patch
>
>
> Javadoc is missing for public method getDelegationTokenSecretManager in 
> FSNamesystem.
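
For illustration, the kind of Javadoc being asked for might read roughly as follows (the wording and the dtSecretManager field name are assumptions, not the committed text):

{code}
/**
 * @return the DelegationTokenSecretManager used by this namesystem to
 *         issue, renew and cancel delegation tokens.
 */
public DelegationTokenSecretManager getDelegationTokenSecretManager() {
  return dtSecretManager;
}
{code}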

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1334) open in HftpFileSystem does not add delegation tokens to the url.

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1334:
--

Fix Version/s: 0.22.0

> open in HftpFileSystem does not add delegation tokens to the url.
> -
>
> Key: HDFS-1334
> URL: https://issues.apache.org/jira/browse/HDFS-1334
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Fix For: 0.22.0
>
> Attachments: HDFS-1334.1.patch
>
>
> The open method in HftpFileSystem uses ByteRangeInputStream for the URL 
> connection, but it does not add the delegation tokens to the URL before passing 
> it to the ByteRangeInputStream, even if security is enabled. Therefore the 
> request fails when security is enabled.
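
A rough sketch of the idea, with the caveat that the "delegation" query-parameter name and the URL layout below are assumptions rather than the actual HftpFileSystem code:

{code}
import java.net.URL;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

class HftpOpenUrlSketch {
  /** Build the /data URL, appending the encoded delegation token when security is on. */
  static URL openUrl(String nnHost, int nnPort, String path, String user,
                     Token<?> delegationToken) throws Exception {
    String query = "?ugi=" + user;
    if (UserGroupInformation.isSecurityEnabled() && delegationToken != null) {
      query += "&delegation=" + delegationToken.encodeToUrlString();
    }
    return new URL("http", nnHost, nnPort, "/data" + path + query);
  }
}
{code}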

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1946) HDFS part of HADOOP-7291

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1946:
--

Fix Version/s: 0.22.0

> HDFS part of HADOOP-7291
> 
>
> Key: HDFS-1946
> URL: https://issues.apache.org/jira/browse/HDFS-1946
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 0.22.0
>
> Attachments: hdfs-1946-1.patch
>
>
> The hudson-test-patch target needs to be updated to not pass python.home 
> since this argument is no longer needed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2514) Link resolution bug for intermediate symlinks with relative targets

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2514:
--

Fix Version/s: 0.22.0

> Link resolution bug for intermediate symlinks with relative targets
> ---
>
> Key: HDFS-2514
> URL: https://issues.apache.org/jira/browse/HDFS-2514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0, 0.22.0, 0.23.0
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 0.22.0
>
> Attachments: hdfs-2514-1.patch, hdfs-2514-2.patch, hdfs-2514-3.patch
>
>
> There's a bug in the way the Namenode resolves intermediate symlinks (ie the 
> symlink is not the final path component) in paths when the symlink's target 
> is a relative path. Will post the full description in the first comment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1038) In nn_browsedfscontent.jsp fetch delegation token only if security is enabled.

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1038:
--

Fix Version/s: 0.22.0

> In nn_browsedfscontent.jsp fetch delegation token only if security is enabled.
> --
>
> Key: HDFS-1038
> URL: https://issues.apache.org/jira/browse/HDFS-1038
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Fix For: 0.22.0
>
> Attachments: HDFS-1038-y20.1.patch, HDFS-1038.1.patch, 
> HDFS-1038.2.patch, HDFS-1038.3.patch
>
>
> nn_browsedfscontent.jsp calls getDelegationToken even if security is 
> disabled, which causes an NPE.
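
The fix it describes amounts to a guard like the following sketch (the helper class is hypothetical; in the JSP the existing getDelegationToken call would simply be wrapped in this check):

{code}
import org.apache.hadoop.security.UserGroupInformation;

class BrowseDfsContentSketch {
  /** Only attempt the token fetch when security is actually enabled. */
  static boolean shouldFetchDelegationToken() {
    return UserGroupInformation.isSecurityEnabled();
  }
}
{code}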

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1039) Service should be set in the token in JspHelper.getUGI

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1039:
--

Fix Version/s: 0.22.0

> Service should be set in the token in JspHelper.getUGI
> --
>
> Key: HDFS-1039
> URL: https://issues.apache.org/jira/browse/HDFS-1039
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Fix For: 0.22.0
>
> Attachments: HDFS-1039-y20.1.patch, HDFS-1039-y20.2.1.patch, 
> HDFS-1039-y20.2.patch, HDFS-1039.2.patch, HDFS-1039.4.patch
>
>
> The delegation token added to the UGI in the getUGI method of JspHelper does 
> not have the service set. Therefore, this token cannot be used to connect to the 
> namenode.
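
A sketch of the missing step (the helper class is hypothetical; Token.setService and Text are the real Hadoop types):

{code}
import java.net.InetSocketAddress;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;

class TokenServiceSketch {
  /** Record the NameNode address as the token's service so the token can be
   *  selected later when connecting to that NameNode. */
  static void setService(Token<?> token, InetSocketAddress nnAddr) {
    token.setService(new Text(
        nnAddr.getAddress().getHostAddress() + ":" + nnAddr.getPort()));
  }
}
{code}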

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1055) Improve thread naming for DataXceivers

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1055:
--

Fix Version/s: 0.22.0

> Improve thread naming for DataXceivers
> --
>
> Key: HDFS-1055
> URL: https://issues.apache.org/jira/browse/HDFS-1055
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 0.22.0
>
> Attachments: dataxceiver-merged.patch, dataxceiver.patch, 
> hdfs-1055-1.patch, hdfs-1055-branch20.txt
>
>
> The DataXceiver threads are named using the default Daemon naming, which is 
> Runnable.toString(). Currently this isn't implemented, so threads have names 
> like org.apache.hadoop.hdfs.server.datanode.DataXceiver@579c9a6b. It would be 
> very handy for debugging (and even ops maybe) to have a better name like 
> "DataXceiver for client 1.2.3.4 [reading block_234254242]"

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1036) in DelegationTokenFetch dfs.getURI returns no port

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1036:
--

Fix Version/s: 0.22.0

> in DelegationTokenFetch dfs.getURI returns no port
> --
>
> Key: HDFS-1036
> URL: https://issues.apache.org/jira/browse/HDFS-1036
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Boris Shkolnik
>Assignee: Boris Shkolnik
> Fix For: 0.22.0
>
> Attachments: HDFS-1036-BP20-1.patch, HDFS-1036-BP20-Fix.patch, 
> HDFS-1036-BP20.patch, HDFS-1036-doc.patch, HDFS-1036.patch, fetchdt_doc.patch
>
>
> dfs.getUri().getPort() returns -1.
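
One defensive way to handle it, as a sketch and not necessarily the committed fix: fall back to the default NameNode port when the filesystem URI carries none.

{code}
import java.net.URI;
import org.apache.hadoop.hdfs.server.namenode.NameNode;

class NameNodePortSketch {
  /** Return the port from the filesystem URI, or the default NameNode port. */
  static int nnPort(URI fsUri) {
    int port = fsUri.getPort();
    return port == -1 ? NameNode.DEFAULT_PORT : port;
  }
}
{code}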

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1500) TestOfflineImageViewer failing on trunk

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1500:
--

Fix Version/s: 0.22.0

> TestOfflineImageViewer failing on trunk
> ---
>
> Key: HDFS-1500
> URL: https://issues.apache.org/jira/browse/HDFS-1500
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test, tools
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 0.22.0
>
> Attachments: hdfs-1500.txt
>
>
> Testcase: testOIV took 22.679 sec
>   FAILED
> Failed reading valid file: No image processor to read version -26 is 
> available.
> junit.framework.AssertionFailedError: Failed reading valid file: No image 
> processor to read version -26 is available.
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer.outputOfLSVisitor(TestOfflineImageViewer.java:171)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer.testOIV(TestOfflineImageViewer.java:86)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1417) Add @Override annotation to SimulatedFSDataset methods that implement FSDatasetInterface

2011-11-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1417:
--

Fix Version/s: 0.22.0

> Add @Override annotation to SimulatedFSDataset methods that implement 
> FSDatasetInterface
> 
>
> Key: HDFS-1417
> URL: https://issues.apache.org/jira/browse/HDFS-1417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: 0.22.0
>
> Attachments: HDFS-1417.patch
>
>
> @Override annotations are inconsistently added to methods implementing the 
> interface.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1409) The "register" method of the BackupNode class should be "UnsupportedActionException("register")"

2011-11-22 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1409:
--

Component/s: name-node
   Assignee: Ching-Shen Chen

> The "register" method of the BackupNode class should be 
> "UnsupportedActionException("register")"
> 
>
> Key: HDFS-1409
> URL: https://issues.apache.org/jira/browse/HDFS-1409
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Ching-Shen Chen
>Assignee: Ching-Shen Chen
>Priority: Trivial
> Fix For: 0.22.0
>
> Attachments: HDFS-1409.patch, HDFS-1409.patch
>
>
> The register method of the BackupNode class should be 
> "UnsupportedActionException("register")" rather than  
> "UnsupportedActionException("journal")".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1935) Build should not redownload ivy on every invocation

2011-11-22 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1935:
--

Component/s: build
   Priority: Minor  (was: Trivial)
   Assignee: Joep Rottinghuis

> Build should not redownload ivy on every invocation
> ---
>
> Key: HDFS-1935
> URL: https://issues.apache.org/jira/browse/HDFS-1935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Joep Rottinghuis
>Priority: Minor
>  Labels: newbie
> Fix For: 0.22.0
>
> Attachments: diff, hdfs-1935.patch, hdfs-1935.txt
>
>
> Currently we re-download ivy every time we build. If the jar already exists, 
> we should skip this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2002) Incorrect computation of needed blocks in getTurnOffTip()

2011-11-01 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2002:
--

   Resolution: Fixed
Fix Version/s: 0.24.0
   0.23.0
   0.22.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I just committed this. Thank you Plamen.

> Incorrect computation of needed blocks in getTurnOffTip()
> -
>
> Key: HDFS-2002
> URL: https://issues.apache.org/jira/browse/HDFS-2002
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>  Labels: newbie
> Fix For: 0.22.0, 0.23.0, 0.24.0
>
> Attachments: HADOOP-2002_TRUNK.patch, hdfs-2002.patch, 
> testsafemode.patch, testsafemode.patch
>
>
> {{SafeModeInfo.getTurnOffTip()}} under-reports the number of blocks needed to 
> reach the safemode threshold.
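
The arithmetic the message should reflect is simply the gap between the threshold and the blocks already reported safe, as in this sketch (field names follow SafeModeInfo informally; the committed fix may differ in detail):

{code}
class SafeModeTipSketch {
  /** Additional blocks that must reach minimal replication before safemode
   *  can be turned off. */
  static long blocksNeeded(long blockThreshold, long blockSafe) {
    return Math.max(blockThreshold - blockSafe, 0L);
  }
}
{code}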

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2285) BackupNode should reject requests trying to modify namespace

2011-10-29 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2285:
--

   Resolution: Fixed
Fix Version/s: 0.24.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I just committed this to trunk. Thank you Uma.

The patch does not apply to the 0.23 branch any more. I recommend backporting it.

> BackupNode should reject requests trying to modify namespace
> 
>
> Key: HDFS-2285
> URL: https://issues.apache.org/jira/browse/HDFS-2285
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0, 0.24.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: 0.22.0, 0.24.0
>
> Attachments: BNsafemode.patch, HDFS-2285.patch, HDFS-2285.patch
>
>
> I am trying to remove file from BackupNode using
> {code}hadoop fs -fs hdfs://backup.node.com:50100 -rm /README.txt{code}
> which is supposed to fail. But it seems to be hanging forever.
> Needs some investigation. It used to throw SafeModeException if I remember 
> correctly.
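
The expected behaviour is a fast failure rather than a hang; a minimal sketch of such a guard (hypothetical, not the committed patch) could look like:

{code}
import java.io.IOException;

class BackupNodeGuardSketch {
  /** Reject any operation that would modify the namespace on the BackupNode. */
  static void checkCanModifyNamespace(boolean modifiesNamespace) throws IOException {
    if (modifiesNamespace) {
      throw new IOException(
          "BackupNode is in safe mode and cannot modify the namespace");
    }
  }
}
{code}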

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2285) BackupNode should reject requests trying to modify namespace

2011-10-29 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2285:
--

Attachment: HDFS-2285.patch

I removed changes to BackupImage, which were unused imports only.

> BackupNode should reject requests trying to modify namespace
> 
>
> Key: HDFS-2285
> URL: https://issues.apache.org/jira/browse/HDFS-2285
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0, 0.24.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: 0.22.0
>
> Attachments: BNsafemode.patch, HDFS-2285.patch, HDFS-2285.patch
>
>
> I am trying to remove file from BackupNode using
> {code}hadoop fs -fs hdfs://backup.node.com:50100 -rm /README.txt{code}
> which is supposed to fail. But it seems to be hanging forever.
> Needs some investigation. It used to throw SafeModeException if I remember 
> correctly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2307) More Coverage needed for FSDirectory

2011-10-25 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2307:
--

Fix Version/s: (was: 0.22.0)

> More Coverage needed for FSDirectory
> 
>
> Key: HDFS-2307
> URL: https://issues.apache.org/jira/browse/HDFS-2307
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Benoy Antony
> Attachments: 59.html
>
>
> The unit tests do not cover some of the symlink logic in FSDirectory. 
> The impact of adding a symlink on the nameQuota is not covered.
> The unit test coverage for FSDirectory is attached. The uncovered lines are 
> in the addToParent function.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1900) Use the block size key defined by common

2011-10-25 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1900:
--

Status: Open  (was: Patch Available)

> Use the block size key defined by common 
> -
>
> Key: HDFS-1900
> URL: https://issues.apache.org/jira/browse/HDFS-1900
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Eli Collins
>Assignee: Abel Perez
>  Labels: newbie
> Fix For: 0.22.0
>
> Attachments: HDFS-1900.txt
>
>
> HADOOP-4952 added a dfs.block.size key to common configuration, defined in 
> o.a.h.fs.FsConfig. This conflicts with the original HDFS block size key of 
> the same name, which is now deprecated in favor of dfs.blocksize. It doesn't 
> make sense to have two different keys for the block size (ie they can 
> disagree). Why doesn't HDFS just use the key defined in common?
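
For illustration, consumers could converge on the single non-deprecated key while still tolerating the old one, roughly as follows (the key names come from the description above; the 64 MB default is only illustrative):

{code}
import org.apache.hadoop.conf.Configuration;

class BlockSizeKeySketch {
  /** Read the block size via dfs.blocksize, falling back to the old key. */
  static long blockSize(Configuration conf) {
    long oldKeyValue = conf.getLong("dfs.block.size", 64L * 1024 * 1024);
    return conf.getLong("dfs.blocksize", oldKeyValue);
  }
}
{code}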

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2498) TestParallelRead times out consistently on Jenkins

2011-10-25 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2498:
--

Priority: Blocker  (was: Major)

Upgrading to a blocker, as it is a test failure.

> TestParallelRead times out consistently on Jenkins
> --
>
> Key: HDFS-2498
> URL: https://issues.apache.org/jira/browse/HDFS-2498
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Priority: Blocker
> Fix For: 0.22.0
>
> Attachments: TestParallelRead.txt
>
>
> During the last several Jenkins builds TestParallelRead consistently fails. See 
> Hadoop-Hdfs-22-branch for logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2498) TestParallelRead times out consistently on Jenkins

2011-10-24 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2498:
--

Attachment: TestParallelRead.txt

I cannot reproduce it locally. So I ran Jenkins with {{-Dtest.output=yes}}. 
Here is the log.

> TestParallelRead times out consistently on Jenkins
> --
>
> Key: HDFS-2498
> URL: https://issues.apache.org/jira/browse/HDFS-2498
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
> Fix For: 0.22.0
>
> Attachments: TestParallelRead.txt
>
>
> During the last several Jenkins builds TestParallelRead consistently fails. See 
> Hadoop-Hdfs-22-branch for logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2452) OutOfMemoryError in DataXceiverServer takes down the DataNode

2011-10-23 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2452:
--

Fix Version/s: 0.24.0
   0.23.0

> OutOfMemoryError in DataXceiverServer takes down the DataNode
> -
>
> Key: HDFS-2452
> URL: https://issues.apache.org/jira/browse/HDFS-2452
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.22.0, 0.23.0, 0.24.0
>Reporter: Konstantin Shvachko
>Assignee: Uma Maheswara Rao G
> Fix For: 0.22.0, 0.23.0, 0.24.0
>
> Attachments: HDFS-2452-22Branch.2.patch, HDFS-2452-22branch.1.patch, 
> HDFS-2452-22branch.patch, HDFS-2452-22branch.patch, HDFS-2452-22branch.patch, 
> HDFS-2452-22branch.patch, HDFS-2452-22branch_with-around_.patch, 
> HDFS-2452-22branch_with-around_.patch, HDFS-2452-Trunk-src-fix.patch, 
> HDFS-2452-Trunk.patch, HDFS-2452.patch, HDFS-2452.patch
>
>
> OutOfMemoryError brings down the DataNode when DataXceiverServer tries to spawn 
> a new data transfer thread.
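
The defensive pattern this points at, as a generic self-contained sketch rather than the committed DataXceiverServer change: catch OutOfMemoryError around the thread spawn, back off, and keep accepting instead of letting the error kill the server thread.

{code}
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

class AcceptLoopSketch implements Runnable {
  private final ServerSocket serverSocket;

  AcceptLoopSketch(ServerSocket serverSocket) {
    this.serverSocket = serverSocket;
  }

  public void run() {
    while (!serverSocket.isClosed()) {
      try {
        Socket peer = serverSocket.accept();
        new Thread(new Handler(peer)).start();   // may throw OutOfMemoryError
      } catch (OutOfMemoryError oom) {
        // Out of threads or heap: sleep and retry instead of dying.
        try { Thread.sleep(30 * 1000); } catch (InterruptedException ie) { return; }
      } catch (IOException ioe) {
        // Socket closed during shutdown, or a transient accept failure.
      }
    }
  }

  static class Handler implements Runnable {
    private final Socket peer;
    Handler(Socket peer) { this.peer = peer; }
    public void run() { /* serve the connection */ }
  }
}
{code}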

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2491) TestBalancer can fail when datanode utilization and avgUtilization is exactly same.

2011-10-22 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2491:
--

Target Version/s: 0.22.0, 0.24.0  (was: 0.24.0, 0.22.0)
   Fix Version/s: 0.24.0
  0.22.0

> TestBalancer can fail when datanode utilization and avgUtilization is exactly 
> same.
> ---
>
> Key: HDFS-2491
> URL: https://issues.apache.org/jira/browse/HDFS-2491
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.22.0, 0.23.0, 0.24.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: 0.22.0, 0.24.0
>
> Attachments: HDFS-2492-22Branch.patch, HDFS-2492.patch
>
>
> Stack Trace:
> junit.framework.AssertionFailedError: 127.0.0.1:60986is not an underUtilized 
> node: utilization=22.0 avgUtilization=22.0 threshold=10.0
> at 
> org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:1014)
> at 
> org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:953)
> at org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:1502)
> at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:247)
> at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.test(TestBalancer.java:234)
> at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.twoNodeTest(TestBalancer.java:312)
> at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.__CLR2_4_39j3j5b10ou(TestBalancer.java:328)
> at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0(TestBalancer.java:324)
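
The failure mode is the boundary case visible in the assertion above: utilization exactly equal to avgUtilization. Illustratively, and not as the actual Balancer code, the classification has to be exhaustive at that boundary:

{code}
class BalancerBoundarySketch {
  /** Classify a node; the non-strict comparison ensures util == avg still
   *  lands in a category instead of falling through every check. */
  static String classify(double util, double avg, double threshold) {
    if (util > avg + threshold)  return "overUtilized";
    if (util > avg)              return "aboveAvgUtilized";
    if (util >= avg - threshold) return "belowAvgUtilized"; // includes util == avg
    return "underUtilized";
  }
}
{code}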

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2491) TestBalancer can fail when datanode utilization and avgUtilization is exactly same.

2011-10-22 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2491:
--

  Resolution: Fixed
Target Version/s: 0.22.0, 0.24.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I just committed it. Thank you Uma.

> TestBalancer can fail when datanode utilization and avgUtilization is exactly 
> same.
> ---
>
> Key: HDFS-2491
> URL: https://issues.apache.org/jira/browse/HDFS-2491
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.22.0, 0.23.0, 0.24.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-2492-22Branch.patch, HDFS-2492.patch
>
>
> Stack Trace:
> junit.framework.AssertionFailedError: 127.0.0.1:60986is not an underUtilized 
> node: utilization=22.0 avgUtilization=22.0 threshold=10.0
> at 
> org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:1014)
> at 
> org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:953)
> at org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:1502)
> at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:247)
> at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.test(TestBalancer.java:234)
> at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.twoNodeTest(TestBalancer.java:312)
> at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.__CLR2_4_39j3j5b10ou(TestBalancer.java:328)
> at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0(TestBalancer.java:324)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2286) DataXceiverServer logs AsynchronousCloseException at shutdown

2011-10-17 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2286:
--

Fix Version/s: 0.22.0

> DataXceiverServer logs AsynchronousCloseException at shutdown
> -
>
> Key: HDFS-2286
> URL: https://issues.apache.org/jira/browse/HDFS-2286
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.22.0, 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Trivial
> Fix For: 0.22.0, 0.23.0
>
> Attachments: HDFS-2286.22branch.patch, hdfs-2286.txt
>
>
> During DN shutdown, the acceptor thread gets an AsynchronousCloseException, 
> and logs it at WARN level. This exception is expected, since another thread 
> is closing the listener socket, so we should just swallow it.
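
A sketch of the intended handling (not the committed patch): treat the exception as the normal shutdown signal and leave the accept loop quietly rather than logging a warning.

{code}
import java.io.IOException;
import java.nio.channels.AsynchronousCloseException;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

class AcceptorShutdownSketch {
  static void acceptLoop(ServerSocketChannel listener) throws IOException {
    while (true) {
      try {
        SocketChannel peer = listener.accept();
        // hand 'peer' off to a worker thread here
      } catch (AsynchronousCloseException expectedAtShutdown) {
        return; // another thread closed the listener during shutdown
      }
    }
  }
}
{code}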

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2286) DataXceiverServer logs AsynchronousCloseException at shutdown

2011-10-17 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2286:
--

Affects Version/s: 0.22.0

> DataXceiverServer logs AsynchronousCloseException at shutdown
> -
>
> Key: HDFS-2286
> URL: https://issues.apache.org/jira/browse/HDFS-2286
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.22.0, 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Trivial
> Fix For: 0.22.0, 0.23.0
>
> Attachments: HDFS-2286.22branch.patch, hdfs-2286.txt
>
>
> During DN shutdown, the acceptor thread gets an AsynchronousCloseException, 
> and logs it at WARN level. This exception is expected, since another thread 
> is closing the listener socket, so we should just swallow it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2012) Recurring failure of TestBalancer due to incorrect treatment of nodes whose utilization equals avgUtilization.

2011-10-14 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2012:
--

Fix Version/s: 0.24.0
  Summary: Recurring failure of TestBalancer due to incorrect treatment 
of nodes whose utilization equals avgUtilization.  (was: Recurring failure of 
TestBalancer on branch-0.22)

> Recurring failure of TestBalancer due to incorrect treatment of nodes whose 
> utilization equals avgUtilization.
> --
>
> Key: HDFS-2012
> URL: https://issues.apache.org/jira/browse/HDFS-2012
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer, test
>Affects Versions: 0.22.0
>Reporter: Aaron T. Myers
>Assignee: Uma Maheswara Rao G
>Priority: Blocker
> Fix For: 0.22.0, 0.24.0
>
> Attachments: HDFS-2012-0.22Branch.patch, HDFS-2012-Trunk.patch, 
> HDFS-2012.patch, TestBalancerLog.html
>
>
> This has been failing on Hudson for the last two builds and fails on my local 
> box as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2012) Recurring failure of TestBalancer due to incorrect treatment of nodes whose utilization equals avgUtilization.

2011-10-14 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2012:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I just committed this. Thank you Uma.

> Recurring failure of TestBalancer due to incorrect treatment of nodes whose 
> utilization equals avgUtilization.
> --
>
> Key: HDFS-2012
> URL: https://issues.apache.org/jira/browse/HDFS-2012
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer, test
>Affects Versions: 0.22.0
>Reporter: Aaron T. Myers
>Assignee: Uma Maheswara Rao G
>Priority: Blocker
> Fix For: 0.22.0, 0.24.0
>
> Attachments: HDFS-2012-0.22Branch.patch, HDFS-2012-Trunk.patch, 
> HDFS-2012.patch, TestBalancerLog.html
>
>
> This has been failing on Hudson for the last two builds and fails on my local 
> box as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1935) Build should not redownload ivy on every invocation

2011-10-05 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-1935:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Build should not redownload ivy on every invocation
> ---
>
> Key: HDFS-1935
> URL: https://issues.apache.org/jira/browse/HDFS-1935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Priority: Trivial
>  Labels: newbie
> Fix For: 0.22.0
>
> Attachments: diff, hdfs-1935.patch, hdfs-1935.txt
>
>
> Currently we re-download ivy every time we build. If the jar already exists, 
> we should skip this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2012) Recurring failure of TestBalancer on branch-0.22

2011-09-29 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2012:
--

Attachment: TestBalancerLog.html

Here is the history of 
[failures|https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Hdfs-22-branch/lastCompletedBuild/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancer/testBalancer0/history/]
The reason is an assert in the test:
{code}
junit.framework.AssertionFailedError: 127.0.0.1:53207is not an underUtilized 
node
at 
org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:1011)
at 
org.apache.hadoop.hdfs.server.balancer.Balancer.initNodes(Balancer.java:953)
at 
org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:1496)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:247)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.test(TestBalancer.java:234)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.oneNodeTest(TestBalancer.java:307)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.__CLR2_4_39j3j5b10o2(TestBalancer.java:327)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0(TestBalancer.java:324)
{code}
See the full run log attached.

> Recurring failure of TestBalancer on branch-0.22
> 
>
> Key: HDFS-2012
> URL: https://issues.apache.org/jira/browse/HDFS-2012
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer, test
>Affects Versions: 0.22.0
>Reporter: Aaron T. Myers
>Priority: Blocker
> Fix For: 0.22.0
>
> Attachments: TestBalancerLog.html
>
>
> This has been failing on Hudson for the last two builds and fails on my local 
> box as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2388) Remove dependency on different version of slf4j in avro

2011-09-29 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2388:
--

Attachment: ivyXml-hdf.patch

> Remove dependency on different version of slf4j in avro
> ---
>
> Key: HDFS-2388
> URL: https://issues.apache.org/jira/browse/HDFS-2388
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: 0.22.0
>
> Attachments: ivyXml-hdf.patch
>
>
> This is the HDFS part of HADOOP-7697.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2383) TestDfsOverAvroRpc is failing on 0.22

2011-09-28 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2383:
--

Attachment: removeAfroPRCTest.patch

This removes the Avro RPC test.

> TestDfsOverAvroRpc is failing on 0.22
> -
>
> Key: HDFS-2383
> URL: https://issues.apache.org/jira/browse/HDFS-2383
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Priority: Blocker
> Fix For: 0.22.0
>
> Attachments: removeAfroPRCTest.patch
>
>
> {{TestDfsOverAvroRpc.testWorkingDirectory()}} is failing. Possibly the result 
> of the Avro upgrade.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2377) hdfs script reporting Unrecognized option: -jvm

2011-09-27 Thread Konstantin Shvachko (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-2377:
--

 Priority: Blocker  (was: Major)
Fix Version/s: 0.22.0

I'll incorporate HDFS-1943 into 0.22.

> hdfs script reporting Unrecognized option: -jvm
> ---
>
> Key: HDFS-2377
> URL: https://issues.apache.org/jira/browse/HDFS-2377
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.22.0
>Reporter: Roman Shaposhnik
>Priority: Blocker
> Fix For: 0.22.0
>
>
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Unrecognized option: -jvm
> Could not create the Java virtual machine.
> The following chunk of code in hdfs script looks suspicious:
> {noformat}
>  if [[ $EUID -eq 0 ]]; then
> HADOOP_OPTS="$HADOOP_OPTS -jvm server $HADOOP_DATANODE_OPTS"
>   else
> HADOOP_OPTS="$HADOOP_OPTS -server $HADOOP_DATANODE_OPTS"
>   fi
> {noformat}
> I'm really not sure what was meant by it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira