Re: Please join me in welcoming the following people as committers to the Hadoop project
Congrats all! I hope I'll be a committer some day as well. :)

On Thu, Jan 6, 2011 at 10:23 AM, li ping wrote:
> congratulations
>
> On Thu, Jan 6, 2011 at 11:40 AM, Ian Holsman wrote:
> >
> > On behalf of the Apache Hadoop PMC, I would like to extend a warm welcome
> > to the following people, who have all chosen to accept the role of
> > committers on Hadoop.
> >
> > In no alphabetical order:
> >
> > - Aaron Kimball
> > - Allen Wittenauer
> > - Amar Kamat
> > - Dmytro Molkov
> > - Jitendra Pandey
> > - Kan Zhang
> > - Ravi Gummadi
> > - Sreekanth Ramakrishna
> > - Todd Lipcon
> >
> > I appreciate all the hard work these people have put into the project so
> > far, and look forward to the contributions they will make to Hadoop in
> > the future.
> >
> > Well done guys!
> >
> > --Ian
>
> --
> -李平

--
Deepak Sharma
http://www.linkedin.com/in/rikindia
Re: Please join me in welcoming the following people as committers to the Hadoop project
Congratulations

On Thu, Jan 6, 2011 at 11:40 AM, Ian Holsman wrote:
> On behalf of the Apache Hadoop PMC, I would like to extend a warm welcome
> to the following people, who have all chosen to accept the role of
> committers on Hadoop.
>
> In no alphabetical order:
>
> - Aaron Kimball
> - Allen Wittenauer
> - Amar Kamat
> - Dmytro Molkov
> - Jitendra Pandey
> - Kan Zhang
> - Ravi Gummadi
> - Sreekanth Ramakrishna
> - Todd Lipcon
>
> I appreciate all the hard work these people have put into the project so
> far, and look forward to the contributions they will make to Hadoop in
> the future.
>
> Well done guys!
>
> --Ian

--
-李平
Re: Please join me in welcoming the following people as committers to the Hadoop project
Congratulations! Great to see the community growing once again.

On Wed, Jan 5, 2011 at 8:34 PM, Jay Booth wrote:
> Congrats, all!
>
> On Wed, Jan 5, 2011 at 11:09 PM, Stack wrote:
> > Congrats lads.
> > St.Ack
> >
> > On Wed, Jan 5, 2011 at 7:40 PM, Ian Holsman wrote:
> > > On behalf of the Apache Hadoop PMC, I would like to extend a warm
> > > welcome to the following people, who have all chosen to accept the
> > > role of committers on Hadoop.
> > >
> > > In no alphabetical order:
> > >
> > > - Aaron Kimball
> > > - Allen Wittenauer
> > > - Amar Kamat
> > > - Dmytro Molkov
> > > - Jitendra Pandey
> > > - Kan Zhang
> > > - Ravi Gummadi
> > > - Sreekanth Ramakrishna
> > > - Todd Lipcon
> > >
> > > I appreciate all the hard work these people have put into the project
> > > so far, and look forward to the contributions they will make to Hadoop
> > > in the future.
> > >
> > > Well done guys!
> > >
> > > --Ian
Re: Please join me in welcoming the following people as committers to the Hadoop project
Congrats, all!

On Wed, Jan 5, 2011 at 11:09 PM, Stack wrote:
> Congrats lads.
> St.Ack
>
> On Wed, Jan 5, 2011 at 7:40 PM, Ian Holsman wrote:
> > On behalf of the Apache Hadoop PMC, I would like to extend a warm welcome
> > to the following people, who have all chosen to accept the role of
> > committers on Hadoop.
> >
> > In no alphabetical order:
> >
> > - Aaron Kimball
> > - Allen Wittenauer
> > - Amar Kamat
> > - Dmytro Molkov
> > - Jitendra Pandey
> > - Kan Zhang
> > - Ravi Gummadi
> > - Sreekanth Ramakrishna
> > - Todd Lipcon
> >
> > I appreciate all the hard work these people have put into the project so
> > far, and look forward to the contributions they will make to Hadoop in
> > the future.
> >
> > Well done guys!
> >
> > --Ian
Re: Please join me in welcoming the following people as committers to the Hadoop project
Congrats lads.
St.Ack

On Wed, Jan 5, 2011 at 7:40 PM, Ian Holsman wrote:
> On behalf of the Apache Hadoop PMC, I would like to extend a warm welcome
> to the following people, who have all chosen to accept the role of
> committers on Hadoop.
>
> In no alphabetical order:
>
> - Aaron Kimball
> - Allen Wittenauer
> - Amar Kamat
> - Dmytro Molkov
> - Jitendra Pandey
> - Kan Zhang
> - Ravi Gummadi
> - Sreekanth Ramakrishna
> - Todd Lipcon
>
> I appreciate all the hard work these people have put into the project so
> far, and look forward to the contributions they will make to Hadoop in
> the future.
>
> Well done guys!
>
> --Ian
Please join me in welcoming the following people as committers to the Hadoop project
On behalf of the Apache Hadoop PMC, I would like to extend a warm welcome to the following people, who have all chosen to accept the role of committers on Hadoop.

In no alphabetical order:

- Aaron Kimball
- Allen Wittenauer
- Amar Kamat
- Dmytro Molkov
- Jitendra Pandey
- Kan Zhang
- Ravi Gummadi
- Sreekanth Ramakrishna
- Todd Lipcon

I appreciate all the hard work these people have put into the project so far, and look forward to the contributions they will make to Hadoop in the future.

Well done guys!

--Ian
[jira] Created: (HDFS-1571) Improve performance of audit log
Improve performance of audit log
--------------------------------

                 Key: HDFS-1571
                 URL: https://issues.apache.org/jira/browse/HDFS-1571
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: Dmytro Molkov


We currently use the Java Formatter for audit log formatting. I've done some experiments comparing it against simply creating a string of the same format using StringBuilder, and it turns out that using StringBuilder makes the audit log function 5-10x faster. Since the audit log is a useful feature and the namenode logs a lot under heavy load, this performance improvement is worth it.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
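A minimal sketch of the two approaches being compared. The field names and format below are illustrative only, not the actual FSNamesystem audit log pattern; the point is that `String.format` re-parses its pattern (and boxes varargs) on every call, while `StringBuilder` does not:

```java
public class AuditLogFormatDemo {
    // Formatter-based: the pattern string is parsed on every invocation.
    static String withFormatter(String ugi, String ip, String cmd, String src) {
        return String.format("ugi=%s\tip=%s\tcmd=%s\tsrc=%s", ugi, ip, cmd, src);
    }

    // StringBuilder-based: same output, no pattern parsing per call.
    static String withStringBuilder(String ugi, String ip, String cmd, String src) {
        return new StringBuilder(64)
            .append("ugi=").append(ugi)
            .append("\tip=").append(ip)
            .append("\tcmd=").append(cmd)
            .append("\tsrc=").append(src)
            .toString();
    }

    public static void main(String[] args) {
        String a = withFormatter("hdfs", "10.0.0.1", "open", "/tmp/f");
        String b = withStringBuilder("hdfs", "10.0.0.1", "open", "/tmp/f");
        System.out.println(a.equals(b)); // prints "true": identical output
    }
}
```

Since both paths produce byte-identical lines, the swap is a drop-in change for readers of the log.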
[jira] Created: (HDFS-1570) Use readlink to get absolute paths in the scripts
Use readlink to get absolute paths in the scripts
-------------------------------------------------

                 Key: HDFS-1570
                 URL: https://issues.apache.org/jira/browse/HDFS-1570
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: scripts
            Reporter: Eli Collins
            Assignee: Eli Collins
            Priority: Minor
             Fix For: 0.22.0, 0.23.0


MR side of HADOOP-7089.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Created: (HDFS-1569) Use readlink to get absolute paths in the scripts
Use readlink to get absolute paths in the scripts
-------------------------------------------------

                 Key: HDFS-1569
                 URL: https://issues.apache.org/jira/browse/HDFS-1569
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: scripts
            Reporter: Eli Collins
            Assignee: Eli Collins
            Priority: Minor
             Fix For: 0.22.0, 0.23.0


HDFS side of HADOOP-7089.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
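For illustration, a sketch of the general technique (not the actual HADOOP-7089 patch): a launcher script resolving its own absolute location, even when invoked through a symlink. It assumes a `readlink` that supports `-f` (GNU coreutils), falling back to the unresolved path otherwise:

```shell
#!/usr/bin/env bash
# Resolve this script's absolute path, following symlinks where possible.
this="${BASH_SOURCE-$0}"
# 'readlink -f' canonicalizes the whole path; fall back if unsupported.
this=$(readlink -f -- "$this" 2>/dev/null || echo "$this")
# Normalize the directory with cd -P / pwd -P so no relative parts remain.
bin=$(cd -P -- "$(dirname -- "$this")" && pwd -P)
echo "Running from: $bin"
```

Hadoop's scripts derive config and library locations relative to `$bin`, which is why a symlinked invocation needs the canonicalized path.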
[jira] Created: (HDFS-1568) Improve DataXceiver error logging
Improve DataXceiver error logging
---------------------------------

                 Key: HDFS-1568
                 URL: https://issues.apache.org/jira/browse/HDFS-1568
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: data-node
    Affects Versions: 0.23.0
            Reporter: Todd Lipcon
            Assignee: Todd Lipcon
            Priority: Minor


In supporting customers we often see things like SocketTimeoutExceptions or EOFExceptions coming from DataXceiver, but the logging isn't very good. For example, if we get an IOE while setting up a connection to the downstream mirror in writeBlock, the IP of the downstream mirror isn't logged on the DN side.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
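A sketch of the kind of improvement being asked for. The `mirrorError` helper and its arguments are hypothetical, not the actual DataXceiver code; the point is simply that the error message should name the downstream mirror address and the block being transferred, so the failing hop is identifiable from the DN log:

```java
import java.io.IOException;
import java.net.SocketTimeoutException;

public class DataXceiverLogDemo {
    // Hypothetical helper: build an error message that names both the block
    // and the downstream mirror, rather than logging the exception alone.
    static String mirrorError(String block, String mirrorAddr, IOException e) {
        return "Exception transferring " + block
            + " to downstream mirror " + mirrorAddr + ": " + e;
    }

    public static void main(String[] args) {
        IOException e = new SocketTimeoutException("connect timed out");
        // With the address in the message, a support engineer can tell which
        // node in the write pipeline failed without cross-referencing logs.
        System.err.println(mirrorError("blk_123", "10.0.0.7:50010", e));
    }
}
```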
[jira] Resolved: (HDFS-523) Create performance/scalability tests for append feature
[ https://issues.apache.org/jira/browse/HDFS-523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Boudnik resolved HDFS-523.
-------------------------------------
    Resolution: Later

This might (or might not) be addressed at a later time, subject to resource availability.

> Create performance/scalability tests for append feature
> -------------------------------------------------------
>
>                 Key: HDFS-523
>                 URL: https://issues.apache.org/jira/browse/HDFS-523
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: test
>            Reporter: Konstantin Boudnik
>             Fix For: 0.22.0
>
>
> This is a placeholder for a test suite, which isn't likely to be addressed
> during the feature implementation time.
> Obviously, scalability is very important for the new append feature.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
Hadoop-Hdfs-trunk - Build # 543 - Still Failing
See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/543/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 9 lines...]
	at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:479)
	at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:411)
	at hudson.model.Run.run(Run.java:1324)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:139)
Caused by: java.io.IOException: Remote call on hadoop8 failed
	at hudson.remoting.Channel.call(Channel.java:639)
	at hudson.FilePath.act(FilePath.java:742)
	... 10 more
Caused by: java.lang.NoClassDefFoundError: java/net/SocketTimeoutException
	at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:380)
	at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:275)
	at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:263)
	at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.doPropfind(DAVConnection.java:126)
	at org.tmatesoft.svn.core.internal.io.dav.DAVUtil.getProperties(DAVUtil.java:73)
	at org.tmatesoft.svn.core.internal.io.dav.DAVUtil.getResourceProperties(DAVUtil.java:79)
	at org.tmatesoft.svn.core.internal.io.dav.DAVUtil.getPropertyValue(DAVUtil.java:93)
	at org.tmatesoft.svn.core.internal.io.dav.DAVUtil.getBaselineProperties(DAVUtil.java:245)
	at org.tmatesoft.svn.core.internal.io.dav.DAVUtil.getBaselineInfo(DAVUtil.java:184)
	at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getLatestRevision(DAVRepository.java:182)
	at org.tmatesoft.svn.core.wc.SVNBasicClient.getRevisionNumber(SVNBasicClient.java:482)
	at org.tmatesoft.svn.core.wc.SVNBasicClient.getLocations(SVNBasicClient.java:873)
	at org.tmatesoft.svn.core.wc.SVNBasicClient.createRepository(SVNBasicClient.java:534)
	at org.tmatesoft.svn.core.wc.SVNUpdateClient.doCheckout(SVNUpdateClient.java:901)
	at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:678)
	at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:596)
	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:1899)
	at hudson.remoting.UserRequest.perform(UserRequest.java:114)
	at hudson.remoting.UserRequest.perform(UserRequest.java:48)
	at hudson.remoting.Request$2.run(Request.java:270)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.ClassNotFoundException: Classloading from system classloader disabled
	at hudson.remoting.RemoteClassLoader$ClassLoaderProxy.fetch2(RemoteClassLoader.java:399)
	at sun.reflect.GeneratedMethodAccessor103.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:274)
	at hudson.remoting.Request$2.run(Request.java:270)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

###################################################################################
############################## FAILED TESTS (if any) ##############################
###################################################################################
No tests ran.
[jira] Resolved: (HDFS-810) Number of Under-Replicated Blocks information posted on WebUI is inconsistent with CLI Fsck report.
[ https://issues.apache.org/jira/browse/HDFS-810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko resolved HDFS-810.
--------------------------------------
    Resolution: Cannot Reproduce

Please feel free to reopen, provided there are more details on how the inconsistency can be observed.

> Number of Under-Replicated Blocks information posted on WebUI is
> inconsistent with CLI Fsck report.
> ----------------------------------------------------------------
>
>                 Key: HDFS-810
>                 URL: https://issues.apache.org/jira/browse/HDFS-810
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.22.0
>            Reporter: Ravi Phulari
>             Fix For: 0.22.0
>
>
> The number of Under-Replicated Blocks shown on the WebUI is inconsistent
> with the number of under-replicated blocks reported by Fsck.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Resolved: (HDFS-831) Multiple slf4j bindings cause warnings in HDFS.
[ https://issues.apache.org/jira/browse/HDFS-831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko resolved HDFS-831.
--------------------------------------
       Resolution: Duplicate
    Fix Version/s:     (was: 0.22.0)
                   0.21.0

Fixed in HADOOP-6395.

> Multiple slf4j bindings cause warnings in HDFS.
> -----------------------------------------------
>
>                 Key: HDFS-831
>                 URL: https://issues.apache.org/jira/browse/HDFS-831
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: build
>    Affects Versions: 0.22.0
>            Reporter: Ravi Phulari
>             Fix For: 0.21.0
>
>
> Multiple slf4j bindings warnings are emitted in the log while running
> run-test-hdfs. To fix this we need to remove the unused *slf4j* jar.
>
> ----- Standard Error -----
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/Users/rphulari/.ivy2/cache/org.slf4j/slf4j-simple/jars/slf4j-simple-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/Users/rphulari/.ivy2/cache/org.slf4j/slf4j-log4j12/jars/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> 1 [main] INFO org.mortbay.log - Logging to org.slf4j.impl.SimpleLogger(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 128 [main] INFO org.mortbay.log - jetty-6.1.14
> 1097 [main] INFO org.mortbay.log - Started selectchannelconnec...@localhost:58380
> 1521 [main] INFO org.mortbay.log - jetty-6.1.14
> 1912 [main] INFO org.mortbay.log - Started selectchannelconnec...@localhost:58383
> 2130 [main] INFO org.mortbay.log - jetty-6.1.14
> 2373 [main] INFO org.mortbay.log - Started selectchannelconnec...@localhost:58386
> 2582 [main] INFO org.mortbay.log - jetty-6.1.14

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.