[jira] [Created] (HADOOP-7390) VersionInfo not generated properly in git after unsplit

2011-06-14 Thread Thomas Graves (JIRA)
VersionInfo not generated properly in git after unsplit
---

 Key: HADOOP-7390
 URL: https://issues.apache.org/jira/browse/HADOOP-7390
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Thomas Graves
Priority: Minor


The version information generated during the build of common, when building from 
git, has revision and branch set to Unknown. I believe this started after the unsplit:

@HadoopVersionAnnotation(version="0.22.0-SNAPSHOT", revision="Unknown",
 branch="Unknown", user="tgraves", date="Tue Jun 14 13:39:10 UTC 2011",
 url="file:///home/tgraves/git/hadoop-common/common",
 srcChecksum="0f78ea668971fe51e7ebf4f97f84eed2")

The ./src/saveVersion.sh script, which generates the package-info.java file with 
the version info, looks for the presence of a .git directory, and after the unsplit 
that directory is now a level up instead of in the common directory.
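
For context, those annotation values are what org.apache.hadoop.util.VersionInfo
reports at runtime; a minimal sketch (assuming the usual static getters) that prints
what the build recorded:

import org.apache.hadoop.util.VersionInfo;

public class PrintBuildVersion {
    public static void main(String[] args) {
        // With the broken .git detection, revision and branch print as "Unknown"
        // for builds made from a git checkout.
        System.out.println("version:  " + VersionInfo.getVersion());
        System.out.println("revision: " + VersionInfo.getRevision());
        System.out.println("branch:   " + VersionInfo.getBranch());
        System.out.println("built by " + VersionInfo.getUser()
                + " on " + VersionInfo.getDate());
    }
}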

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-7391) Copy the interface classification documentation of HADOOP-5073 into javadoc

2011-06-14 Thread Sanjay Radia (JIRA)
Copy the interface classification documentation of HADOOP-5073 into javadoc


 Key: HADOOP-7391
 URL: https://issues.apache.org/jira/browse/HADOOP-7391
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Fix For: 0.22.0


The documentation for interface classification in JIRA HADOOP-5073 was not 
copied into the Javadoc of the classification.
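
For reference, the classification is expressed through the annotations in
org.apache.hadoop.classification; a hedged sketch of how a class is typically
marked (the class name below is made up), which is where the missing Javadoc
would be read from:

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Illustrative only: the HADOOP-5073 text explains what these markers mean,
// and that explanation belongs in the Javadoc of the annotations themselves.
@InterfaceAudience.Public      // usable by any external application
@InterfaceStability.Evolving   // may change between minor releases
public class ExampleClassifiedApi {
    // API methods would go here.
}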

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-7393) PathData saves original string value, inviting failure when CWD changes

2011-06-14 Thread Matt Foley (JIRA)
PathData saves original string value, inviting failure when CWD changes
---

 Key: HADOOP-7393
 URL: https://issues.apache.org/jira/browse/HADOOP-7393
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0
Reporter: Matt Foley


PathData#string stores the pathstring originally used to construct the Path, 
and returns it from various methods, apparently in an attempt to improve the 
user experience for the shell.

However, the current working directory may change, and if so this string value 
becomes meaningless and/or incorrect in context.
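
A hypothetical, self-contained sketch of the hazard (plain java.nio here, not
Hadoop's actual PathData code): a saved relative string resolves differently once
the working directory changes, while a path qualified at construction time stays
correct.

import java.nio.file.Path;
import java.nio.file.Paths;

public class StaleRelativePathDemo {
    public static void main(String[] args) {
        Path cwdAtConstruction = Paths.get("/user/someuser");  // hypothetical CWD
        String original = "data/part-0";                       // raw string as typed

        // Qualified eagerly against the CWD in effect at construction time.
        Path qualified = cwdAtConstruction.resolve(original).normalize();

        // Later the working directory changes (e.g. the shell does a cd).
        Path cwdNow = Paths.get("/tmp");

        // Re-resolving the saved raw string now points somewhere else entirely.
        Path reResolved = cwdNow.resolve(original).normalize();

        System.out.println("qualified at construction: " + qualified);   // /user/someuser/data/part-0
        System.out.println("raw string resolved later:  " + reResolved); // /tmp/data/part-0
    }
}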


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: help me to solve Exception

2011-06-14 Thread Niels Basjes
 11/06/04 01:47:09 WARN hdfs.DFSClient: DataStreamer Exception: 
 org.apache.hadoop.ipc.RemoteException: java.io.IOException: File 
 /user/eng-zinab/inn/In (copy) could only be replicated to 0 nodes, instead of 
 1

Do you have a datanode running?

-- 
Best regards / Met vriendelijke groeten,

Niels Basjes


Re: help me to solve Exception

2011-06-14 Thread Zinab Ahmed Mahmoud Elgendy
yes


 11/06/04 01:47:09 WARN hdfs.DFSClient: DataStreamer Exception: 
 org.apache.hadoop.ipc.RemoteException: java.io.IOException: File 
 /user/eng-zinab/inn/In (copy) could only be replicated to 0 nodes, instead of 
 1

Do you have a datanode running?

-- 
Best regards / Met vriendelijke groeten,

Niels Basjes

Re: Exception in thread main java.lang.NoClassDefFoundError: org/apache/fop/messaging/MessageHandler

2011-06-14 Thread Aaron T. Myers
Hey Thomas,

I believe you're getting this error because you're setting forrest.home to
use Forrest 0.9. For some reason I'm not quite sure of (perhaps a Forrest
bug?) trying to build the docs with 0.9 gives me the same error.

You can grab Forrest 0.8 from here:
http://archive.apache.org/dist/forrest/0.8/

--
Aaron T. Myers
Software Engineer, Cloudera



On Mon, Jun 13, 2011 at 10:26 PM, Thomas Anderson
t.dt.aander...@gmail.com wrote:

 When compiling source against the latest trunk of the common repository,
 it throws a FOP exception. I can't find where to add this lib, e.g. fop
 v1.0.  Does any config file need to be altered so that the compilation
 would work?

 Thanks.

[exec] Exception in thread main java.lang.NoClassDefFoundError: org/apache/fop/messaging/MessageHandler
 [exec]     at org.apache.cocoon.serialization.FOPSerializer.configure(FOPSerializer.java:122)
 [exec]     at org.apache.avalon.framework.container.ContainerUtil.configure(ContainerUtil.java:201)
 [exec]     at org.apache.avalon.excalibur.component.DefaultComponentFactory.newInstance(DefaultComponentFactory.java:289)
 [exec]     at org.apache.avalon.excalibur.pool.InstrumentedResourceLimitingPool.newPoolable(InstrumentedResourceLimitingPool.java:655)
 [exec]     at org.apache.avalon.excalibur.pool.InstrumentedResourceLimitingPool.get(InstrumentedResourceLimitingPool.java:371)
 [exec]     at org.apache.avalon.excalibur.component.PoolableComponentHandler.doGet(PoolableComponentHandler.java:198)
 [exec]     at org.apache.avalon.excalibur.component.ComponentHandler.get(ComponentHandler.java:381)
 [exec]     at org.apache.avalon.excalibur.component.ExcaliburComponentSelector.select(ExcaliburComponentSelector.java:215)
 [exec]     at org.apache.cocoon.components.ExtendedComponentSelector.select(ExtendedComponentSelector.java:268)
 [exec]     at org.apache.cocoon.components.pipeline.AbstractProcessingPipeline.setSerializer(AbstractProcessingPipeline.java:311)
 [exec]     at org.apache.cocoon.components.pipeline.impl.AbstractCachingProcessingPipeline.setSerializer(AbstractCachingProcessingPipeline.java:171)
 [exec]     at org.apache.cocoon.components.treeprocessor.sitemap.SerializeNode.invoke(SerializeNode.java:120)
 [exec]     at org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:69)
 [exec]     at org.apache.cocoon.components.treeprocessor.sitemap.SelectNode.invoke(SelectNode.java:103)
 [exec]     at org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:47)
 [exec]     at org.apache.cocoon.components.treeprocessor.sitemap.PreparableMatchNode.invoke(PreparableMatchNode.java:131)
 [exec]     at org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:69)
 [exec]     at org.apache.cocoon.components.treeprocessor.sitemap.PipelineNode.invoke(PipelineNode.java:143)
 [exec]     at org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:69)
 [exec]     at org.apache.cocoon.components.treeprocessor.sitemap.PipelinesNode.invoke(PipelinesNode.java:93)
 [exec]     at org.apache.cocoon.components.treeprocessor.ConcreteTreeProcessor.process(ConcreteTreeProcessor.java:235)
 [exec]     at org.apache.cocoon.components.treeprocessor.ConcreteTreeProcessor.process(ConcreteTreeProcessor.java:177)
 [exec]     at org.apache.cocoon.components.treeprocessor.TreeProcessor.process(TreeProcessor.java:254)
 [exec]     at org.apache.cocoon.components.treeprocessor.sitemap.MountNode.invoke(MountNode.java:118)
 [exec]     at org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:69)
 [exec]     at org.apache.cocoon.components.treeprocessor.sitemap.SelectNode.invoke(SelectNode.java:98)
 [exec]     at org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:69)
 [exec]     at org.apache.cocoon.components.treeprocessor.sitemap.PipelineNode.invoke(PipelineNode.java:143)
 [exec]     at org.apache.cocoon.components.treeprocessor.AbstractParentProcessingNode.invokeNodes(AbstractParentProcessingNode.java:69)
 [exec]     at org.apache.cocoon.components.treeprocessor.sitemap.PipelinesNode.invoke(PipelinesNode.java:93)
 [exec]     at org.apache.cocoon.components.treeprocessor.ConcreteTreeProcessor.process(ConcreteTreeProcessor.java:235)
 [exec]     at org.apache.cocoon.components.treeprocessor.ConcreteTreeProcessor.process(ConcreteTreeProcessor.java:177)
 [exec]     at org.apache.cocoon.components.treeprocessor.TreeProcessor.process(TreeProcessor.java:254)
 [exec]     at 

Re: Exception in thread main java.lang.NoClassDefFoundError: org/apache/fop/messaging/MessageHandler

2011-06-14 Thread Aaron T. Myers
Also, I've filed https://issues.apache.org/jira/browse/HADOOP-7394 to track
this issue.

--
Aaron T. Myers
Software Engineer, Cloudera



On Tue, Jun 14, 2011 at 7:06 PM, Aaron T. Myers a...@cloudera.com wrote:

 Hey Thomas,

 I believe you're getting this error because you're setting forrest.home to
 use Forrest 0.9. For some reason I'm not quite sure of (perhaps a Forrest
 bug?) trying to build the docs with 0.9 gives me the same error.

 You can grab Forrest 0.8 from here:
 http://archive.apache.org/dist/forrest/0.8/

 --
 Aaron T. Myers
 Software Engineer, Cloudera



 On Mon, Jun 13, 2011 at 10:26 PM, Thomas Anderson 
 t.dt.aander...@gmail.com wrote:

 When compiling source against the latest trunk of the common repository,
 it throws a FOP exception. I can't find where to add this lib, e.g. fop
 v1.0.  Does any config file need to be altered so that the compilation
 would work?

 Thanks.


Re: help me to solve Exception

2011-06-14 Thread Uma Maheswara Rao G 72686
Hi Zinab,

1) First, check whether all the DNs are actually running.
   The NN takes some time (the heartbeat expiry period) to detect a DN 
shutdown, so the UI may still show them as live nodes during that window.

2) When the NN chooses DNs for a write, it checks whether each node is a good 
target. Here it checks multiple conditions (see the sketch below):
 * the node does not have enough free space,
 * the node is too busy (too much traffic),
 * the DN is decommissioned,
 * the rack already has too many chosen nodes.

   If all your target nodes are in one of these states, there is no good node 
left to write to, and the exception below can occur.
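
A simplified, illustrative sketch of those checks (not the actual NameNode block
placement code; the names and thresholds below are made up):

// Illustrative only: roughly the kind of per-candidate checks described above.
class GoodTargetCheck {
    static boolean isGoodTarget(DatanodeStats node, long blockSize, int maxChosenPerRack) {
        if (!node.isAlive() || node.isDecommissioned()) {
            return false;   // node is down or being decommissioned
        }
        if (node.remainingBytes() < blockSize) {
            return false;   // not enough free space for the block
        }
        if (node.activeTransfers() > 2 * node.averageClusterLoad()) {
            return false;   // node is too busy (too much traffic)
        }
        if (node.chosenOnSameRack() >= maxChosenPerRack) {
            return false;   // rack already has too many chosen targets
        }
        return true;
    }
}

// Hypothetical read-only view of a DataNode, just enough for the sketch above.
interface DatanodeStats {
    boolean isAlive();
    boolean isDecommissioned();
    long remainingBytes();
    int activeTransfers();
    double averageClusterLoad();
    int chosenOnSameRack();
}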


Regards,
Uma Mahesh


- Original Message -
From: Zinab Ahmed Mahmoud Elgendy zinabelge...@yahoo.com
Date: Wednesday, June 15, 2011 3:43 am
Subject: help me to solve Exception
To: common-dev@hadoop.apache.org common-dev@hadoop.apache.org

 Can anyone help me find a solution to this exception?
 
 11/06/04 01:47:09 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/eng-zinab/inn/In (copy) could only be replicated to 0 nodes, instead of 1
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     at java.lang.reflect.Method.invoke(Method.java:597)
     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:396)
     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

     at org.apache.hadoop.ipc.Client.call(Client.java:740)
     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
     at $Proxy0.addBlock(Unknown Source)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     at java.lang.reflect.Method.invoke(Method.java:597)
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
     at $Proxy0.addBlock(Unknown Source)
     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

 11/06/04 01:47:09 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
 11/06/04 01:47:09 WARN hdfs.DFSClient: Could not get block locations. Source file /user/eng-zinab/inn/In (copy) - Aborting...
 copyFromLocal: java.io.IOException: File /user/eng-zinab/inn/In (copy) could only be replicated to 0 nodes, instead of 1
 11/06/04 01:47:09 ERROR hdfs.DFSClient: Exception closing file /user/eng-zinab/inn/In (copy) : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/eng-zinab/inn/In (copy) could only be replicated to 0 nodes, instead of 1
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     at java.lang.reflect.Method.invoke(Method.java:597)
     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
     

Re: Problems about the job counters

2011-06-14 Thread Harsh J
Hello,

When you have a Reduce phase, the mapper needs to (sort and)
materialize KVs to local files so that reducers can fetch them. This is
where the FILE_BYTES_* counters come from. Similarly, the Reducer
fetches the map outputs, stores them on local disk and merge-sorts them
again, so the counters appear for the reduce phase as well.

In a map-only job (for example, one set up like the sketch below), you
should not generally see any FILE_BYTES_* counters.
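
For example, a minimal sketch of a map-only variant of wordcount;
setNumReduceTasks(0) is what removes the reduce phase and, with it, the local
spill/merge traffic behind the FILE_BYTES_* counters (class and job names below
are made up):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapOnlyWordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);   // written straight to HDFS: no spill, no merge
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "map-only wordcount");
        job.setJarByClass(MapOnlyWordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setNumReduceTasks(0);                    // map-only: no FILE_BYTES_* expected
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}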

On Wed, Jun 15, 2011 at 9:32 AM, hailong.yang1115
hailong.yang1...@gmail.com wrote:

 Dear all,

 I am trying the built-in wordcount example with nearly 15GB of input. When 
 the Hadoop job finished, I got the following counters.


 Counter                      Map               Reduce            Total
 Job Counters
   Launched reduce tasks      0                 0                 1
   Rack-local map tasks       0                 0                 35
   Launched map tasks         0                 0                 2,318
   Data-local map tasks       0                 0                 2,283
 FileSystemCounters
   FILE_BYTES_READ            22,863,580,656    17,654,943,341    40,518,523,997
   HDFS_BYTES_READ            154,400,997,459   0                 154,400,997,459
   FILE_BYTES_WRITTEN         33,490,829,403    17,654,943,341    51,145,772,744
   HDFS_BYTES_WRITTEN         0                 2,747,356,704     2,747,356,704


 My question is what does the FILE_BYTES_READ counter mean? And what is the 
 difference between FILE_BYTES_READ and HDFS_BYTES_READ? In my opinion, all 
 the input is located in HDFS, so where does FILE_BYTES_READ come from during 
 the map phase?


 Any help will be appreciated!

 Hailong

 2011-06-15



 ***
 * Hailong Yang, PhD. Candidate
 * Sino-German Joint Software Institute,
 * School of Computer Science & Engineering, Beihang University
 * Phone: (86-010)82315908
 * Email: hailong.yang1...@gmail.com
 * Address: G413, New Main Building in Beihang University,
 *              No.37 XueYuan Road,HaiDian District,
 *              Beijing,P.R.China,100191
 ***




-- 
Harsh J