Error when appending a file on HDFS
Hi,

I used hdfs.ext.avro.AvroWriter to write an Avro file on HDFS, as described at http://hdfscli.readthedocs.io/en/latest/api.html#hdfs.ext.avro.AvroWriter:

with AvroWriter(client, hdfs_file, append=True, codec="snappy") as writer:
    writer.write(data)

When I call the above in a loop, I get:

java.lang.Exception: Shell Process Exception: Python HdfsError raised
Traceback (most recent call last):
  File "Hdfsfile.py", line 49, in process
    writer.write(data)
  File "/home/ram/lib/python2.7/site-packages/hdfs/ext/avro/__init__.py", line 277, in __exit__
    self._fo.__exit__(*exc_info)
  File "/home/ram/lib/python2.7/site-packages/hdfs/util.py", line 99, in __exit__
    raise self._err  # pylint: disable=raising-bad-type
HdfsError: Failed to APPEND_FILE /user/ram/level for DFSClient_NONMAPREDUCE_-1757292245_79 on 172.26.83.17 because this file lease is currently owned by DFSClient_NONMAPREDUCE_-668446345_78 on 172.26.83.17
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2979)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2726)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:3033)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:3002)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:739)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:429)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2206)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2202)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2200)

It seems the file lease is still held by the previous append. Is there a way to check the state of the file each time before the append is called?

Thanks,
Ram
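A minimal sketch of one workaround, assuming the hdfscli client from the docs linked above: catch the HdfsError and retry after a short delay, giving the NameNode time to recover the lease left by the previous append. The NameNode URL, user, and retry parameters here are hypothetical.

import time

from hdfs import InsecureClient
from hdfs.ext.avro import AvroWriter
from hdfs.util import HdfsError

client = InsecureClient('http://namenode:50070', user='ram')  # hypothetical address

def append_records(hdfs_file, records, retries=3, delay=2):
    """Append records, retrying while the file lease is still held."""
    for attempt in range(retries):
        try:
            with AvroWriter(client, hdfs_file, append=True, codec='snappy') as writer:
                for record in records:
                    writer.write(record)
            return
        except HdfsError as err:
            # APPEND_FILE fails while the previous append's lease is open;
            # wait and retry instead of failing immediately.
            if 'lease' in str(err) and attempt < retries - 1:
                time.sleep(delay)
            else:
                raise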
hadoop mapreduce job rest api
Hi,

I want to submit a MapReduce job using a REST API and then poll the status of the job at a fixed interval. Is there a way to do this?

Thanks
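One option is the YARN ResourceManager REST API: POST to /ws/v1/cluster/apps/new-application to obtain an application id, POST the application submission context to /ws/v1/cluster/apps, then poll the Cluster Application State API until the job ends. A minimal polling sketch follows; the ResourceManager address, application id, and interval are hypothetical.

import time

import requests

RM = 'http://resourcemanager:8088'  # hypothetical ResourceManager address

def wait_for_app(app_id, interval=10):
    """Poll the application state every `interval` seconds until it ends."""
    url = '{}/ws/v1/cluster/apps/{}/state'.format(RM, app_id)
    while True:
        state = requests.get(url).json()['state']
        if state in ('FINISHED', 'FAILED', 'KILLED'):
            return state
        time.sleep(interval)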
Decommission datanode
Hi,

I don't have much data, but it still took around 40 minutes to decommission a datanode. How long should decommissioning take? Is there any way to speed up the process?

Thanks.
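Decommissioning time is dominated by how fast the NameNode schedules re-replication of the blocks held on the departing node. A sketch of hdfs-site.xml settings that are commonly tuned to speed this up on Hadoop 2.x; the values below are illustrative, not recommendations, and defaults vary by version.

<!-- hdfs-site.xml: settings commonly tuned to speed up decommissioning -->
<configuration>
  <property>
    <!-- Blocks scheduled for re-replication per heartbeat interval -->
    <name>dfs.namenode.replication.work.multiplier.per.iteration</name>
    <value>10</value>
  </property>
  <property>
    <!-- Maximum concurrent replication streams per datanode -->
    <name>dfs.namenode.replication.max-streams</name>
    <value>10</value>
  </property>
  <property>
    <!-- Seconds between checks of decommissioning progress -->
    <name>dfs.namenode.decommission.interval</name>
    <value>30</value>
  </property>
</configuration>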
check decommission status
Hi,

Is there a Java API to get the decommission status of a particular datanode?

Thanks.
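A minimal sketch using the HDFS client API, assuming a Hadoop 2.x client on the classpath: DistributedFileSystem.getDataNodeStats returns DatanodeInfo objects whose isDecommissionInProgress and isDecommissioned flags carry the status. The NameNode URI below is hypothetical.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DecommissionStatus {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem)
        FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
    // List every datanode along with its decommission state; filter by
    // hostname to check one particular node.
    for (DatanodeInfo dn : dfs.getDataNodeStats()) {
      System.out.println(dn.getHostName()
          + " decommissionInProgress=" + dn.isDecommissionInProgress()
          + " decommissioned=" + dn.isDecommissioned());
    }
    dfs.close();
  }
}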
Re: Hadoop 2.6 issue
Anand,

Try Oracle JDK instead of OpenJDK.

Regards,
Ramkumar Bashyam

On Wed, Apr 1, 2015 at 1:25 PM, Anand Murali <anand_vi...@yahoo.com> wrote:

Tried export in hadoop-env.sh. Does not work either.

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593 / 43526162 (voicemail)

On Wednesday, April 1, 2015 1:03 PM, Jianfeng (Jeff) Zhang <jzh...@hortonworks.com> wrote:

Try to export JAVA_HOME in hadoop-env.sh.

Best Regards,
Jeff Zhang

From: Anand Murali <anand_vi...@yahoo.com>
Reply-To: user@hadoop.apache.org, Anand Murali <anand_vi...@yahoo.com>
Date: Wednesday, April 1, 2015 at 2:28 PM
To: user@hadoop.apache.org
Subject: Hadoop 2.6 issue

Dear All:

I am unable to start Hadoop even after setting HADOOP_INSTALL, JAVA_HOME and JAVA_PATH. Please find the error message below.

anand_vihar@Latitude-E5540:~/hadoop-2.6.0$ start-dfs.sh --config /home/anand_vihar/hadoop-2.6.0/conf
Starting namenodes on [localhost]
localhost: Error: JAVA_HOME is not set and could not be found.
cat: /home/anand_vihar/hadoop-2.6.0/conf/slaves: No such file or directory
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Error: JAVA_HOME is not set and could not be found.
anand_vihar@Latitude-E5540:~/hadoop-2.6.0$ echo $JAVA_HOME
/usr/lib/jvm/java-1.7.0-openjdk-amd64
anand_vihar@Latitude-E5540:~/hadoop-2.6.0$ echo $HADOOP_INSTALL
/home/anand_vihar/hadoop-2.6.0
anand_vihar@Latitude-E5540:~/hadoop-2.6.0$ echo $PATH
:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/anand_vihar/hadoop-2.6.0/bin:/home/anand_vihar/hadoop-2.6.0/sbin:/usr/lib/jvm/java-1.7.0-openjdk-amd64:/usr/lib/jvm/java-1.7.0-openjdk-amd64

I have made no changes in hadoop-env.sh and have run it successfully.

core-site.xml:

<?xml version="1.0"?>
<!-- core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost/</value>
  </property>
</configuration>

hdfs-site.xml:

<?xml version="1.0"?>
<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

mapred-site.xml:

<?xml version="1.0"?>
<!-- mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
</configuration>

Shall be thankful if somebody can advise.

Regards,
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593 / 43526162 (voicemail)
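For concreteness, the change Jeff suggests is a one-line edit to hadoop-env.sh; a sketch, using the JDK path from the echo output above:

# In hadoop-env.sh (e.g. /home/anand_vihar/hadoop-2.6.0/etc/hadoop/hadoop-env.sh),
# set JAVA_HOME explicitly rather than relying on the login shell's environment;
# start-dfs.sh launches daemons over ssh, which do not inherit it.
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64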
Re: changing log verbosity
Hi Jonathan,

For audit logging you can look at the log4j.properties file. By default, the log4j.properties file has the log threshold set to WARN; by setting this level to INFO, audit logging can be turned on. The following snippet shows the log4j.properties configuration when HDFS and MapReduce audit logs are turned on.

#
# hdfs audit logging
#
hdfs.audit.logger=INFO,NullAppender
hdfs.audit.log.maxfilesize=256MB
hdfs.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}

#
# mapred audit logging
#
mapred.audit.logger=INFO,NullAppender
mapred.audit.log.maxfilesize=256MB
mapred.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
log4j.appender.MRAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.MRAUDIT.MaxFileSize=${mapred.audit.log.maxfilesize}
log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex}

Regards,
Ramkumar Bashyam

On Tue, Feb 24, 2015 at 2:36 PM, Jonathan Aquilina <jaquil...@eagleeyet.net> wrote:

How does one go about changing the log verbosity in Hadoop? What configuration file should I be looking at?

--
Regards,
Jonathan Aquilina
Founder, Eagle Eye T
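One caveat on the snippet above: as long as the logger values stay at INFO,NullAppender, audit events are still discarded. Routing the audit loggers to the rolling-file appenders defined in the same snippet is what actually produces hdfs-audit.log and mapred-audit.log, e.g.:

hdfs.audit.logger=INFO,RFAAUDIT
mapred.audit.logger=INFO,MRAUDIT

These can also be overridden per daemon at startup, e.g. by passing -Dhdfs.audit.logger=INFO,RFAAUDIT in HADOOP_NAMENODE_OPTS in hadoop-env.sh.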
Re: unsubscribe
Check http://hadoop.apache.org/mailing_lists.html#User

Regards,
Ramkumar Bashyam

On Sun, Feb 22, 2015 at 1:48 PM, Mainak Bandyopadhyay <mainak.bandyopadh...@gmail.com> wrote:

unsubscribe.
Re: unscubscribe
Check http://hadoop.apache.org/mailing_lists.html#User

Regards,
Ramkumar Bashyam

On Mon, Feb 23, 2015 at 12:29 AM, Umesh Reddy <ur2...@yahoo.com> wrote:

unsubscribe
Re: unsubscribe
Check http://hadoop.apache.org/mailing_lists.html#User

Regards,
Ramkumar Bashyam

On Wed, Jan 7, 2015 at 7:01 PM, Kiran Prasad Gorigay <kiranprasa...@imimobile.com> wrote:

unsubscribe
Re: unsubscribe me
Email user-unsubscr...@hadoop.apache.org to unsubscribe.

Regards,
Ramkumar Bashyam

On Wed, Dec 3, 2014 at 4:43 PM, chandu banavaram <chandu.banava...@gmail.com> wrote:

please unsubscribe me