RE: hbase use error

2014-09-16 Thread QiXiangming
Dear Yu, this is a snippet of the log, thank you for the diagnosis! It seems the upstream socket breaks, but I can't find a useful clue in the other datanodes' logs! -- 2014-09-17 11:02:30,058 INFO org.apac

RE: hbase use error

2014-09-16 Thread Mike Zarrin
Please unsubscribe me From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: Tuesday, September 16, 2014 7:34 PM To: common-u...@hadoop.apache.org Cc: common-...@hadoop.apache.org Subject: Re: hbase use error Which hadoop release are you using ? Can you pastebin more of the server logs ? bq.

Re: hbase use error

2014-09-16 Thread Ted Yu
Please pastebin the log instead of sending to me. See https://issues.apache.org/jira/browse/HBASE-11339 It is under development. On Tue, Sep 16, 2014 at 7:47 PM, QiXiangming wrote: > thank you for your reply so quickly. > we cdh 5. > store files directly in hbase , not path. > > i have read som

RE: hbase use error

2014-09-16 Thread QiXiangming
Thank you for your reply so quickly. We use CDH 5 and store files directly in HBase, not paths. I have read some HBase schema design docs, which recommend that for large files you store the path in HBase and put the real content in an HDFS SequenceFile. But I think 20M is not too big. I am downloading those logs now ,

Re: Programmatic way to recursively chmod a directory

2014-09-16 Thread Mohammad Islam
Hi Naga, Thanks a lot for your answer. I think it will definitely answer the part of my question. For example, do you know if mkdirs works with permission? Regards, Mohammad On Tuesday, September 16, 2014 6:40 PM, Naganarasimha G R (Naga) wrote: Hi Mohammad, If the user you are trying w

Re: hbase use error

2014-09-16 Thread Ted Yu
Which hadoop release are you using ? Can you pastebin more of the server logs ? bq. load file larger than 20M Do you store such file(s) directly on hdfs and put its path in hbase ? See HBASE-11339 HBase MOB On Tue, Sep 16, 2014 at 7:29 PM, QiXiangming wrote: > hello ,everyone > i use h
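
The design question Ted raises here (inline bytes in HBase vs. a path in HBase pointing at content in HDFS, the problem HBASE-11339 "HBase MOB" addresses) boils down to a size threshold. A minimal sketch of that decision, with a hypothetical 10 MB cutoff; `BlobPlacement` and `StoragePlan` are illustrative names, not Hadoop or HBase APIs:

```java
// Sketch: decide where a blob should live. The threshold is a
// hypothetical tuning knob, not a value from any Hadoop/HBase default.
class BlobPlacement {
    enum StoragePlan { INLINE_IN_HBASE, PATH_IN_HBASE_CONTENT_IN_HDFS }

    // Illustrative cutoff; tune against your cell-size and region-size limits.
    static final long THRESHOLD_BYTES = 10L * 1024 * 1024;

    static StoragePlan planFor(long blobSizeBytes) {
        return blobSizeBytes <= THRESHOLD_BYTES
                ? StoragePlan.INLINE_IN_HBASE
                : StoragePlan.PATH_IN_HBASE_CONTENT_IN_HDFS;
    }
}
```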

hbase use error

2014-09-16 Thread QiXiangming
Hello, everyone. I use HBase to store small pics or files, and met an exception raised from HDFS, as follows: slave2:50010:DataXceiver error processing WRITE_BLOCK operation src: /192.168.20.246:33162 dest: /192.168.20.247:50010 java.io.IOException: Premature EOF from inputStream

hadoop cluster crash problem

2014-09-16 Thread Li Li
Hi all, I know it's not a problem related to hadoop but the administrator cannot find any clues. I have a machine with 24 cores and 64GB memory with ubuntu 12.04 LTS. We use VirtualBox to create 4 virtual machines. Each VM has 10GB memory and 6 cores. I have set up a small hadoop 1.2.1 cluste

RE: Programmatic way to recursively chmod a directory

2014-09-16 Thread Naganarasimha G R (Naga)
Hi Mohammad, If the user you are trying with has rights on the folder, then you can try a umask of 000 (so 777 - 000 = 777 effective rights), or you can work out what effective umask setting you want to have, so that files created by this user have the desired rights by default. Regards, N
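
The arithmetic Naga describes is bitwise, not decimal subtraction: the effective mode is the base mode with the umask bits cleared. A tiny sketch of that rule:

```java
// Sketch of the umask rule: effective permission bits = base & ~umask.
// (For 777 - 022 the decimal subtraction happens to give the same 755,
// but the bitwise form is the actual definition.)
class UmaskDemo {
    static int effectiveMode(int base, int umask) {
        return base & ~umask & 0777;
    }

    public static void main(String[] args) {
        // umask 000 leaves 777 untouched; umask 022 yields 755.
        System.out.println(Integer.toOctalString(effectiveMode(0777, 0000))); // prints 777
        System.out.println(Integer.toOctalString(effectiveMode(0777, 0022))); // prints 755
    }
}
```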

RE: Is it a bug in CombineFileSplit?

2014-09-16 Thread Naganarasimha G R (Naga)
Hi Wang, Seems like it's a defect. Are you planning to raise a defect? If not, I can raise and fix it. Regards, Naga Huawei Technologies Co., Ltd. Mobile: +91 9980040283 Email: naganarasimh...@huawei.com Bantian,

Is it a bug in CombineFileSplit?

2014-09-16 Thread Benyi Wang
I use Spark's SerializableWritable to wrap CombineFileSplit so I can pass around the splits. But I ran into Serialization issues. In researching why my code fails, I found that this might be a bug in CombineFileSplit: CombineFileSplit doesn't serialize locations in write(DataOutput out) and deseri
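
The defect described above is that `CombineFileSplit.write(DataOutput)` and `readFields(DataInput)` skip the locations array, so a split that round-trips through serialization comes back without its locations. A self-contained sketch of the round-trip such a fix needs, using plain `java.io` rather than Hadoop types; `writeLocations`/`readLocations` are hypothetical helper names, not methods of the real class:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Sketch: length-prefixed serialization of a String[] of host names,
// the kind of logic CombineFileSplit's write()/readFields() would need
// to preserve locations across serialization.
class LocationsRoundTrip {
    static void writeLocations(DataOutput out, String[] locations) throws IOException {
        out.writeInt(locations.length);          // leading count
        for (String loc : locations) {
            out.writeUTF(loc);                   // one UTF entry per host
        }
    }

    static String[] readLocations(DataInput in) throws IOException {
        String[] locations = new String[in.readInt()];
        for (int i = 0; i < locations.length; i++) {
            locations[i] = in.readUTF();
        }
        return locations;
    }
}
```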

Re: Programmatic way to recursively chmod a directory

2014-09-16 Thread Jagat Singh
I did not try this. What is the error or return code when you use FileUtil https://hadoop.apache.org/docs/stable/api/org/apache/hadoop/fs/FileUtil.html#chmod(java.lang.String, java.lang.String, boolean) chmod public static int *chmod*

Programmatic way to recursively chmod a directory

2014-09-16 Thread Mohammad Islam
Hi, Is there a *programmatic* solution to do it recursively? I'm using Hadoop 2.3.0. I tried the following: 1. I tried FileSystem.mkdirs(path, permission), it created the directory but the permission is not set correctly. 2. I tried FileSystem.setPermission(path, permission), it changes only
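
Hadoop's `FileSystem` has no recursive `setPermission`, so the usual answer is to walk the tree yourself. Since a runnable HDFS example needs a live cluster, here is the same recursion sketched against the local filesystem with `java.nio.file`; against HDFS the shape is identical, recursing over `listStatus()` and calling `setPermission()` per path:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;
import java.util.stream.Stream;

// Sketch: depth-first walk applying the same POSIX permissions to every
// file and directory under root (local-filesystem analogue of recursing
// over FileSystem.listStatus() and calling setPermission()).
class RecursiveChmod {
    static void chmodRecursive(Path root, String perms) throws IOException {
        Set<PosixFilePermission> p = PosixFilePermissions.fromString(perms);
        try (Stream<Path> paths = Files.walk(root)) {
            for (Path path : (Iterable<Path>) paths::iterator) {
                Files.setPosixFilePermissions(path, p);
            }
        }
    }
}
```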

Re: DataNode not recognising available disk space

2014-09-16 Thread Yusaku Sako
Charles, If the newly added slave node has an extra disk that the other nodes don't have, then you will have to do the following on the Ambari Web UI so that "dfs.data.dir" is reflected to include this extra drive for that node: * Go to Services > HDFS > Configs * Click on "Manage Config Groups" l

Re: Cannot start DataNode after adding new volume

2014-09-16 Thread Charles Robertson
Hi Samir, That was it - I changed ownership of the /usr/lib/hadoop dir to hdfs:hadoop and tried again and the DataNode has started successfully. Thank you! Charles On 16 September 2014 13:47, Samir Ahmic wrote: > Hi Charles, > > From log it looks like that DataNode process don't have permissio

Re: Cannot start DataNode after adding new volume

2014-09-16 Thread Samir Ahmic
Hi Charles, From the log it looks like the DataNode process doesn't have permission to write to the "/usr/lib/hadoop" dir. Can you check the permissions on "/usr/lib/hadoop" for the user under which the DataNode process is started (probably the hdfs user, but not sure). Cheers Samir On Tue, Sep 16, 2014 at 2:40 PM,

Re: Cannot start DataNode after adding new volume

2014-09-16 Thread Charles Robertson
I've found this in the logs: 2014-09-16 11:00:31,287 INFO datanode.DataNode (SignalLogger.java:register(91)) - registered UNIX signal handlers for [TERM, HUP, INT] 2014-09-16 11:00:31,521 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/data should be specified as a URI in configu

DataNode not recognising available disk space

2014-09-16 Thread Charles Robertson
Hi all, I've added a new slave node to my cluster with a (single) larger disk (100 GB) than on the other nodes. However, Ambari is reporting a total of 8.6 GB disk space. lsblk correctly reports the disk size. Does anyone know why this might be? As I understand things you need to tell HDFS how muc

RE: How can I increase the speed balancing?

2014-09-16 Thread John Lilley
Srikanth, The cluster is idle while balancing, and it seems to move about 2MB/minute. There is no discernible CPU load. John From: Srikanth upputuri [mailto:srikanth.upput...@huawei.com] Sent: Thursday, September 04, 2014 12:57 AM To: user@hadoop.apache.org Subject: RE: How can I increase the sp
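
One knob worth checking for slow balancing is the per-datanode bandwidth cap, `dfs.datanode.balance.bandwidthPerSec` (bytes per second), whose historical default was quite low. A sketch of raising it in hdfs-site.xml; 10 MB/s is an illustrative value, not a recommendation:

```xml
<!-- hdfs-site.xml: raise the per-datanode cap on bandwidth used for
     block balancing. Value is in bytes/second; 10485760 = 10 MB/s. -->
<property>
  <name>dfs.datanode.balance.bandwidthPerSec</name>
  <value>10485760</value>
</property>
```

On 2.x releases the same limit can also be adjusted at runtime, without a datanode restart, via `hdfs dfsadmin -setBalancerBandwidth 10485760`.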

Re: Cannot start DataNode after adding new volume

2014-09-16 Thread Charles Robertson
Hi Susheel, Tried that - same result. DataNode still not starting. Thanks, Charles On 16 September 2014 11:49, Susheel Kumar Gadalay wrote: > The VERSION file has to be same across all the data nodes directories. > > So I suggested to copy it as it is using OS command and start data node. > >

Re: Cannot start DataNode after adding new volume

2014-09-16 Thread Susheel Kumar Gadalay
The VERSION file has to be same across all the data nodes directories. So I suggested to copy it as it is using OS command and start data node. On 9/16/14, Charles Robertson wrote: > Hi Susheel, > > Thanks for the reply. I'm not entirely sure what you mean. > > When I created the new directory o

Re: Cannot start DataNode after adding new volume

2014-09-16 Thread Charles Robertson
Hi Susheel, Thanks for the reply. I'm not entirely sure what you mean. When I created the new directory on the new volume I simply created an empty directory. I see from the existing data node directory that it has a sub-directory called current containing a file called VERSION. Your advice is t

Re: Cannot start DataNode after adding new volume

2014-09-16 Thread Susheel Kumar Gadalay
Is it something to do with the current/VERSION file in the data node directory? Just copy it from the existing directory and start. On 9/16/14, Charles Robertson wrote: > Hi all, > > I am running out of space on a data node, so added a new volume to the > host, mounted it and made sure the permissions were set
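
A sketch of the copy Susheel suggests, demonstrated with throwaway temp directories; in real use `OLD_DIR`/`NEW_DIR` would be your actual dfs.data.dir entries (e.g. /hadoop/hdfs/data and /data/hdfs), and the VERSION contents come from the existing data directory, not a printf:

```shell
# Sketch: seed the new data directory's current/ with the VERSION file
# from an existing data directory. Demo paths and VERSION contents are
# throwaway stand-ins.
OLD_DIR=$(mktemp -d)
NEW_DIR=$(mktemp -d)
mkdir -p "$OLD_DIR/current" "$NEW_DIR/current"
printf 'storageID=DS-demo\nclusterID=CID-demo\n' > "$OLD_DIR/current/VERSION"
cp "$OLD_DIR/current/VERSION" "$NEW_DIR/current/VERSION"
cmp -s "$OLD_DIR/current/VERSION" "$NEW_DIR/current/VERSION" && echo "VERSION copied"
```

Note that later in this thread the copy alone did not fix the startup failure; the root cause turned out to be ownership of the Hadoop directories.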

Cannot start DataNode after adding new volume

2014-09-16 Thread Charles Robertson
Hi all, I am running out of space on a data node, so added a new volume to the host, mounted it and made sure the permissions were set OK. Then I updated the 'DataNode Directories' property in Ambari to include the new path (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs'). Next I restarted t
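
For reference, the hdfs-site.xml equivalent of Ambari's "DataNode Directories" property is `dfs.datanode.data.dir`, a comma-separated list with no spaces around the comma:

```xml
<!-- hdfs-site.xml: one comma-separated list of local paths, each
     typically on its own mounted volume. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/hadoop/hdfs/data,/data/hdfs</value>
</property>
```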

About extra containers being allocated in distributed shell example.

2014-09-16 Thread Smita Deshpande
Hi, In YARN distributed shell example, I am setting up my request for containers to the RM using the following call (I am asking for 9 containers here) private ContainerRequest setupContainerAskForRM(Resource capability) {} But when actually RMCallbackHandler a

RE: issue about let a common user run application on YARN (with kerberose)

2014-09-16 Thread Liu, Yi A
You need to do authentication using “kinit” (in linux) for the user you want to use. Then you can run your application. For more information, please refer to: http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/SecureMode.html Regards, Yi Liu From: ch huang [mailto:justlo...
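
A sketch of the flow Yi describes, with an illustrative principal and jar name; this only works against a cluster that is actually configured for Kerberos:

```shell
# Sketch: authenticate as the submitting user, verify the ticket,
# then run the application. Principal and jar are placeholders.
kinit alice@EXAMPLE.COM                  # prompts for the Kerberos password
klist                                    # confirm a valid TGT is present
hadoop jar my-app.jar com.example.MyJob  # submission now carries the credentials
```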

Different commit-ids in github across releases

2014-09-16 Thread Rajat Jain
I notice that each hadoop release-tag in github (for example, the release-2.5.0 tag and the release-2.5.1 tag) has different commit-ids for the same commit (e.g. this and this
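
One common reason is that release lines are separate branches: a change cherry-picked onto another branch gets a new commit id even when the patch is byte-identical, and `git patch-id` makes the equivalence visible. A self-contained demonstration in a throwaway repository (branch names chosen to echo the example releases):

```shell
# Demo: cherry-picking the same patch onto a second branch yields a
# different commit SHA but an identical patch-id.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git commit -q --allow-empty -m "base"
git checkout -q -b branch-2.5.0
echo fix > fix.txt
git add fix.txt
git commit -q -m "the fix"
git checkout -q -b branch-2.5.1 HEAD~1          # branch off the base commit
git commit -q --allow-empty -m "start 2.5.1 line"
git cherry-pick branch-2.5.0 >/dev/null         # same patch, new SHA
echo "branch-2.5.0 tip: $(git rev-parse branch-2.5.0)"
echo "branch-2.5.1 tip: $(git rev-parse branch-2.5.1)"
# patch-id is identical even though the commit ids differ:
git show branch-2.5.0 | git patch-id --stable | cut -d' ' -f1
git show branch-2.5.1 | git patch-id --stable | cut -d' ' -f1
```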