Hi,
I am trying to run Apache Hadoop project on parallel filesystem like
lustre. I have 1 MDS, 2 OSS/OST and 1 Lustre Client.
My lustre client shows:
Code:
[root@lustreclient1 ~]# lfs df -h
UUID bytes Used Available Use% Mounted on
lustre-MDT_UUID 4.5G
I tried running the command below but got the error below.
I have not put the data into HDFS, since Lustre is what I am trying to use instead.
[code]
#bin/hadoop jar hadoop-examples-1.1.1.jar wordcount /user/hadoop/hadoop /user/hadoop-output
13/02/17 17:02:50 INFO util.NativeCodeLoader: Loaded the
Great!!! I tried removing the entry from mapred-site.xml and it seems to run well.
Here are the logs now:
[code]
[root@alpha hadoop]# bin/hadoop jar hadoop-examples-1.1.1.jar wordcount /user/hadoop/hadoop/ /user/hadoop/hadoop/output
13/02/17 17:14:37 INFO
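(For reference, the usual way to run Hadoop 1.x MapReduce straight against a
POSIX mount such as a Lustre client, instead of HDFS, is to point the default
filesystem at file:// in core-site.xml. A minimal sketch; the mount point and
directory names below are placeholders, not taken from the post:)
[code]
<!-- core-site.xml: use the local/POSIX filesystem instead of HDFS -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>file:///</value>
  </property>
</configuration>

<!-- mapred-site.xml: keep the shared job area on the Lustre mount
     (hypothetical path /mnt/lustre) so all nodes see the same files -->
<configuration>
  <property>
    <name>mapred.system.dir</name>
    <value>/mnt/lustre/hadoop/mapred/system</value>
  </property>
</configuration>
[/code]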
Hi,
I think you might be better served with your Hadoop setup by posting to
the Hadoop discussion list. Once you have it set up and working, if you
run into Lustre-related issues, please feel free to post those here.
Good luck!
-cf
On 02/17/2013 04:47 AM, linux freaker wrote:
Great !!! I
Hi,
Additional logging from the MDS and OSSs is required to really tell
what's going on. That said, you can try to verify that your OSS nodes can
successfully contact your MDS and MGS nodes; lctl ping will indicate
this. After that, if you find they are successfully contacting each other,
you
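(For example, something along these lines; the NID below is a placeholder,
use lctl list_nids on each server to see the real ones:)
[code]
# On each server, show the NIDs the local Lustre stack is using:
lctl list_nids

# From an OSS, check that the MGS/MDS NID answers (placeholder address):
lctl ping 192.168.1.10@tcp
[/code]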
Hi,
I am getting that occasionally, and when I try to remount another time,
it works. I am interested in finding out what's happening too.
Thanks.
On 01/07/12 07:19, Ashok nulguda wrote:
Dear All,
We have Lustre 1.8.4 installed with 2 MDS servers and 2 OSS servers
with 17 OSTs and 1 MDT
Dear All,
We have Lustre 1.8.4 installed with 2 MDS servers and 2 OSS servers with 17
OSTs and 1 MDT, with HA configured on both my MDS and OSS.
Problem:
Some of my OSTs are not mounting on my OSS servers.
When I try to mount them manually, the mount fails with the error: Transport
endpoint is not
How are your OSTs connected to your OSSs?
-cf
-Original message-
From: Ashok nulguda ashok0...@gmail.com
To: Lustre Discussion list Lustre-discuss@lists.lustre.org
Sent: Sat, Jan 7, 2012 00:19:59 MST
Subject: [Lustre-discuss] Need Help
Hi,
I just upgraded our servers from RHEL 5.4 to RHEL 5.5 and went from Lustre
1.8.3 to 1.8.5.
Now when I try to mount the OSTs I'm getting:
[root@aoss1 ~]# mount -t lustre /dev/disk/by-label/scratch2-OST0001
/mnt/lustre/local/scratch2-OST0001
mount.lustre: mount
Did you also install the correct e2fsprogs?
cliffw
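(A quick way to check on an RPM-based system such as RHEL; this only shows
what is installed, it is up to you to match it against the e2fsprogs version
required by your Lustre release:)
[code]
# List the installed e2fsprogs packages; Lustre servers need the
# Lustre-patched e2fsprogs that matches the installed Lustre version.
rpm -q e2fsprogs
rpm -qa | grep -i e2fs
[/code]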
On Fri, Jul 1, 2011 at 5:45 PM, Mervini, Joseph A jame...@sandia.govwrote:
Hi,
I just upgraded our servers from RHEL 5.4 to RHEL 5.5 and went from Lustre
1.8.3 to 1.8.5.
Now when I try to mount the OSTs I'm getting:
[root@aoss1 ~]# mount
Ashok nulguda wrote:
Dear All,
How do I forcefully shut down the Lustre service on the client, OST, and
MDS servers while I/O is in progress?
For the servers, you can just umount them. There will not be any file
system corruption, but files will not have the latest data -- the cache
on the clients
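(A rough sketch of the usual order, clients first, then MDS, then OSSs; the
mount points below are placeholders for your actual ones:)
[code]
# On each client; -f forces the unmount even with outstanding I/O:
umount -f /mnt/lustre

# Then on the MDS:
umount /mnt/mdt

# Then on each OSS, for each OST mount:
umount /mnt/ost0
[/code]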
Dear All,
How do I forcefully shut down the Lustre service on the client, OST, and MDS
servers while I/O is in progress?
Thanks and Regards
Ashok
--
Ashok Y. Nulguda
System Administrator
Tata Elxsi,
Pune
mobile:-+91-9689945767
Cheers Andreas. I had actually found that, but there doesn't seem to be
that much documentation about it. Or I didn't find it :) Plus it
appeared to find the users that were problematic whenever I tried it, so
I wondered if that is all there is, or if there's some other mechanism I
could test
Hi!
On Fri, Sep 24, 2010 at 09:18:15AM +0100, Tina Friedrich wrote:
Cheers Andreas. I had actually found that, but there doesn't seem to be
that much documentation about it. Or I didn't find it :) Plus it
appeared to find the users that were problematic whenever I tried it, so
I wondered
Actually, what I hit was that one of the LDAP servers private to the MDS
erroneously had a size limit set where the others are unlimited. They're
round-robined, which is why I was seeing an intermittent effect. So it's not a
client issue; the clients would not have used this server for their
lookups.
I think there is a bit of confusion here. The MDS is doing the initial
authorization for the file, using l_getgroups to access the group information
from LDAP (or whatever database is used).
Daniel's point was that after the client has gotten access to the file, it will
cache this file locally
In fact, the issues occurred when the MDS does the upcall (by default
processed by the user-space l_getgroups) for the user/group information
related to this RPC: one UID per upcall, and all the supplementary
groups (no more than sysconf(_SC_NGROUPS_MAX)) of that UID are
returned. The
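(One way to sanity-check that path, since the upcall goes through the same
libc/NSS lookups as everything else on the MDS; the username is a placeholder,
and the /proc location shown is the Lustre 1.8 one and can differ between
versions:)
[code]
# On the MDS, check user and supplementary-group resolution via NSS/LDAP:
getent passwd someuser
id -Gn someuser

# On Lustre 1.8 the configured group upcall (default l_getgroups) is
# typically visible here; the exact /proc path varies by version:
cat /proc/fs/lustre/mds/*/group_upcall
[/code]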
Hello List,
I'm after debugging hints...
I have a couple of users that intermittently get I/O errors when trying
to ls a directory (as in, within half an hour, works - doesn't work -
works...).
Users/groups are kept in LDAP; as far as I can see/check, the LDAP
information is consistent
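(One way to spot a per-server inconsistency like the size-limit problem
mentioned elsewhere in this thread is to run the same group query against each
LDAP server directly; hostnames, base DN, and username below are placeholders:)
[code]
# Query each LDAP server directly for the user's supplementary groups.
for srv in ldap1 ldap2 ldap3; do
  echo "== $srv =="
  ldapsearch -x -H ldap://$srv -b "ou=Group,dc=example,dc=com" \
    "(memberUid=someuser)" cn | grep '^cn:'
done
[/code]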
Hi,
thanks for the answer. I found it in the meantime; one of our LDAP
servers had a wrong size limit entry.
I had of course already looked at the logs - they didn't yield much in
terms of why, only what (as in, I could see they were permission errors,
but they of course don't really tell you
On 9/23/10 10:03 PM, Tina Friedrich wrote:
Hi,
thanks for the answer. I found it in the meantime; one of our ldap
servers had a wrong size limit entry.
The logs I had of course already looked at - they didn't yield much in
terms of why, only what (as in, I could see it was permission
On 2010-09-23, at 08:03, Tina Friedrich wrote:
Still - could someone point me to the bit in the documentation that best
describes how the MDS queries that sort of information (group/passwd
info, I mean)? Or how to best test that its mechanisms are working? For
example, in this case, I