[Lustre-discuss] Packaged kerberized VM client image Re: Migrating virtual machines over Lustre using Proxmox

2011-07-14 Thread Josephine Palencia

Hi Paul,

I wanted to express our interest in your project, as we are doing
something similar and related.

As part of the OSG ExTENCI project, we've set up a kerberized Lustre
filesystem that uses virtual (VM) Lustre clients at remote sites. With proper
network tuning and route analysis, we observe that it is possible to
saturate the full I/O bandwidth even for remote VM Lustre clients
and obtain good I/O rates.

So far, we've made available a kerberized Xen VM Lustre client image
(ftp://ftp.psc.edu/pub/jwan/Lustre-2.1/2.0.62/vm-images/) that
ExTENCI Tier 3 remote sites can download and simply boot up once they
have been given the proper kerberos principals.

We will also provide kerberized images for KVM (Proxmox)
and VMware.

Currently, we use Lustre 2.1 (2.0.62), with 2.0.63 for clients.
PSC runs the same setup locally in a separate kerberos realm.

We invite collaboration with other parties who might be
interested in trying the packaged kerberized Lustre VM clients
at their sites.

Regards,
josephine



On Sat, 9 Jul 2011, Paul Gray wrote:

> Like most of the readers on the list, my background with Lustre
> originates from cluster environments.  But as virtualization trends seem
> to be here to stay, the question of using Lustre to support large-scale
> distributed virtualization naturally arises.  Being able to leverage
> Lustre benefits in a VM cloud would seem to have quite a few advantages.
>
> As a test case, at UNI we extended the Proxmox Virtualization
> Environment to support *live* Virtual Machine migration across separate
> physical (bare-metal) hosts of the Proxmox virtualization cluster,
> supported by a distributed Lustre filesystem.
>
> If you aren't familiar with Proxmox and live migration support over
> Lustre, what we deployed at UNI is akin to being able to do VMWare's
> VMotion over Lustre (without the associated license costs).
>
> We put together two screencasts showing the prototype deployment and
> wanted to share the proof-of-concept results with the community:
>
> *)  A small demonstration of live migration with a small Debian VM whose
> root filesystem is supported over a distributed lustre implementation
> can be found here:
>   http://dragon.cs.uni.edu/flash/proxmoxlustre.html
>
> *)  A short screencast showing live migration over Lustre using the
> Proxmox GUI can be viewed here:
>http://dragon.cs.uni.edu/flash/gui-migration.html
>
> Our immediate interests are in the performance of large (in terms of
> quantity), dynamic, live migrations that would leverage our
> high-throughput IB-based Lustre subsystem from our clusters.  We'd
> welcome your comments, feedback, questions or requests for specific
> benchmarks to explore.
>
> ADVthanksANCE
> -- 
> Paul Gray -o)
> 314 East Gym, Dept. of Computer Science   /\\
> University of Northern Iowa  _\_V
>  Message void if penguin violated ...  Don't mess with the penguin
>  No one says, "Hey, I can't read that ASCII attachment ya sent me."
> ___
> Lustre-discuss mailing list
> Lustre-discuss@lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Using a secondary krb5.conf.lustre for kerberized lustre

2011-02-28 Thread Josephine Palencia

Hi,

We're trying to implement a second krb5.conf called krb5.conf.lustre to 
be used by the client to mount the lustre fs with kerberos turned on.

The primary krb5.conf of the client uses the local site's kerberos realm
while the secondary krb5.conf.lustre is modified to use the kerberos realm
in which the lustre servers reside.

On the client, before mounting the lustre fs, we set the environment variable

export KRB5_CONFIG=/etc/krb5.conf.lustre
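
For context, the full client-side sequence we have in mind is roughly the
following (hostnames, principal, and mount target are placeholders; whether
the Lustre GSS code actually honors KRB5_CONFIG is exactly what we're
unsure of):

   export KRB5_CONFIG=/etc/krb5.conf.lustre
   kinit <principal in the servers' realm>
   mount -t lustre <mgsnode>@tcp0:/<fsname> /mnt/lustre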

1. Will this achieve what I intend it to?
(Does kerberized lustre actually use this variable?)

2. I also noticed the related error below on the client.

Feb 28 14:52:32 bluemoon kernel: Lustre: 2172:0:(gss_svc_upcall.c:1447:gss_init_svc_upcall())
Init channel is not opened by lsvcgssd, following request might be dropped
until lsvcgssd is active

Feedback would be appreciated.

Suggestions as to where else I can/should post this are also welcome.


Thanks,
josephine


___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] Kerberos auth by lnet

2010-07-29 Thread Josephine Palencia

Correction:

> Problem:
> 
> I can make kerberos flavor changes on default, tcp0, and tcp21, but the
> MDS server shows the changes ONLY for default and tcp0 (not tcp21).


On Thu, 29 Jul 2010, Josephine Palencia wrote:

>
> Hello,
>
> I set up 3 lustre networks for the following:
>
> tcp0  for kerberized connections to lustre 2.0
> tcp21 for non-kerberized connections to lustre 2.0
> tcp18 for non-kerberized connections to lustre 1.8.3
>
> Below are the kerb specifications by network:
>
>  Secure RPC Config Rules:
>  BEER.srpc.flavor.tcp=krb5p
>  BEER.srpc.flavor.tcp21=null
>  BEER.srpc.flavor.default=krb5n
>
> The name of the lustre filesystem is /beer.
>
> youngs.beer.psc.edu on tcp0
> youngs-145.beer.psc.edu on tcp21
>
> [r...@youngs ~]# !lctl list_nids
> lctl list_nids list_nids
> 128.182.58@tcp
> 128.182.145@tcp21
>
> [r...@youngs ~]# df -h
> FilesystemSize  Used Avail Use% Mounted on
> /dev/mapper/VolGroup00-LogVol00
>   4.8G  2.0G  2.6G  43% /
> /dev/hda1  99M   24M   71M  25% /boot
> tmpfs 506M 0  506M   0% /dev/shm
> guinness.beer.psc@tcp0:/BEER
>99G   50G   47G  52% /beer
> guinness-145.beer.psc@tcp21:/BEER
>99G   50G   47G  52% /beer-145
>
> Problem:
> 
> I can make kerberos flavor changes on default, tcp0, and tcp21, and I
> get an indication on the MDS server showing the changes.
> I don't see such confirmation for tcp21, but I can certainly mount as
> shown above. I suspect that tcp21 is defaulting to krb5p and thus
> still requiring authentication for users. /beer and /beer-145 are NFS-exported
> to other systems residing in the same and in different kerberos realms.
> Root can access the filesystems with no problem, but users require
> authentication.
>
> My modprobe is of the form
> options lnet ip2nets="tcp0(eth0) 128.182.58.*; tcp21(eth1) 128.182.145.*"
> routes="tcp0 128.182.145@tcp21; tcp21 128.182.58@tcp0"
>
> Question:
> 
> What's the interoperability between lustre 2.0* and lustre 1.8.3?
> Is it officially incompatible/unsupported, but something we can try
> unofficially, or is it absolutely incompatible?
>
> I would appreciate any feedback/corrections.
>
> Thanks,
> josephine
>
> Reference:
> --
> Lustre version: 2.0.5 Alpha
> Lustre release: 1.9.280
> Kernel: 2.6.18_128.7.1
>
>
>
> ___
> Lustre-discuss mailing list
> Lustre-discuss@lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Kerberos auth by lnet

2010-07-29 Thread Josephine Palencia

Hello,

I set up 3 lustre networks for the following:

tcp0  for kerberized connections to lustre 2.0
tcp21 for non-kerberized connections to lustre 2.0
tcp18 for non-kerberized connections to lustre 1.8.3

Below are the kerb specifications by network:

  Secure RPC Config Rules:
  BEER.srpc.flavor.tcp=krb5p
  BEER.srpc.flavor.tcp21=null
  BEER.srpc.flavor.default=krb5n
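
For reference, these rules are set from the MGS with lctl conf_param,
roughly as follows (a sketch reconstructed from the values above):

   lctl conf_param BEER.srpc.flavor.default=krb5n
   lctl conf_param BEER.srpc.flavor.tcp=krb5p
   lctl conf_param BEER.srpc.flavor.tcp21=null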

The name of the lustre filesystem is /beer.

youngs.beer.psc.edu on tcp0
youngs-145.beer.psc.edu on tcp21

[r...@youngs ~]# !lctl list_nids
lctl list_nids list_nids
128.182.58@tcp
128.182.145@tcp21

[r...@youngs ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
   4.8G  2.0G  2.6G  43% /
/dev/hda1  99M   24M   71M  25% /boot
tmpfs 506M 0  506M   0% /dev/shm
guinness.beer.psc@tcp0:/BEER
99G   50G   47G  52% /beer
guinness-145.beer.psc@tcp21:/BEER
99G   50G   47G  52% /beer-145

Problem:

I can make kerberos flavor changes on default, tcp0, and tcp21, and I
get an indication on the MDS server showing the changes.
I don't see such confirmation for tcp21, but I can certainly mount as
shown above. I suspect that tcp21 is defaulting to krb5p and thus
still requiring authentication for users. /beer and /beer-145 are NFS-exported
to other systems residing in the same and in different kerberos realms.
Root can access the filesystems with no problem, but users require
authentication.

My modprobe configuration is of the form
options lnet ip2nets="tcp0(eth0) 128.182.58.*; tcp21(eth1) 128.182.145.*"
routes="tcp0 128.182.145@tcp21; tcp21 128.182.58@tcp0"

Question:

What's the interoperability between lustre 2.0* and lustre 1.8.3?
Is it officially incompatible/unsupported, but something we can try
unofficially, or is it absolutely incompatible?

I would appreciate any feedback/corrections.

Thanks,
josephine

Reference:
--
Lustre version: 2.0.5 Alpha
Lustre release: 1.9.280
Kernel: 2.6.18_128.7.1



___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] Lustre-1.9.260 mkfs.lustre errors

2009-09-16 Thread Josephine Palencia

Cool that fixed it :)
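
For anyone hitting the same "Invalid filesystem option set" error: the
check Jab suggested amounts to comparing the e2fsprogs on the working and
failing nodes, for example (a sketch; package names may vary by distro):

   rpm -q e2fsprogs
   mke2fs -V

The extents/uninit_groups features are only understood by the
Lustre-patched e2fsprogs, so an older or stock copy found first in $PATH
gives exactly that error.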

Thanks Jab.

josephin



On Wed, 16 Sep 2009, Jeffrey Bennett wrote:

> Do you have same version of e2fsprogs on both?
>
> jab
>
>> -Original Message-
>> From: lustre-discuss-boun...@lists.lustre.org
>> [mailto:lustre-discuss-boun...@lists.lustre.org] On Behalf Of
>> Josephine Palencia
>> Sent: Wednesday, September 16, 2009 6:43 AM
>> To: lustre-discuss@lists.lustre.org
>> Subject: [Lustre-discuss] Lustre-1.9.260 mkfs.lustre errors
>>
>>
>>
>> HEAD (Lustre-1.9.260) built on both archs (i386, x86_64).
>> mkfs.lustre, mount works on  the i386.
>>
>> But I get this error for the x86_64 on 2 different machines:
>> [r...@mds00w x86_64]# mkfs.lustre --verbose  --reformat
>> --fsname=jwan --mdt --mgsnode=mgs.jwan.teragrid@tcp0 /dev/sda8
>>
>> Permanent disk data:
>> Target: jwan-MDT
>> Index:  unassigned
>> Lustre FS:  jwan
>> Mount type: ldiskfs
>> Flags:  0x71
>>(MDT needs_index first_time update )
>> Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
>> Parameters: mgsnode=128.182.112@tcp
>>
>> device size = 11538MB
>> 2 6 18
>> formatting backing filesystem ldiskfs on /dev/sda8
>>  target name  jwan-MDT
>>  4k blocks 0
>>  options-J size=400 -i 4096 -I 512 -O
>> dir_index,extents,uninit_groups -F
>> mkfs_cmd = mke2fs -j -b 4096 -L jwan-MDT  -J size=400 -i
>> 4096 -I 512 -O dir_index,extents,uninit_groups -F /dev/sda8
>> cmd: mke2fs -j -b 4096 -L jwan-MDT  -J size=400 -i 4096
>> -I 512 -O dir_index,extents,uninit_groups -F /dev/sda8 mke2fs
>> 1.40.7.sun3 (28-Feb-2008) Invalid filesystem option set:
>> dir_index,extents,uninit_groups  <-?
>>
>> mkfs.lustre FATAL: Unable to build fs /dev/sda8 (256)
>>
>> mkfs.lustre FATAL: mkfs failed 256
>> [r...@mds00w x86_64]# clear
>>
>> Machine 1 to serve as mdt:
>> --
>> [r...@mds00w x86_64]# mkfs.lustre --verbose  --reformat
>> --fsname=jwan --mdt --mgsnode=mgs.jwan.teragrid@tcp0 /dev/sda8
>>
>> Permanent disk data:
>> Target: jwan-MDT
>> Index:  unassigned
>> Lustre FS:  jwan
>> Mount type: ldiskfs
>> Flags:  0x71
>>(MDT needs_index first_time update )
>> Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
>> Parameters: mgsnode=128.182.112@tcp
>>
>> device size = 11538MB
>> 2 6 18
>> formatting backing filesystem ldiskfs on /dev/sda8
>>  target name  jwan-MDT
>>  4k blocks 0
>>  options-J size=400 -i 4096 -I 512 -O
>> dir_index,extents,uninit_groups -F
>> mkfs_cmd = mke2fs -j -b 4096 -L jwan-MDT  -J size=400 -i
>> 4096 -I 512 -O dir_index,extents,uninit_groups -F /dev/sda8
>> cmd: mke2fs -j -b 4096 -L jwan-MDT  -J size=400 -i 4096
>> -I 512 -O dir_index,extents,uninit_groups -F /dev/sda8 mke2fs
>> 1.40.7.sun3 (28-Feb-2008)
>> Invalid filesystem option set:
>> dir_index,extents,uninit_groups   <-
>>
>> mkfs.lustre FATAL: Unable to build fs /dev/sda8 (256)
>>
>> mkfs.lustre FATAL: mkfs failed 256
>>
>> Machine 2 to serve as ost:
>> -
>> [r...@oss01w ~]# mkfs.lustre --reformat --fsname=jwan --ost
>> --mgsnode=mgs.jwan.teragrid@tcp0 /dev/sda8
>>
>> Permanent disk data:
>> Target: jwan-OST
>> Index:  unassigned
>> Lustre FS:  jwan
>> Mount type: ldiskfs
>> Flags:  0x72
>>(OST needs_index first_time update )
>> Persistent mount opts: errors=remount-ro,extents,mballoc
>> Parameters: mgsnode=128.182.112@tcp
>>
>>
>> mkfs.lustre FATAL: loop device requires a --device-size= param
>>
>> mkfs.lustre FATAL: Loop device setup for /dev/s
>> ---
>>
>> For now, I combined the mgs/mdt on the i386 machines and that
>> created the fs and mounted without problems.
>>
>> I'd appreciate feedback on the 2 other machines with
>> mkfs.lustre errors.
>>
>> Thanks,
>> josephine
>> ___
>> Lustre-discuss mailing list
>> Lustre-discuss@lists.lustre.org
>> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>>
>
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Lustre-1.9.260 mkfs.lustre errors

2009-09-16 Thread Josephine Palencia


HEAD (Lustre-1.9.260) was built on both archs (i386, x86_64).
mkfs.lustre and mount work on the i386.

But I get this error for the x86_64 on 2 different machines:
[r...@mds00w x86_64]# mkfs.lustre --verbose  --reformat  --fsname=jwan 
--mdt --mgsnode=mgs.jwan.teragrid@tcp0 /dev/sda8

Permanent disk data:
Target: jwan-MDT
Index:  unassigned
Lustre FS:  jwan
Mount type: ldiskfs
Flags:  0x71
   (MDT needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters: mgsnode=128.182.112@tcp

device size = 11538MB
2 6 18
formatting backing filesystem ldiskfs on /dev/sda8
target name  jwan-MDT
4k blocks 0
options-J size=400 -i 4096 -I 512 -O 
dir_index,extents,uninit_groups -F
mkfs_cmd = mke2fs -j -b 4096 -L jwan-MDT  -J size=400 -i 4096 -I 512 
-O dir_index,extents,uninit_groups -F /dev/sda8
cmd: mke2fs -j -b 4096 -L jwan-MDT  -J size=400 -i 4096 -I 512 -O 
dir_index,extents,uninit_groups -F /dev/sda8
mke2fs 1.40.7.sun3 (28-Feb-2008)
Invalid filesystem option set: dir_index,extents,uninit_groups  <-?

mkfs.lustre FATAL: Unable to build fs /dev/sda8 (256)

mkfs.lustre FATAL: mkfs failed 256
[r...@mds00w x86_64]# clear

Machine 1 to serve as mdt:
--
[r...@mds00w x86_64]# mkfs.lustre --verbose  --reformat  --fsname=jwan 
--mdt --mgsnode=mgs.jwan.teragrid@tcp0 /dev/sda8

Permanent disk data:
Target: jwan-MDT
Index:  unassigned
Lustre FS:  jwan
Mount type: ldiskfs
Flags:  0x71
   (MDT needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters: mgsnode=128.182.112@tcp

device size = 11538MB
2 6 18
formatting backing filesystem ldiskfs on /dev/sda8
target name  jwan-MDT
4k blocks 0
options-J size=400 -i 4096 -I 512 -O 
dir_index,extents,uninit_groups -F
mkfs_cmd = mke2fs -j -b 4096 -L jwan-MDT  -J size=400 -i 4096 -I 512 
-O dir_index,extents,uninit_groups -F /dev/sda8
cmd: mke2fs -j -b 4096 -L jwan-MDT  -J size=400 -i 4096 -I 512 -O 
dir_index,extents,uninit_groups -F /dev/sda8
mke2fs 1.40.7.sun3 (28-Feb-2008)
Invalid filesystem option set: dir_index,extents,uninit_groups   <-

mkfs.lustre FATAL: Unable to build fs /dev/sda8 (256)

mkfs.lustre FATAL: mkfs failed 256

Machine 2 to serve as ost:
-
[r...@oss01w ~]# mkfs.lustre --reformat --fsname=jwan --ost 
--mgsnode=mgs.jwan.teragrid@tcp0 /dev/sda8

Permanent disk data:
Target: jwan-OST
Index:  unassigned
Lustre FS:  jwan
Mount type: ldiskfs
Flags:  0x72
   (OST needs_index first_time update )
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=128.182.112@tcp


mkfs.lustre FATAL: loop device requires a --device-size= param

mkfs.lustre FATAL: Loop device setup for /dev/s
---

For now, I combined the mgs/mdt on the i386 machines; that created the
fs and mounted it without problems.

I'd appreciate feedback on the 2 other machines with the mkfs.lustre errors.

Thanks,
josephine
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] HEAD/1.9.210: libcfs.a, 64bit kernel quota support (fwd)

2009-09-08 Thread Josephine Palencia

Filed as bugs 20694, 20695

Thanks,
j



-- Forwarded message --
Date: Tue, 08 Sep 2009 09:32:21 +0200
From: Andreas Dilger 
To: Josephine Palencia 
Subject: Re: [Lustre-discuss] HEAD/1.9.210:  libcfs.a,
 64bit kernel quota support

On Sep 07, 2009  14:57 -0400, Josephine Palencia wrote:
> To: lustre-disc...@clusterfs.com

Josephine,
please use lustre-disc...@lustre.org, since the @clusterfs.com addresses
will be disappearing soon.

> Lustre version: HEAD
> 
> 1. I get the error below re missing libcfs.a while building lustre HEAD.
> Is libcfs.a supposed to be in HEAD?
> It is present in 1.9.210 but not in HEAD.
> 
> make[5]: Entering directory `/home/palencia/lustre/lustre/utils/gss'
> make[5]: *** No rule to make target `../../../libcfs/libcfs/libcfs.a',
> needed by `lsvcgssd'.  Stop
> 
> Lustre version: HEAD and 1.9.210

Please file this as a bug against HEAD, so that it doesn't slip through
the cracks.  You should CC Eric Mei.

> 2. Lustre configure gives me this error
> 
> checking if percpu_counter_inc takes the 2nd argument... yes
> checking if kernel has 64-bit quota limits support... checking for
> /extra/linux-2.6.18-128.1.14/include/linux/lustre_version.h... (cached) yes
> configure: error: You have got no 64-bit kernel quota support.
> 
> If I use --disable-quota as a test in the lustre configure, I get a
> different set of errors during make.

This should be filed as a separate bug.

> CC [M]  /extra/lustre-1.9.210/lustre/mdt/mdt_lproc.o
> /extra/lustre-1.9.210/lustre/mdt/mdt_lproc.c: In function mdt_quota_off:
> /extra/lustre-1.9.210/lustre/mdt/mdt_lproc.c:661: error: const struct 
> md_device_operations has no member named mdo_quota
> /extra/lustre-1.9.210/lustre/mdt/mdt_lproc.c:666: error: dereferencing 
> pointer to incomplete type
> make[6]: *** [/extra/lustre-1.9.210/lustre/mdt/mdt_lproc.o] Error 1
> make[5]: *** [/extra/lustre-1.9.210/lustre/mdt] Error 2
> 
> 
> Ps advice,
> 
> Thanks,
> josephin
> 
> ___
> Lustre-discuss mailing list
> Lustre-discuss@lists.lustre.org
> http://lists.lustre.org/mailman/listinfo/lustre-discuss

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] HEAD/1.9.210: libcfs.a, 64bit kernel quota support

2009-09-07 Thread Josephine Palencia

Lustre version: HEAD

1. I get the error below re missing libcfs.a while building lustre HEAD.
Is libcfs.a supposed to be in HEAD?
It is present in 1.9.210 but not in HEAD.

make[5]: Entering directory `/home/palencia/lustre/lustre/utils/gss'
make[5]: *** No rule to make target `../../../libcfs/libcfs/libcfs.a', 
needed by `lsvcgssd'.  Stop

Lustre version: HEAD and 1.9.210

2. Lustre configure gives me this error

checking if percpu_counter_inc takes the 2nd argument... yes
checking if kernel has 64-bit quota limits support... checking for 
/extra/linux-2.6.18-128.1.14/include/linux/lustre_version.h... (cached) yes
configure: error: You have got no 64-bit kernel quota support.

If I use --disable-quota as a test in the lustre configure, I get a
different set of errors during make.


CC [M]  /extra/lustre-1.9.210/lustre/mdt/mdt_lproc.o
/extra/lustre-1.9.210/lustre/mdt/mdt_lproc.c: In function mdt_quota_off:
/extra/lustre-1.9.210/lustre/mdt/mdt_lproc.c:661: error: const struct 
md_device_operations has no member named mdo_quota
/extra/lustre-1.9.210/lustre/mdt/mdt_lproc.c:666: error: dereferencing pointer 
to incomplete type
make[6]: *** [/extra/lustre-1.9.210/lustre/mdt/mdt_lproc.o] Error 1
make[5]: *** [/extra/lustre-1.9.210/lustre/mdt] Error 2


Please advise.

Thanks,
josephin

___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Specifying lustre kerb flavor for client (single and cross-realm)

2009-07-30 Thread Josephine Palencia

Hi,

Is there a way to specify the lustre kerberos flavor for specific target clients?

Ex.
cli2mdt,cli2ost = null  for local client A in a different kerberos realm (cross-realm)
cli2mdt,cli2ost = krb5p for remote clients B, C in the same kerberos realm
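
To make the question concrete, the kind of rules I have in mind would look
roughly like this (a sketch using the srpc.flavor rule syntax; whether a
rule can be scoped to an individual client rather than a whole LNET
network is exactly what I'm asking):

   lctl conf_param <fsname>.srpc.flavor.tcp0.cli2mdt=krb5p
   lctl conf_param <fsname>.srpc.flavor.tcp0.cli2ost=krb5p
   lctl conf_param <fsname>.srpc.flavor.tcp21=null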

Thanks,
josephine
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Lustre kerberos credentials not looking at $KRB5CCNAME

2009-07-23 Thread Josephine Palencia



Lustre kerberos does not look at $KRB5CCNAME.  It assumes that your kerberos
ccache is /tmp/krb5cc_N.  This problem affects systems which use kerberos
to authenticate logins.  The system complains with a log error saying that it
cannot find a ccache, and the user cannot access the lustre filesystem
(permission denied with df or any attempt at IO).

Workarounds:
- manually run "unset KRB5CCNAME" then "kinit"
- or parse $KRB5CCNAME from FILE:/tmp/XYZ and run "cp /tmp/XYZ
  /tmp/krb5cc_N" in login scripts
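
A rough login-script sketch of that second workaround (illustrative only;
it just copies the session ccache to the path Lustre expects):

   # copy the kerberos ccache to where the Lustre upcall looks for it
   if [ -n "$KRB5CCNAME" ]; then
       src=${KRB5CCNAME#FILE:}
       cp "$src" /tmp/krb5cc_$(id -u)
   fi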

The ideal solution is for lustre kerberos to have something similar to
"afslog", which would look at $KRB5CCNAME and put the lustre credentials
somewhere the system can find them.

Kudos to Kevin Sullivan (PSC) for helping to identify the problem and
providing the workaround.

This has been filed in bugzilla.lustre under #20253


-josephine
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Hastening lustrefs recovery

2009-07-16 Thread Josephine Palencia

OS: Centos5.2 x86_64
Lustre: 1.6.5.1
OpenIB: 1.3.1


What determines the speed at which a lustre fs will recover (e.g. after a
crash)?  Can (or should) one hasten the recovery by tweaking some parameters?

For 4 OSTs, each with 7TB, and ~40 connected clients, the recovery time
is 48 min.  Is that reasonable, or is that too long?
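
For reference, recovery progress can be watched via the per-target
recovery_status files (a sketch; the exact /proc paths are assumed from
the 1.6 layout):

   cat /proc/fs/lustre/obdfilter/*/recovery_status   # on each OSS
   cat /proc/fs/lustre/mds/*/recovery_status         # on the MDS

which should report the recovery window, connected/completed/evicted
clients, and time remaining.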


Thanks,
josephine



___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] [Lustre-devel] Kerb cross-site-forcing back clients to null/plain (fwd)

2009-07-02 Thread Josephine Palencia
-- Forwarded message --
Date: Thu, 2 Jul 2009 13:21:34 -0400 (EDT)
From: Josephine Palencia 
To: lustre-de...@lists.lustre.org
Subject: [Lustre-devel] Kerb cross-site-forcing back clients to null/plain


OS:  Centos 5.2 x86_64
Kernel:  2.6.18-92.1.6
Lustre:  1.9.50


Is there (or can there be) a mechanism by which kerberos auth on the
clients, both from the local and from a different kerberos realm, can be
forced back to null/plain from krb5n/a/i/p if the remote site's kerberos
is not yet ready (properly configured)?

I'd rather the filesystem continue to be mounted on the client, indicating
that it auto-reverted to null/plain, instead of just hanging.

Thanks,
josephin
___
Lustre-devel mailing list
lustre-de...@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-devel
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Lustre-1.9.210: mkfs.ldiskfs?

2009-07-01 Thread Josephine Palencia

OS: Centos 5.3, x86_64
Kernel: 2.6.18-128.1.6

[r...@attractor ~]# cat /proc/fs/lustre/version
lustre: 1.9.210
..

mkfs.lustre is looking for mkfs.ldiskfs?
I did a find on a previous working lustre-1.9 install and didn't find one.
Please advise.

[r...@attractor ~]# mkfs.lustre --fsname=jwan0 --mdt --mgs /dev/hdc1

Permanent disk data:
Target: jwan0-MDT
Index:  unassigned
Lustre FS:  jwan0
Mount type: ldiskfs
Flags:  0x75
   (MDT MGS needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters:

checking for existing Lustre data: not found
device size = 19085MB
2 6 18
formatting backing filesystem ldiskfs on /dev/hdc1
 target name  jwan0-MDT
 4k blocks 0
 options-J size=400 -i 4096 -I 512 -q -O 
dir_index,uninit_groups -F
mkfs_cmd = mkfs.ldiskfs -j -b 4096 -L jwan0-MDT  -J size=400 -i 4096 
-I 512 -q -O dir_index,uninit_groups -F /dev/hdc1
sh: mkfs.ldiskfs: command not found

mkfs.lustre FATAL: Unable to build fs /dev/hdc1 (32512)

mkfs.lustre FATAL: mkfs failed 32512
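
A note for anyone searching the archive later: as far as I understand,
mkfs.lustre only shells out to mkfs.ldiskfs when Lustre was configured
with --with-ldiskfsprogs, so the likely fixes are to install the
ldiskfs-aware e2fsprogs package that provides mkfs.ldiskfs, or to rebuild
without --with-ldiskfsprogs so plain mke2fs is used.  An untested stopgap,
assuming the installed mke2fs already accepts the ldiskfs feature options:

   ln -s $(command -v mke2fs) /sbin/mkfs.ldiskfs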


Thanks,
josephin
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Lustre 2.0* CMD doc/info?

2009-06-05 Thread Josephine Palencia

Hi,

I am looking for more information on setting up Clustered Metadata
(CMD) for lustre-2.0.*, aside from what's on the wiki.

Could you please direct me to the proper link/contact?
If there's no documentation, I could help with that.

Thank you,
josephin
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] builds: lustre-2.0.2alpha (1.9.181)+ HEAD (1.9.190) with kerberos enabled

2009-05-22 Thread Josephine Palencia


A.  Centos5.3, Lustre-2.0.2alpha (1.9.181) with kerberos enabled

Re:  lustre-module complaining of wrong kernel

[r...@mds02w x86_64]# rpm -ivh 
lustre-modules-1.9.181-2.6.18_128.1.6_lustre_1.9.181_200905190414.x86_64.rpm
error: Failed dependencies:
 kernel = 2.6.18-128.1.6-lustre-1.9.181 is needed by 
lustre-modules-1.9.181-2.6.18_128.1.6_lustre_1.9.181_200905190414.x86_64

[r...@mds02w x86_64]# uname -a
Linux mds02w.psc.teragrid.org 2.6.18-128.1.6-lustre-1.9.181 #3 SMP Tue May 19 
03:15:43 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux

The rpms were built completely from source.
[r...@mds02w x86_64]# pwd
/usr/src/redhat/RPMS/x86_64
[r...@mds02w x86_64]# ls
kernel-2.6.18128.1.6lustre1.9.181-2.x86_64.rpm
lustre-1.9.181-2.6.18_128.1.6_lustre_1.9.181_200905190414.x86_64.rpm
lustre-ldiskfs-4.0.1-2.6.18_128.1.6_lustre_1.9.181_200905190415.x86_64.rpm
lustre-ldiskfs-4.0.1-2.6.18_128.1.6_lustre_1.9.181_200905190427.x86_64.rpm
lustre-modules-1.9.181-2.6.18_128.1.6_lustre_1.9.181_200905190414.x86_64.rpm
lustre-source-1.9.181-2.6.18_128.1.6_lustre_1.9.181_200905190414.x86_64.rpm
lustre-tests-1.9.181-2.6.18_128.1.6_lustre_1.9.181_200905190414.x86_64.rpm

The configure script identified the linux source path, and I booted into
the right kernel before the lustre build, so I'm not sure why I get the
dependency error about the wrong kernel when I try to install the
lustre-modules rpm.

[r...@mds02w lustre-1.9.181]# pwd
/extra/lustre-1.9.181

$ ./configure --with-linux=/extra/linux-2.6.18-128.1.6 
--enable-dependency-tracking --enable-posix-osd --enable-panic_dumplog 
--enable-gss --enable-health_write --enable-lru-resize --enable-liblustre-tests
--enable-mindf --enable-quota --enable-lu_ref --with-ldiskfsprogs

I have done this many times with the older 1.9.50 versions with no problems.

If I force --nodeps on the lustre-modules install, the system crashes as expected.

If I proceed with make install instead of make rpms, the resulting
system crashes when I attempt to load the lustre module.

B. Centos5.3, HEAD (1.9.190) with kerberos

The lustre-modules rpm installs.
However, the system crashes when I attempt to load the lustre module.
Crash message below:

[r...@mds02w ~]# Assertion failure in journal_start() at 
fs/jbd/transaction.c:283: "handle->h_transaction->t_journal == journal"
--- [cut here ] - [please bite here ] -
Kernel BUG at fs/jbd/transaction.c:283
invalid opcode:  [1] SMP
last sysfs file: /class/misc/obd_psdev/dev
CPU 0
Modules linked in: lustre(U) lov(U) osc(U) lquota(U) mdc(U) fid(U) fld(U) 
ksocklnd(U) ptlrpc(U) obdclass(U) lnet(U) lvfs(U) libcfs(U) ipt
_MASQUERADE(U) iptable_nat(U) ip_nat(U) xt_state(U) ip_conntrack(U) 
nfnetlink(U) ipt_REJECT(U) xt_tcpudp(U) iptable_filter(U) ip_tables(U
) x_tables(U) bridge(U) autofs4(U) hidp(U) l2cap(U) bluetooth(U) sunrpc(U) 
ipv6(U) xfrm_nalgo(U) crypto_api(U) dm_mirror(U) dm_multipath(
U) scsi_dh(U) video(U) backlight(U) sbs(U) i2c_ec(U) button(U) battery(U) 
asus_acpi(U) acpi_memhotplug(U) ac(U) parport_pc(U) lp(U) parpo
rt(U) sg(U) e100(U) tg3(U) mii(U) i2c_amd756(U) i2c_amd8111(U) k8_edac(U) 
k8temp(U) i2c_core(U) amd_rng(U) libphy(U) ide_cd(U) pcspkr(U)
cdrom(U) hwmon(U) serio_raw(U) edac_mc(U) dm_raid45(U) dm_message(U) 
dm_region_hash(U) dm_log(U) dm_mod(U) dm_mem_cache(U) sata_sil(U) li
bata(U) shpchp(U) 3w_(U) sd_mod(U) scsi_mod(U) ext3(U) jbd(U) uhci_hcd(U) 
ohci_hcd(U) ehci_hcd(U)
Pid: 2790, comm: modprobe Tainted: G  2.6.18-prep #3
RIP: 0010:[]  [] 
:jbd:journal_start+0x62/0x107
RSP: 0018:810077701a28  EFLAGS: 00010282
RAX: 0073 RBX: 810027460ad8 RCX: 802f7aa8
RDX: 802f7aa8 RSI:  RDI: 802f7aa0
RBP: 810001aa7400 R08: 802f7aa8 R09: 0046
R10: 803d9520 R11:  R12: 000a
R13: 810040c2fa60 R14: 07c0 R15: 0780
FS:  2abf4c117240() GS:803ac000() knlGS:
CS:  0010 DS:  ES:  CR0: 8005003b
CR2: 02b04788 CR3: 00201000 CR4: 06e0
Process modprobe (pid: 2790, threadinfo 81007770, task 
81007ef3b7e0)
Stack:  0040 81003374ea68 0101e780 8805223d
  000a   0040
  810040c2fa60 0101e780 0040 810077701e98
Call Trace:
  [] :ext3:ext3_prepare_write+0x42/0x17b
  [] generic_file_buffered_write+0x26c/0x6d3
  [] current_fs_time+0x3b/0x40
  [] zone_statistics+0x3e/0x6d
  [] __generic_file_aio_write_nolock+0x36c/0x3b8
  [] generic_file_aio_write+0x65/0xc1
  [] :ext3:ext3_file_write+0x16/0x91
  [] do_sync_write+0xc7/0x104
  [] release_pages+0x14e/0x15b
  [] autoremove_wake_function+0x0/0x2e
  [] do_gettimeofday+0x40/0x8f
  [] getnstimeofday+0x10/0x28
  [] do_acct_process+0x517/0x54e
  [] dput+0x2c/0x113
  [] acct_process+0x45/0x50
  [] do_exit+0x2bb/0x91f
  [] cpuset_exit+0x0/0x6c
  [] tracesys+0xd5/0xe0


Code: 0f 0b 68 59 99 03 88 c2 1b 01 ff 43 0c e9 

[Lustre-discuss] lustre-1.8 interop (1.6/2 not 1.4)

2009-05-21 Thread Josephine Palencia


Just confirming/verifying: lustre-1.8 systems can't mount the older 1.4*
filesystems (only 1.6*/2.0*)?

Thanks,
josephin


___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] IB + lustre 1.9.*

2009-02-27 Thread Josephine Palencia

Hello,

I'd like to build IB support for Lustre-1.9.*/HEAD.
Is that advisable? Any advice?

(Aside from the usual 'it's not supported...')

Thanks,
josephin

___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] ib-bonded QDR with lustre

2009-02-27 Thread Josephine Palencia

Hi,

Wondering if anyone has a working setup with two ib-bonded QDR IB interfaces +
lustre 1.6* and whether there was a performance improvement?
(A patch was needed to get it to work on 1.6.5.1, last I checked.)

Thanks,
josephin

___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Works: Re: mkfs.lustre (can't allocate memory)

2008-07-30 Thread Josephine Palencia

Andreas,

This work-around worked.
Thank you very much.

Josephin



On Mon, 28 Jul 2008, Andreas Dilger wrote:

> On Jul 24, 2008  14:51 -0400, Josephine Palencia wrote:
>> [EMAIL PROTECTED] ~]# mkfs.lustre --mgs /dev/cciss/c0d0p6
>> LDISKFS-fs: Unable to create cciss/c0d0p6
>
> This appears to be an internal problem with creating the directory
> "/proc/fs/ldiskfs/cciss/c0d0p6" because the "cciss" part of the
> tree does not yet exist and the "c0d0p6" subdirectory create fails.
>
> As a temporary workaround, can you please try modifying the ldiskfs code:
>
>sbi->s_dev_proc = proc_mkdir(sb->s_id, proc_root_ext3);
>if (sbi->s_dev_proc == NULL) {
>printk(KERN_ERR "EXT3-fs: Unable to create %s\n", sb->s_id);
> + /* Don't fail mounting if the /proc file can't be created
>sb->s_fs_info = NULL;
>kfree(sbi);
>return -ENOMEM;
> + */
>}
>
>
> It looks (though I'm not 100% sure) that the rest of the mballoc code
> will deal with s_dev_proc == NULL properly.
>
> Cheers, Andreas
> --
> Andreas Dilger
> Sr. Staff Engineer, Lustre Group
> Sun Microsystems of Canada, Inc.
>
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Adding OST's not increasing storage?

2008-07-27 Thread Josephine Palencia


Test Setup: (combining mgs/mdt first while the ENOMEM error is being fixed for the mgs hardware)
--
mds00w.psc.teragrid.org: combined mdt/mgs and client
oss00w.psc.teragrid.org: ost1
oss01w.psc.teragrid.org: ost2
operon22.psc.edu   : ost3 and client


Combined mgs/mdt on mds00w:
-
[EMAIL PROTECTED] ~]# df -h
/dev/sda8 9.9G  489M  8.9G   6% /mnt/test/mdt


Added 1 ost from oss00w with 1.4TB
--
[EMAIL PROTECTED] ~]# df -h
/dev/sda7 1.3T  1.1G  1.3T   1% /mnt/test/ost0

Shows 1.3T storage correctly on mds00w as client.
[EMAIL PROTECTED] ~]# df -h
[EMAIL PROTECTED]:/testfs   1.3T  1.1G  1.3T   1% /mnt/testfs

Added 2nd ost from oss01w  with 1.4TB:
--
[EMAIL PROTECTED] ~]# df -h
/dev/sda7 1.3T  1.1G  1.3T   1% /mnt/test/ost1

mds00w shows both osts active
---
[EMAIL PROTECTED] ~]# cat 
/proc/fs/lustre/lov/testfs-clilov-81007c659000/target_obd
0: testfs-OST_UUID ACTIVE
1: testfs-OST0001_UUID ACTIVE
[EMAIL PROTECTED] ~]# cat /proc/fs/lustre/lov/testfs-MDT-mdtlov/target_obd
0: testfs-OST_UUID ACTIVE
1: testfs-OST0001_UUID ACTIVE

but df from mds00w still shows only 1.3T of storage available (is the other
1.3T being counted as used?). It should be around 2.6T.
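
As a cross-check, the per-OST view from a client can be had with lfs df
(a sketch, assuming the mount point above):

   lfs df -h /mnt/testfs

which should list each OST's size/used/available individually rather than
the aggregate.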


Adding a third ost with only 150GB of storage gives 142GB showing as available
on both clients (with ~2.6T reported as used). The total should be around 2.7TB.
-

[EMAIL PROTECTED] ~]# df -h
/dev/hdb1 151G  1.1G  142G   1% /mnt/test/ost3
[EMAIL PROTECTED]:/testfs   2.8T  2.6T  142G  95% /mnt/testfs
[EMAIL PROTECTED] ~]# df -h
/dev/sda8 9.9G  489M  8.9G   6% /mnt/test/mdt
[EMAIL PROTECTED]:/testfs   2.8T  2.6T  142G  95% /mnt/testfs


[EMAIL PROTECTED] ~]# cat /proc/fs/lustre/devices
   0 UP mgs MGS MGS 11
   1 UP mgc [EMAIL PROTECTED] e6af805d-1e32-b002-d315-54fb78e7e558 5
   2 UP lov testfs-MDT-mdtlov testfs-MDT-mdtlov_UUID 4
   3 UP mdt testfs-MDT testfs-MDT_UUID 7
   4 UP mds mdd_obd-testfs-MDT-0 mdd_obd_uuid-testfs-MDT-0 3
   5 UP osc testfs-OST-osc-MDT testfs-MDT-mdtlov_UUID 5
   6 UP lov testfs-clilov-8101278aec00 f327440b-f5e9-c9cc-66fc-7a6001402368 
4
   7 UP lmv testfs-clilmv-8101278aec00 f327440b-f5e9-c9cc-66fc-7a6001402368 
4
   8 UP mdc testfs-MDT-mdc-8101278aec00 
f327440b-f5e9-c9cc-66fc-7a6001402368 5
   9 UP osc testfs-OST-osc-8101278aec00 
f327440b-f5e9-c9cc-66fc-7a6001402368 5
  10 UP osc testfs-OST0001-osc-8101278aec00 
f327440b-f5e9-c9cc-66fc-7a6001402368 5
  11 UP osc testfs-OST0001-osc-MDT testfs-MDT-mdtlov_UUID 5
  12 UP osc testfs-OST0002-osc-8101278aec00 
f327440b-f5e9-c9cc-66fc-7a6001402368 5
  13 UP osc testfs-OST0002-osc-MDT testfs-MDT-mdtlov_UUID 5

Show all OST's active.

[EMAIL PROTECTED] ~]# cat 
/proc/fs/lustre/lov/testfs-clilov-8101278aec00/target_obd
0: testfs-OST_UUID ACTIVE
1: testfs-OST0001_UUID ACTIVE
2: testfs-OST0002_UUID ACTIVE
[EMAIL PROTECTED] ~]# cat /proc/fs/lustre/lov/testfs-MDT-mdtlov/target_obd
0: testfs-OST_UUID ACTIVE
1: testfs-OST0001_UUID ACTIVE
2: testfs-OST0002_UUID ACTIVE

I'm also encountering issues deactivating/activating OSTs, but that's in
another email.

Thanks,
josephin


PS.
Version:  lustre: 1.9.50
Kernel-2.6.18-92.1.6-lustre-1.9.50 #2 SMP x86_64


___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] mkfs.lustre (can't allocate memory)

2008-07-27 Thread Josephine Palencia

I'm redoing a new filesystem and getting the error below.
Please advise.  Thanks, josephin

[EMAIL PROTECTED] ~]# mkfs.lustre --mgs /dev/cciss/c0d0p6

Permanent disk data:
Target: MGS
Index:  unassigned
Lustre FS:  lustre
Mount type: ldiskfs
Flags:  0x74
   (MGS needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters:

checking for existing Lustre data: not found
device size = 24121MB
2 6 18
formatting backing filesystem ldiskfs on /dev/cciss/c0d0p6
 target name  MGS
 4k blocks 0
 options-J size=964 -q -O dir_index,uninit_groups -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L MGS  -J size=964 -q -O 
dir_index,uninit_groups -F /dev/cciss/c0d0p6
mkfs.lustre: Unable to mount /dev/cciss/c0d0p6: Cannot allocate memory

mkfs.lustre FATAL: failed to write local files
mkfs.lustre: exiting with 12 (Cannot allocate memory)

---
modules loaded before mkfs:

ldiskfs   254260  0
crc16   6272  1 ldiskfs
jbd62252  2 ldiskfs,ext3
lustre962512  0
lov   458540  1 lustre
mdc   175900  1 lustre
osc   236980  1 lustre
ptlrpc   1222556  6 lustre,lov,mdc,fid,lquota,osc
obdclass  815900  7 lustre,lov,mdc,fid,lquota,osc,ptlrpc
lnet  271500  4 lustre,ksocklnd,ptlrpc,obdclass
lvfs   70488  8 lustre,lov,mdc,fid,lquota,osc,ptlrpc,obdclass
libcfs131700  11 
lustre,lov,mdc,fid,lquota,osc,ksocklnd,ptlrpc,obdclass,lnet,lvfs


strace:


--- SIGCHLD (Child exited) @ 0 (0) ---
unlink("/tmp/run_command_logzB7owp")= 0
mkdir("/tmp/mntiROBOh", 0700)   = 0
mount("/dev/cciss/c0d0p6", "/tmp/mntiROBOh", "ldiskfs"..., 0, NULL) = -1 ENOMEM 
(Cannot allocate memory)
write(2, "mkfs.lustre: Unable to mount /de"..., 71mkfs.lustre: Unable to mount 
/dev/cciss/c0d0p6: Cannot allocate memory
) = 71
rmdir("/tmp/mntiROBOh") = 0
write(2, "\nmkfs.lustre FATAL: ", 20
mkfs.lustre FATAL: )   = 20
write(2, "failed to write local files\n", 28failed to write local files) = 28
write(1, "\n   Permanent disk data:\nTarget:"..., 635) = -1 EPIPE (Broken pipe)
--- SIGPIPE (Broken pipe) @ 0 (0) ---
+++ killed by SIGPIPE +++
---
dmesg:


Lustre: OBD class driver, [EMAIL PROTECTED]
 Lustre Version: 1.9.50
 Build Version: 
1.9.50-1969123119-PRISTINE-.extra.linux-2.6.18-92.1.6-2.6.18-92.1.6-lustre-1.9.50
Lustre: Added LNI [EMAIL PROTECTED] [8/256]
Lustre: Accept secure, port 988
LDISKFS-fs: Unable to create cciss/c0d0p6

___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] mkfs.lustre (can't allocate memory)

2008-07-27 Thread Josephine Palencia

Hi,

I'm redoing a new filesystem and getting the error below.
Please advise.

Thanks,
josephin


[EMAIL PROTECTED] ~]# mkfs.lustre --mgs /dev/cciss/c0d0p6

Permanent disk data:
Target: MGS
Index:  unassigned
Lustre FS:  lustre
Mount type: ldiskfs
Flags:  0x74
   (MGS needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters:

checking for existing Lustre data: not found
device size = 24121MB
2 6 18
formatting backing filesystem ldiskfs on /dev/cciss/c0d0p6
 target name  MGS
 4k blocks 0
 options-J size=964 -q -O dir_index,uninit_groups -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L MGS  -J size=964 -q -O 
dir_index,uninit_groups -F /dev/cciss/c0d0p6
mkfs.lustre: Unable to mount /dev/cciss/c0d0p6: Cannot allocate memory

mkfs.lustre FATAL: failed to write local files
mkfs.lustre: exiting with 12 (Cannot allocate memory)

---
modules loaded before mkfs:

ldiskfs   254260  0
crc16   6272  1 ldiskfs
jbd62252  2 ldiskfs,ext3
lustre962512  0
lov   458540  1 lustre
mdc   175900  1 lustre
osc   236980  1 lustre
ptlrpc   1222556  6 lustre,lov,mdc,fid,lquota,osc
obdclass  815900  7 lustre,lov,mdc,fid,lquota,osc,ptlrpc
lnet  271500  4 lustre,ksocklnd,ptlrpc,obdclass
lvfs   70488  8 lustre,lov,mdc,fid,lquota,osc,ptlrpc,obdclass
libcfs131700  11 
lustre,lov,mdc,fid,lquota,osc,ksocklnd,ptlrpc,obdclass,lnet,lvfs


strace:


--- SIGCHLD (Child exited) @ 0 (0) ---
unlink("/tmp/run_command_logzB7owp")= 0
mkdir("/tmp/mntiROBOh", 0700)   = 0
mount("/dev/cciss/c0d0p6", "/tmp/mntiROBOh", "ldiskfs"..., 0, NULL) = -1 
ENOMEM (Cannot allocate memory)
write(2, "mkfs.lustre: Unable to mount /de"..., 71mkfs.lustre: Unable to mount 
/dev/cciss/c0d0p6: Cannot allocate memory
) = 71
rmdir("/tmp/mntiROBOh") = 0
write(2, "\nmkfs.lustre FATAL: ", 20
mkfs.lustre FATAL: )   = 20
write(2, "failed to write local files\n", 28failed to write local files) = 28
write(1, "\n   Permanent disk data:\nTarget:"..., 635) = -1 EPIPE (Broken pipe)
--- SIGPIPE (Broken pipe) @ 0 (0) ---
+++ killed by SIGPIPE +++
---
dmesg:


Lustre: OBD class driver, [EMAIL PROTECTED]
 Lustre Version: 1.9.50
 Build Version: 
1.9.50-1969123119-PRISTINE-.extra.linux-2.6.18-92.1.6-2.6.18-92.1.6-lustre-1.9.50
Lustre: Added LNI [EMAIL PROTECTED] [8/256]
Lustre: Accept secure, port 988
LDISKFS-fs: Unable to create cciss/c0d0p6
LDISKFS-fs: Unable to create cciss/c0d0p6

___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] FYI: Bugs/Security section: Teragrid lustre-wan

2008-06-26 Thread Josephine Palencia

Hello,

The Bugs &  Security section of the Teragrid lustre-wan wiki below contains the
information and patches for the following:

http://www.teragridforum.org/mediawiki/index.php?title=Lustre-wan:_advanced_features_testing#Bugs_and_Security

1. Cross-realm issue where remote-realm principals are refused (category: bug)
2. Unsafe directory modes in the lustre-source rpms (category: bug/security)
3. Preventing compromised lustre clients from mounting the lustre fs (category:
security)

You're welcome to use them till they are committed to lustre-cvs.

Thanks,
j
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] FYI: Lustre-wan Testing on the Teragrid

2008-05-12 Thread Josephine Palencia

Hi,

There are currently 2 separate projects to test the lustre-wan on the 
Teragrid. Below is the main site.

   http://www.teragridforum.org/mediawiki/index.php?title=Lustre-WAN

The first project  will test the more advanced features of 
lustre/lustre-wan (lustre 1.7.9+) and will stay close to the dev release.
Here's the  direct link.

   
http://www.teragridforum.org/mediawiki/index.php?title=Lustre-wan:_advanced_features_testing

The second project involves testing IU's Data Capacitor on the WAN
(Lustre 1.6+). That site is being created.

Cheers,
-josephin
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss