Hi David,
I did some experiments last year with Lustre 1.6.x and a Dell iSCSI
enclosure. It was a little slow (proof of concept mainly) due to sharing MDT
and OST traffic on a single GigE strand, but as long as the operating system
presents a valid block device, Lustre works fine.
hth
Klaus
On
I believe so ... I was using host names with Lustre 1.6.2 and had no issues.
Just make sure DNS/name resolution is working properly and consistently.
Klaus
On 6/2/09 9:04 AM, Robert Olson ol...@mcs.anl.gov etched on stone
tablets:
I'm curious if it's possible / wise to configure client mounts
Hi Michael,
Just want to throw my two cents in with Isaac's posting, as I spent a great
deal of time working with these kinds of features over the course of the
last two years.
In my experience with Lustre 1.6, in the case where multiple NICs were
available, Lustre will default to using the
Looks good! Glad to see Sun is keeping this resource online.
Klaus
On 4/16/09 10:21 AM, Sheila Barthel sheila.bart...@sun.com etched on
stone tablets:
The Lustre Group is pleased to announce the launch of the redesigned
lustre.org wiki, which includes a new top-level design, added pages, and
Hi Roger,
I believe you can connect the OSSs once the MDS has booted, and in fact, I'm
pretty sure that the five in 'connected_clients: 0/5' are in fact your
OSS nodes. Each OST maintains a connection to the MDS while the file system
is mounted, so they will be included in the connection
Hi Mag,
If I'm not mistaken, only qmaster writes to the DB, the execd process relays
queries through a listening daemon using RPC on the qmaster host which
speaks BDB on the back end.
hth,
Klaus
On 1/16/09 4:22 PM, Mag Gam magaw...@gmail.com etched on stone tablets:
Thanks Andreas.
We
Hi folks,
Lustre doesn't support any inherent link aggregation; it simply utilizes the
device node the OS presents. If this is a bonded NIC, it will use it no
problem, but the underlying device driver takes care of load balancing and
distribution.
I've used Lustre 1.6.x quite successfully with
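The kind of setup described above has two pieces: the OS builds the bond, and LNET just uses the resulting device. A minimal sketch follows; the interface names and the bonding mode are assumptions for illustration, not from the original post:

```text
# /etc/modprobe.conf -- hypothetical example
alias bond0 bonding
options bond0 mode=802.3ad miimon=100

# Lustre sees bond0 as an ordinary NIC:
options lnet networks=tcp0(bond0)
```

The bonding driver (802.3ad/LACP here) handles all load balancing and distribution; Lustre itself never touches the slave interfaces.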
Hi Joseph,
Lustre attaches itself to the first listed/available interface, in your
case, ib0.
I don't remember off the top of my head if there is (or what the right way
is) to do what you want to do.
hth,
Klaus
On 11/13/08 4:18 PM, Joseph Farran [EMAIL PROTECTED] etched on
stone tablets:
Hi Timh,
If you're using Linux-HA, you can configure how quickly failover takes
place. I have mine set to 90 seconds before the primary is marked dead and
the secondary takes over.
When this occurs, any Lustre transactions not yet in flight will block until
the ones that were in progress at the
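For reference, the 90-second figure above corresponds to Heartbeat's deadtime directive; the other values in this fragment are assumed placeholders, not from the post:

```text
# /etc/ha.d/ha.cf -- minimal sketch
keepalive 2      # heartbeat interval, in seconds
deadtime 90      # declare the primary dead after 90 seconds of silence
initdead 180     # allow extra settling time right after boot
auto_failback off
```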
Hi Lukas,
Do you want to use these interfaces together as a logical unit? The syntax
below would construct two separate LNET networks.
Additionally, you would need to qualify the mount path on the client side to
bind Lustre to a specific LNET instance, i.e. 'mount /mnt/lustre
[EMAIL
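As a rough sketch of the distinction (interface names and the MGS address are made up for illustration): two entries in the lnet networks option create two separate LNET networks, and the client picks one by qualifying the MGS NID in the mount source:

```text
# /etc/modprobe.conf -- two separate LNET networks
options lnet networks=tcp0(eth0),tcp1(eth1)

# client mount bound to the tcp0 instance via the MGS NID:
mount -t lustre 192.168.1.10@tcp0:/lustre /mnt/lustre
```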
Hi,
You interconnect OSS1 and OSS2, OSS2 and OSS3, OSS3 and OSS4, and OSS4 and
OSS1 together via an out-of-band communication fabric, install Linux-HA
heartbeat software to monitor mount points, and configure nodes to take over
each other's mounts in case of failure.
The implementation I have
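One plausible sketch of that arrangement, in Heartbeat v1 haresources form; node names, block devices, and mount points are all hypothetical, and each line names the preferred owner of a mount that its ring neighbour can take over:

```text
# /etc/ha.d/haresources -- hypothetical example
oss1 Filesystem::/dev/sdb1::/mnt/ost0::lustre
oss2 Filesystem::/dev/sdb1::/mnt/ost1::lustre
```

Since Heartbeat v1 pairs are two-node, a four-OSS ring is really four overlapping two-node pairs, one per out-of-band link.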
Hi Robert,
You'd need to adjust the lnet options line in /etc/modprobe.conf to force
Lustre to bind to all your NICs, I believe it binds to the first one
available unless instructed otherwise.
Try this:
-- cut --
options lnet networks=tcp0(eth1),tcp1(eth2),tcp2(eth3)
-- cut --
which will
, especially in
digital media where auditability keeps lawyers from the MPAA and the big
studios at bay.
cheers,
Klaus
On 8/18/08 9:13 PM, Andreas Dilger [EMAIL PROTECTED] did etch on stone
tablets:
On Aug 18, 2008 17:18 -0700, Klaus Steden wrote:
Hrm. Who should I contact to find out more
Hi Andreas,
Will this be compliant with Trusted Computing standards? i.e. will it be
possible to use this information for auditing purposes?
thanks,
Klaus
On 8/18/08 3:43 AM, Andreas Dilger [EMAIL PROTECTED] did etch on stone
tablets:
On Aug 09, 2008 05:06 -0700, daledude wrote:
Is there is
Hi Andreas,
Hrm. Who should I contact to find out more, then?
thanks,
Klaus
On 8/18/08 4:44 PM, Andreas Dilger [EMAIL PROTECTED] did etch on stone
tablets:
On Aug 18, 2008 12:53 -0700, Klaus Steden wrote:
Will this be compliant with Trusted Computing standards? i.e. will it be
possible
:}
Thanks,
Ron
On Aug 14, 4:33 pm, Klaus Steden [EMAIL PROTECTED] wrote:
Yes. There is an entry in the manual on this topic. You'll have to stop
Lustre and update the MGS configuration, but it's a pretty quick operation.
cheers,
Klaus
On 8/14/08 2:29 PM, Ron [EMAIL PROTECTED] did etch
Yes. There is an entry in the manual on this topic. You'll have to stop
Lustre and update the MGS configuration, but it's a pretty quick operation.
cheers,
Klaus
On 8/14/08 2:29 PM, Ron [EMAIL PROTECTED] did etch on stone tablets:
Hi,
We have set up a couple of test systems where the OSTs are
Hello Mag,
Some people on-list have tried it before, but it generally performs poorly.
I believe HP contributed some optimizations, but the consensus as recently
as the start of this year was that it will generally suck. Check the list
archives for more information, I believe CIFS was a better
Hi Johnyla,
Use 'lfs getstripe file', i.e.
root# lfs getstripe /mnt/lustre/testfile1
OBDS:
0: lustre-OST0000_UUID ACTIVE
1: lustre-OST0001_UUID ACTIVE
/mnt/lustre/testfile1
obdidx   objid    objid     group
0        315726   0x4d14e   0
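The object id in that last line appears twice, once in decimal and once in hexadecimal, so the two printed forms can be cross-checked directly:

```python
# objid from the lfs getstripe output above, in its two printed forms
objid = 315726
print(hex(objid))  # -> 0x4d14e
```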
Hi Brock,
I've been using Sun X2200s with Lustre in a similar configuration (IPMI,
STONITH, Linux-HA, FC storage) and haven't had any issues like this
(although I would typically panic the primary node during testing using
Sysrq) ... is the behaviour consistent?
Klaus
On 7/31/08 1:57 PM, Brock
netdump is indeed good for this, but you may have to take two or three
cracks at it ... it doesn't always dump the complete core image, and you
can't really do a whole lot with the incomplete version.
Klaus
On 7/31/08 5:50 PM, Kilian CAVALOTTI [EMAIL PROTECTED] did etch on
stone tablets:
On
On 7/29/08 6:44 AM, Aaron Knister [EMAIL PROTECTED] did etch on
stone tablets:
Oh and to answer your question - an OST cannot be mounted twice simultaneously.
... well, you can mount it from two locations, you're just inevitably
going to corrupt the heck out of the volume in question ...
Hi Daniel,
I don't believe so.
Various people have posted informal results from their own tests in the
field, but none have ever been formally collated. There are some rough
numbers on the Wikipedia page for CFS for GigE, IB, and 10GigE, but they
assume particular things about configuration,
Hi Mario,
Lustre will, if not instructed otherwise, bind to all available NICs on the
system. I've used Lustre extensively with LACP aggregate groups, and it
performs quite well.
Configuring multiple NICs from the same host into the same VLAN is something
of a nonsensical configuration unless
Hello Hans-Juergen,
I usually try a combination of searching the process table for any running
tasks that are blocking the umount request and killing them, then doing a
'umount -k', and then using 'lctl modules |awk '{print $2}' |xargs rmmod -v'
to deactivate the kernel modules. This last step
You mean ... you have a FUSE client and a FUSE server for the same file
system running on the same node?
Klaus
On 7/8/08 11:38 AM, Kevin Fox [EMAIL PROTECTED] did etch on stone
tablets:
FUSE did it somehow, didn't they?
On Mon, 2008-07-07 at 14:12 -0700, Brian J. Murrell wrote:
On Mon,
Hello Heiko,
If I'm not mistaken, 'MDS' refers to the metadata _server_, while 'MDT'
refers to the metadata _target_, i.e. the distinction is akin to that
between 'OSS' and 'OST'. The MDS is a server node; the MDT is the volume
where all the metadata for your volume is stored.
The handbook
Hopefully this isn't a stupid question ... but have you considered using
Lustre 1.6 instead, if it's an option? It's much, much easier to work with.
As for failover itself ... Lustre provides redundancy in the design, i.e.
you can have a secondary MDS that comes online when the primary has
I don't think it would
be much, such that it could share spindles with the journal for the
MDS file system?
Hrm. Given its relatively low use, I'd think that would be fine.
I have a question ... if the MGS is used so infrequently relative to the use
of the MDS, why is it (is it?)
Hi Mark,
See my comments inline below.
cheers,
Klaus
On 6/14/08 11:22 AM, Mark True [EMAIL PROTECTED] did etch on stone
tablets:
Hello!
I am new to the list, but I have been researching Lustre for quite some time
and finally have an occasion to use it. I am trying to do some capacity
Yes, if you export the CFS volume from one of the client nodes using Samba,
you'll be able to access it via CIFS from a Windows client.
Performance will be well below customary Lustre performance, though, so
don't expect miracles. :-)
cheers,
Klaus
On 6/11/08 12:50 PM, [EMAIL PROTECTED] [EMAIL
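A minimal share for that kind of gateway setup might look like the fragment below; the share name and mount path are assumptions, not from the original thread:

```text
# smb.conf on the Lustre client acting as the CIFS gateway
[lustre]
   path = /mnt/lustre
   read only = no
   browseable = yes
```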
Hi Trupti,
It depends on how your failover is implemented. The bottom line is that if
you have a transaction in-flight when your MDT is disconnected, all new
transactions will block until queued and in-flight transactions either
complete or time out. If your failover window is a few seconds or
Hi Brock,
I've got a Sun StorageTek array hooked up to one of our clusters, and I'm
using labels instead of multi-pathing. We've got it hooked up in a similar
fashion as Stuart; it's a bit slow and sloppy when initializing, but it
works well enough and there are no problems once OSTs are online.
(active/passive) will have two connections
each to the same lun, so there could be issues.
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
[EMAIL PROTECTED]
(734)936-1985
On Jun 5, 2008, at 7:52 PM, Klaus Steden wrote:
Hi Brock,
I've got a Sun StorageTek array hooked
Hi Brock,
I'm experimenting with a Dell iSCSI array in one of our labs here ... so
far, it behaves pretty typically for a Lustre, although the performance
isn't blazing -- but that's due to limitations of the network
infrastructure.
I didn't notice anything in the iSCSI gear that would indicate
Hello there,
We had a bit of an accident in one of our labs earlier today, and it
effectively destroyed one of the OSTs in the Lustre file system. From what I
can figure (I wasn't there at the time), one of the OSSes re-provisioned
itself accidentally, and installed its OS information on one of
Hi there,
Has anyone out in the Lustre community set theirs up in an environment that
used Juniper switches? We're testing one out in the lab with ours, and
something about its configuration isn't working. The same setup has been
tested and operated successfully with Extreme, Cisco, and Alcatel,
Hi Jim,
Out of curiosity (and bear with me if this is a stupid question), do you
know if anyone's integrated LMT with Ganglia? The clusters we have that use
Lustre are all ROCKS-based, which uses Ganglia for monitoring ... I'm not
familiar with Cerebro, so I'm not sure how you're using it, but
Hi Andrew,
1. No.
2. Not sure, check the Lustre manual for info on routing. Assuming TCP/IP
can see the whole path, it should work once the configuration for Lustre is
correct.
On 3/7/08 12:55 PM, Lundgren, Andrew [EMAIL PROTECTED] did etch
on stone tablets:
Is there any restriction that
Hi Jim,
I use bonding in one of our configurations here (LACP-based, to an Extreme
Summit series switch), and the overhead is not bad. My best performance test
so far provided about 340-350 MB/s sustained read performance across two OSS
nodes, each with two GigE striped together using LACP for a
PROTECTED] did etch on stone
tablets:
On Feb 14, 2008 11:17 -0800, Klaus Steden wrote:
Here's a mount line from our first OSS node:
LABEL=lustre-OST0000  /mnt/lustreost0  lustre  defaults         0 0
LABEL=lustre-OST0001  /mnt/lustreost1  lustre  defaults,noauto  0 0
It has
Hi there,
Just a quick question ... what is the official sending address of this list?
I'm trying to write filter rules to automatically route messages from it
into my Lustre-list folder ... but some messages are sent by
[EMAIL PROTECTED], while others come in from
[EMAIL PROTECTED]
Is there an
Hello,
I'm seeing some interesting behaviour from one of the nodes in our cluster
when two applications attempt to read from the Lustre. Specifically, one
application is a real time video player, and the other is just interacting
with the file system conventionally ... but if the player is