On Jul 20, 2009 23:45 -0400, Brian J. Murrell wrote:
> On Mon, 2009-07-20 at 23:41 -0400, Mag Gam wrote:
> > Other than DRBD and Hot standby are there any other alternatives? We
> > want to have a redundant copy of our data and was wondering if rsync
> > is the only way to accomplish this.
>
> Un
Dear list,
I have gotten over 19000 quota-related errors on one MDS since 18:00
yesterday like:
Jul 20 18:24:04 * kernel: LustreError: 10999:0:(quota_master.c:507:mds_quota_adjust()) mds adjust qunit failed! (opc:4 rc:-122)
Jul 20 18:29:27 * kernel: LustreError: 11007:0
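For anyone decoding these messages: the rc:-122 is a negative Linux errno, and 122 is EDQUOT ("Disk quota exceeded"), which is consistent with a quota adjustment failing. A quick sanity check (python3 used here purely as an errno lookup table):

```shell
# Decode errno 122 into its symbolic name and message (Linux)
python3 -c 'import errno, os; print(errno.errorcode[122], "-", os.strerror(122))'
# prints: EDQUOT - Disk quota exceeded
```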
On Mon, 2009-07-20 at 23:41 -0400, Mag Gam wrote:
> Other than DRBD and Hot standby are there any other alternatives? We
> want to have a redundant copy of our data and was wondering if rsync
> is the only way to accomplish this.
Until the replication feature is available, rsync (or a suitable
replacement) is pretty much the only option.
Other than DRBD and Hot standby are there any other alternatives? We
want to have a redundant copy of our data and was wondering if rsync
is the only way to accomplish this.
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lust
On Jul 20, 2009 04:02 -0700, Dan wrote:
> We were migrating our MDS to a new machine (done this before
> successfully on another file system). I formatted the new RAID 10 with
> 1.6.7.2, up from 1.6.5.1 on the old MDS. Copied all files and ran
> get/setfattr, then deleted CATALOG and OBJECTS/*.
On Jul 20, 2009 13:20 +0300, Ender Güler wrote:
> Are there any ways of detecting the problematic file names from the MDS/OSS
> syslog messages? Or, to be more specific, is there any way to find a map of
> file name to inode number, or file name to object id, or inode number to
> object id? I'm try
I added the non-root user to the sudoers list, but this does not seem like a
good fix. Also, I get an 'identifier removed' error when mounting Lustre. I
would like to verify one command before trying it:
tunefs.lustre --param mdt.group_upcall=NONE /tmp/paris-mdt
Is it correct? How do I add users on the MDS to remove this error?
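For reference, a cautious sketch around the command being verified — the device path is the one from the message; this is an assumption about procedure, not a confirmed answer, and would be run on the MDS with the MDT device unmounted:

```shell
# Print the parameters currently stored on the device, changing nothing:
tunefs.lustre --dryrun /tmp/paris-mdt

# Then apply the parameter asked about, disabling the group upcall:
tunefs.lustre --param mdt.group_upcall=NONE /tmp/paris-mdt
```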
On Jul 20, 2009 11:27 +0700, lesonus wrote:
> I'm a newbie; I installed Lustre 1.8 on CentOS 5.2 with some OSTs.
> I have a question: when one OST fails, the whole system cannot access
> the files whose stripes are placed on that OST.
> So, what mechanism keeps a Lustre FS safe when one OST fails?
I believe I may have sorted this out.
The 3 frontend clients have two interfaces, eth0 and eth1. They also
have an extra (frontend) IP bound to lo:1 for IPVS-based load
balancing (this works by changing some ARP-related settings, if you've
never worked with IPVS in gatewaying / direct-routing mode).
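For readers who have not met IPVS direct routing: the "ARP-related settings" are usually the arp_ignore/arp_announce sysctls from the standard LVS-DR recipes, which stop the real servers from answering ARP for the virtual IP held on lo:1. A generic sketch, not the poster's exact configuration:

```shell
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```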
How do I allow non-root users to mount a Lustre file system? I have an entry
in fstab as:
'mg...@tcp0:/paris /mnt/paris-samurai lustre user'.
It's not working, though. Are there any other methods?
Thanks,
Lisa Zhang.
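On the sudoers approach from the earlier message: it can at least be narrowed so the user may run only the two exact commands rather than arbitrary ones (a sketch — the username and file name are hypothetical; the mount point is taken from the fstab line above). The plain `user` fstab option is generally not enough here, since mount.lustre still needs root privileges.

```
# /etc/sudoers.d/lustre-mount (hypothetical file and username):
# allow exactly the mount/umount of this one filesystem, nothing else
lisa ALL=(root) NOPASSWD: /bin/mount /mnt/paris-samurai, /bin/umount /mnt/paris-samurai
```

The user then mounts with `sudo mount /mnt/paris-samurai`.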
Not to continue an off-topic thread, but how big was your Gluster
deployment? I am curious because we found it unusable with ~50 TB and 5-10
million files (even though our goal was several hundred TB).
Jordan
> Hi all,
>
> Regarding the comparison of Lustre with GlusterFS, I have the
> following
On Mon, 2009-07-20 at 17:50 +0200, Arne Wiebalck wrote:
> Hi Brian,
Hi Arne,
> That's what I thought :)
You have found one of our deviations though.
> The Lustre kernel:
>
> -->
> [root~]# grep -i iscsi /boot/config-2.6.18-128.1.6.el5_lustre.1.8.0.1smp
> # CONFIG_SCSI_ISCSI_ATTRS is not set
>
Hi Brian,
> We try to stick as closely as we can to the vendor's selected options.
That's what I thought :)
I'm asking as (if I am not mistaken and amongst other things)
iSCSI for instance is enabled for a standard RHEL5 kernel, while
it is disabled for the RHEL5 Lustre kernel.
So I can foll
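The full set of deviations between the two kernels can be listed by diffing the config files. A runnable sketch with tiny stand-in files; the real inputs would be the /boot/config-* files for the vendor and Lustre kernels:

```shell
#!/bin/sh
# Stand-ins for /boot/config-<vendor> and /boot/config-<lustre>:
cat > /tmp/config-vendor <<'EOF'
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_EXT3_FS=y
EOF
cat > /tmp/config-lustre <<'EOF'
# CONFIG_SCSI_ISCSI_ATTRS is not set
CONFIG_EXT3_FS=y
EOF

# Every differing line is a deviation between the two kernels;
# diff exits 1 when differences exist, hence the `|| true`.
diff /tmp/config-vendor /tmp/config-lustre || true
```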
On Mon, 2009-07-20 at 11:53 +0200, Arne Wiebalck wrote:
> Dear all,
Hi,
> what determines the options selected for the distributed Lustre
> kernels?
We try to stick as closely as we can to the vendor's selected options.
> I'm asking as (if I am not mistaken and amongst other things)
> iSCSI for
I'm a newbie; I installed Lustre 1.8 on CentOS 5.2 with some OSTs.
I have a question: when one OST fails, the whole system cannot access
the files whose stripes are placed on that OST.
So, what mechanism keeps a Lustre FS safe when one OST fails?
(Is there a mechanism for RAID across OSTs, like RAID across HDDs?)
Thanks
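Some background while the question waits for a proper answer: only the files with a stripe on the failed OST become unreachable, and a client can list exactly those (a sketch — the mount point and OST UUID are placeholders for your system):

```shell
# Which OSTs/objects back one particular file?
lfs getstripe /mnt/lustre/some/file

# Which files have a stripe on the failed OST (and are thus affected)?
lfs find /mnt/lustre --obd lustre-OST0003_UUID
```

Redundancy against losing an OST outright comes from RAID beneath each OST plus OST failover pairs, not from striping itself.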
Hi all,
We were migrating our MDS to a new machine (done this before
successfully on another file system). I formatted the new RAID 10 with
1.6.7.2, up from 1.6.5.1 on the old MDS. Copied all files and ran
get/setfattr, then deleted CATALOG and OBJECTS/*. It mounted w/o errors
so I pointed the O
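For context, the get/setfattr step referred to above is normally the EA save/restore pass from the device-level MDS backup recipe; a sketch with assumed mount points (both MDT partitions mounted directly as ldiskfs during the migration):

```shell
# Save every extended attribute (striping EAs included) from the old MDT:
cd /mnt/mds_old && getfattr -R -d -m '.*' -e hex -P . > /tmp/ea.bak

# Replay them onto the freshly copied tree on the new MDT:
cd /mnt/mds_new && setfattr --restore=/tmp/ea.bak
```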
Hi there,
Are there any ways of detecting the problematic file names from the MDS/OSS
syslog messages? Or, to be more specific, is there any way to find a map of
file name to inode number, or file name to object id, or inode number to
object id? I'm trying to understand the internals of Lustre and s
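On the name <-> inode half of the question, ordinary POSIX tools already work; a runnable sketch on throwaway paths (the Lustre-specific half, file name -> OST object ids, comes from `lfs getstripe FILE` on a client and is only noted in a comment here):

```shell
#!/bin/sh
# Throwaway demo tree; on Lustre this would be a path under the client mount.
mkdir -p /tmp/inode-demo
echo x > /tmp/inode-demo/f

# file name -> inode number
INUM=$(ls -i /tmp/inode-demo/f | awk '{print $1}')
echo "inode: $INUM"

# inode number -> file name (reverse lookup by scanning the tree)
find /tmp/inode-demo -inum "$INUM" -print

# On a Lustre client, `lfs getstripe /tmp/inode-demo/f` would additionally
# print the OST index and object id for each stripe of the file.
```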
Dear all,
what determines the options selected for the distributed Lustre
kernels?
I'm asking as (if I am not mistaken and amongst other things)
iSCSI for instance is enabled for a standard RHEL5 kernel, while
it is disabled for the RHEL5 Lustre kernel.
Thanks,
Arne