Hi Cédric,
I'm by no means familiar with Lustre code anymore, but based on the stack
trace and function names, it seems to be a problem with the journal. Maybe try
to do an 'e2fsck -f', which would replay the journal and possibly clean up the
file it has problems with.
Cheers,
Bernd
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss
--
Bernd Schubert
DataDirect Networks
, or at least a kernel with
8kB stack size. RHEL5 has 4kB by default, which is not sufficient and
therefore in early 1.8 versions a patch landed that disallowed NFS exports.
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
Hello Tina,
On 11/12/2010 03:44 PM, Tina Friedrich wrote:
Hello again,
nope, running with / exporting from a server with the patched kernel
running does not change this behaviour at all. mountvers=3 works, 1 and
2 don't.
I can reproduce it, so NFSv2 support got broken. Which issue has
--
Bernd Schubert
DataDirect Networks
with old index and old label
3) restore Objects from the backup
Do you think that would work?
Best regards,
Wojciech
On 22 October 2010 18:52, Bernd Schubert bernd.schub...@fastmail.fm wrote:
Hmm, I would probably format a small fake device on a ramdisk and copy
files
over, run
for sanity reasons it falls back to the on-disk
value, if the values differ too much (1), and secondly I figured out
with those patches there, that using the MDS value is broken (and did not get
broken by patches, but my patches revealed it...).
Cheers,
Bernd
--
Bernd Schubert
DataDirect
2) format old OST with old index and old label
3) restore Objects from the backup
Do you think that would work?
Best regards,
Wojciech
On 22 October 2010 18:52, Bernd Schubert
bernd.schub...@fastmail.fm
in filter_iobuf_get).
Kevin
--
Bernd Schubert
DataDirect Networks
Hello Michael,
On Saturday, October 23, 2010, Michael Kluge wrote:
Hi Bernd,
I get the same message with your kernel RPMs:
In file included from include/linux/list.h:6,
from include/linux/mutex.h:13,
from
is likely to be 16 (with 4 OSS connected).
Hope it helps,
Bernd
--
Bernd Schubert
DataDirect Networks
Hmm, e2fsck didn't catch that? rec_len is the length of a directory entry, so
after how many bytes the next entry follows. You can try to force e2fsck to do
something about that: e2fsck -D
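In command form (the device path below is an illustrative placeholder, not from the original mail, and the OST must be unmounted first):

```shell
# Force a full check (-f) and have e2fsck optimize/rebuild directories (-D),
# which rewrites directory entries and thus their rec_len chains.
# /dev/mapper/ost0 is an example path only.
e2fsck -f -D /dev/mapper/ost0
```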
Cheers,
Bernd
On Friday, October 22, 2010, Wojciech Turek wrote:
Ok, removing and recreating the
, October 22, 2010, Michael Kluge wrote:
Hi Bernd,
I have found a RHEL-only release for this version. It does not compile
on a 2.6.27 kernel :( I actually don't want to go back to 2.6.18 just to
get a new driver.
Michael
On Friday, 22.10.2010, at 13:34 +0200, Bernd Schubert wrote
,
Wojciech
On 22 October 2010 17:15, Andreas Dilger andreas.dil...@oracle.com wrote:
On 2010-10-22, at 5:42, Bernd Schubert bernd.schub...@fastmail.fm wrote:
Hmm, e2fsck didn't catch that? rec_len is the length of a directory
entry, so
after how many bytes the next entry follows
on a new OST?
On 22 October 2010 18:52, Bernd Schubert bernd.schub...@fastmail.fm wrote:
Hmm, I would probably format a small fake device on a ramdisk and copy
files
over, run tunefs.lustre --writeconf /mdt and then start everything (including
all OSTs) again.
Cheers,
On Friday, October
On Friday, October 22, 2010, Andreas Dilger wrote:
On 2010-10-22, at 12:25, Wojciech Turek wrote:
Actually I remember now, Andreas wrote some time ago that when one adds an
OST into the same slot as the old one, the MDS will think that the OST has
objects up to what the old OST had, and when the
/
(just reminds me, I need to upload it to our DDN download site)
Also, do you really want to use data files that might have been zeroed in
their middle? I think, if at all, your recovery will only be useful for small
human-readable text files.
Hope it helps,
Bernd
--
Bernd Schubert
DataDirect
That is normal and probably comes from the page cache, should be about the
same for lustre, ldiskfs, ext4, xfs, etc. It goes down if you specify
-odirect, but that is obviously not optimal on Lustre clients.
Cheers,
Bernd
On Wednesday, October 20, 2010, Andreas Dilger wrote:
Is this client
, 4, 8, etc.). So for RAID6: 4+2 or 8+2, etc.
What about RAID5?
Personally I don't like raid5 too much, but with raid5 it is obviously +1
instead of +2
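The arithmetic behind those power-of-two data-disk counts (the chunk size below is an assumed example, not from the original mail):

```shell
# With RAID6 8+2 and an assumed 128 KiB per-disk chunk, one full stripe of
# data matches the 1 MiB write size Lustre clients typically issue per RPC,
# so full-stripe writes avoid read-modify-write cycles on the RAID.
data_disks=8        # the "8" in 8+2; RAID5 would use data_disks + 1 drives
chunk_kib=128       # assumed chunk size
echo "$((data_disks * chunk_kib)) KiB per full data stripe"
```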
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
the underlying block device, and then
obdfilter-survey to test the local Lustre IO submission path.
Cheers, Andreas
--
Andreas Dilger
Lustre Technical Lead
Oracle Corporation Canada Inc.
--
Bernd Schubert
DataDirect Networks
On Wednesday, October 20, 2010, Andreas Dilger wrote:
On 2010-10-19, at 08:27, Roger Spellman wrote:
I don't understand this comment:
For the MDT, yes, you could potentially use -i 1500 as about the
minimum space per inode, but then you risk running out of space in the
filesystem before
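For scale, the bytes-per-inode arithmetic behind the -i 1500 figure (the MDT size in this sketch is an illustrative example):

```shell
# mke2fs -i 1500 means one inode is created per 1500 bytes of device size;
# an example 1 TiB MDT would therefore be formatted with roughly 733 million
# inodes, so the inode tables alone consume a large share of the space.
fs_bytes=$((1024 * 1024 * 1024 * 1024))   # 1 TiB, example size only
bytes_per_inode=1500
echo $((fs_bytes / bytes_per_inode))
```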
in bugzilla to add support for h/w crc32c on Nehalem CPUs to
reduce this overhead, but still not as fast as no checksum at all.
I think checksums are only visible in ptlrpc CPU time (and mostly only for
reads), but not in the user space benchmark process.
Cheers,
Bernd
--
Bernd Schubert
(default on all
systems except RHEL).
Cheers,
Bernd
PS: Btw, Ciemat has a DDN Lustre system, so you could also send requests to
supp...@ddn.com (please add [Lustre] in the subject line).
--
Bernd Schubert
DataDirect Networks
lustre 1.8.0
On Mon, 18-10-2010 at 11:32 +0200, Bernd Schubert wrote:
Hello Alfonso,
On Monday, October 18, 2010, Alfonso Pardo wrote:
Hello,
I need to export a lustre directory from one lustre-client to another
client, but always get the following message on the NFS server
for a single data stream any
further?
While it could make support more difficult, you could use our DDN Lustre
releases:
http://eu.ddn.com:8080/lustre/lustre/1.8.3/ddn3.3/
Hope it helps,
Bernd
--
Bernd Schubert
DataDirect Networks
still used by Lustre's
o2ib modules.
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
and then immediately
abort recovery for that client.
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
Hello Cory,
On 09/17/2010 11:31 PM, Cory Spitz wrote:
Hi, Bernd.
On 09/17/2010 02:48 PM, Bernd Schubert wrote:
On Friday, September 17, 2010, Andreas Dilger wrote:
On 2010-09-17, at 12:42, Jonathan B. Horen wrote:
We're trying to architect a Lustre setup for our group, and want
the MDS does not get the extents flag if ext4-ldiskfs is
used. I think only beginning with 1.8.4 that is ensured by Lustre itself.
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
Assuming the disk really is empty then, and LAST_ID really is zero,
shall I then leave it at zero, and follow the recommendation of
page 23-14, ie, just shut down again, delete the lov_objid file on
the MDS, and restart the system? Certainly the value at the
correct index (29) is definitely
can try which rate acp reports?
http://oss.oracle.com/~mason/acp/
Also could you please send me your exact bonnie line or script? We could try
to reproduce it on an idle test 9550 with a 6620 for metadata (the 6620 is
slower for that than the ef3010).
Thanks,
Bernd
--
Bernd Schubert
and LAST_ID files.
Hope it helps,
Bernd
--
Bernd Schubert
DataDirect Networks
On Friday, September 03, 2010, Bernd Schubert wrote:
On Friday, September 03, 2010, Bob Ball wrote:
We added a new OSS to our 1.8.4 Lustre installation. It has 6 OST of
8.9TB each. Within a day of having these on-line, one OST stopped
accepting new files. I cannot get it to activate
://bugzilla.lustre.org/show_bug.cgi?id=21376
It has a patch, that also got accepted in upstream tar last week. You may find
updated RHEL5 tar packages on my home page:
http://www.pci.uni-heidelberg.de/tc/usr/bernd/downloads/
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
On Thursday, September 02, 2010, Andreas Dilger wrote:
On 2010-09-02, at 06:43, Tina Friedrich wrote:
Causing most grieve at the moment is that we sometimes see delays
writing files. From the writing clients end, it simply looks as if I/O
stops for a while (we've seen 'pauses' of anything
On Thursday, September 02, 2010, Frederik Ferner wrote:
Bernd Schubert wrote:
On Thursday, September 02, 2010, Frederik Ferner wrote:
we are currently reviewing our backup policy for our Lustre file system
as backups of the MDT are taking longer and longer.
Yes, that is due to the size
On Thursday, September 02, 2010, Frank Heckes wrote:
Hi all,
for some of our OSSes a massive amount of errors like:
Sep 2 20:28:15 jf61o02 kernel: blk_rq_check_limits: over max size
limit.
appearing in /var/log/messages (and dmesg). Does anyone have a clue
how to get to the root
if all those devices have max_sectors_kb tuned to the
maximum?
Also, does that come up with 1.8.4 only? (I have SG_ALL in mind, which was
increased from 255 to 256 and might not be supported by all SCSI host
adapters.)
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
Hello Frederik,
On Wednesday, August 25, 2010, Frederik Ferner wrote:
Hi Bernd,
thanks for your reply.
Bernd Schubert wrote:
On Tuesday, August 24, 2010, Frederik Ferner wrote:
on our MDS we noticed that all memory seems to be used. (And it's not
just normal buffers/cache as far
the correct command options if you recompiled it.
Another reason might be bug 22771, although that should only come up on an
MDS with more memory.
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
:
http://lwn.net/Articles/398846/
It links an older article about it, which should be already available for all:
http://lwn.net/Articles/359158/
And another one:
http://lwn.net/Articles/374424/
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
) also *should* accept
stonith error codes, but in general, I have seen it more than once that
heartbeat-v1 ran into split-brain and started resources on both cluster nodes.
That is something where pacemaker does a much better job.
--
Bernd Schubert
DataDirect Networks
controller pair.
Each controller pair (couplet in DDN terms) usually has 4 servers connected
and fits into single rack in a 300 drive configuration.
So you can get 20GB/s with 3 or 4 racks and 12 or 16 OSS servers, which is
much below your 100 IO nodes ;)
Cheers,
Bernd
--
Bernd Schubert
DataDirect
single threaded. It also does not
support NFS locks.
If it still does not work out, you should enable Lustre debugging and NFS
debugging, and you probably should use Wireshark to see what is going on.
Hope it helps,
Bernd
--
Bernd Schubert
DataDirect Networks
, it is terribly difficult to debug it without
additional tools. I have opened a bugzilla for that, but I don't think I will
have time for those tools any time soon.
https://bugzilla.lustre.org/show_bug.cgi?id=23190
--
Bernd Schubert
DataDirect Networks
On Wednesday, July 14, 2010, Andreas Dilger wrote:
On 2010-07-14, at 13:29, Nate Pearlstein wrote:
Just checking to be sure this isn't a known bug or problem. I couldn't
find a bz for this, but it would appear that tunefs.lustre --print fails
on a lustre mdt or ost device if mounted with
--
Bernd Schubert
DataDirect Networks
some weird effects
between fortran-IO implementations...
David, did you use PGI or another compiler? Last time I had to deal with
Gaussian only PGI was supported, but I have not checked for recent Gaussian
versions.
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 07/08/2010 11:21 PM, Andreas Dilger wrote:
On 2010-07-08, at 14:01, Guy Coates wrote:
Try this script; (It is from Bernd Schubert). It will parse the
per-client proc stats on the mds/oss into something nice and
humanly-readable. It is very
, is this correct?
Ashley.
--
Bernd Schubert
DataDirect Networks
Hi Katya!
On Tuesday, June 22, 2010, Katya Tutlyaeva wrote:
Hi everybody!
Of course, these devices are successfully mounted on OSS, when I move
them using hb_takeover on another OSS (even if I move all devices,
include mdt on second OSS or move these unworking devices on first OSS)
with obdecho).
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
Hello Jonas,
On Monday 07 June 2010, Jonas Ambrus wrote:
Hi Guys,
i tried to compile lustre 1.8.3 on kernel 2.6.22 (vanilla-config).
The configure script of lustre works fine. But when I try to build
lustre, it fails with the following reason:
Applying
--
Bernd Schubert
DataDirect Networks
' suggested.
So if 'lfs find' now used the filesize to determine if a file is really
located on an OST, that would be an improvement. Of course, if it fails at all
with an IO error, it is also not useful ;)
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
it as type lustre and therefore all those nice lfs subcommands will
not work.
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
is compiled patchless against it.
Applying this Lustre patch from bugzilla#15587 should solve the issue without
the need to recompile the kernel:
https://bugzilla.lustre.org/attachment.cgi?id=29116
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
to fail. Another easy
fix: link /etc/mtab to /proc/mounts.
That also happens sometimes without automounter.
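A sketch of that workaround, done against a scratch directory here so it can be run safely (the real target is /etc/mtab itself):

```shell
# Point mtab at the kernel's authoritative mount table so userspace tools
# never see a stale copy; demonstrated in a temp dir instead of /etc.
tmp=$(mktemp -d)
ln -sf /proc/mounts "$tmp/mtab"
readlink "$tmp/mtab"    # prints /proc/mounts
rm -rf "$tmp"
```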
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
upcoming Debian Squeeze
requires 2.6.27 at a minimum.
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
LPU64
# error No word size defined
Please note that I did not test this patch at all yet.
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
On Tuesday 04 May 2010, Ramiro Alba Queipo wrote:
On Tue, 2010-05-04 at 14:16 +0200, Bernd Schubert wrote:
That is bug 22729. A very simple patch (entirely untested) should be:
diff --git a/lnet/include/libcfs/linux/kp30.h
b/lnet/include/libcfs/linux/kp30.h --- a/lnet/include/libcfs/linux
--
Bernd Schubert
DataDirect Networks
if you are interested and I can put a tar ball of
e2fsprogs-sun-ddn on my home page.
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
/ldlm/namespaces/*; do echo 800 > ${i}/lru_size; done
At least that helped all the time before when we had that problem. I hoped it
would be fixed in 1.8.2, but seems it is not. Please open a bug report.
Thanks,
Bernd
--
Bernd Schubert
DataDirect Networks
interfaces in the fabric using the same IP address
(192.168.60.226)...
I guess next time you should run an lnet_selftest and lctl ping.
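A sketch of such a check (the NID is a placeholder built from the duplicated address above, and lctl must be run on a Lustre node):

```shell
# Verify LNET reachability of a server NID; a duplicate IP in the fabric
# typically shows up here as failed or intermittently failing pings.
lctl ping 192.168.60.226@o2ib
```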
Greetings from Tübingen,
Bernd
--
Bernd Schubert
DataDirect Networks
from the
script.
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
/proc/fs/lustre/obdfilter/scratch-OST0018/mntdev
Warning! /dev/dsk/ost08 is mounted.
Warning: skipping journal recovery because doing a read-only filesystem
check.
see here:
https://bugzilla.lustre.org/show_bug.cgi?id=19566
https://bugzilla.lustre.org/show_bug.cgi?id=21359
--
Bernd Schubert
.
From the bug reports it looks like the OST is actually still mounted by
lustre, unbeknownst to Linux and VFS.
Is there a mechanism to unmount it or do I need to reboot?
Erik
On Fri, Jan 15, 2010 at 3:28 PM, Bernd Schubert
bs_li...@aakef.fastmail.fmwrote:
On Friday 15 January 2010, Erik
of Canada, Inc.
--
Bernd Schubert
DataDirect Networks
Hello Robert,
could you please send a mail into our ticket system? Kit or I would then start
to investigate tomorrow.
Thanks,
Bernd
On Monday 11 January 2010, Michael Robbert wrote:
The filename is not very unique. I can create a file with the same name in
another directory or on another
Hello Antonio,
On Wednesday 23 December 2009, Antonio Concas wrote:
Hi, all
Dec 23 11:20:29 mommoti12 kernel: LDISKFS-fs: external journal has bad
superblock
see here:
https://bugzilla.lustre.org/show_bug.cgi?id=21389
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
:35 AM, Bernd Schubert wrote:
On Monday 21 December 2009, Andreas Dilger wrote:
On 2009-12-21, at 11:15, Nick Jennings wrote:
I had another instance of the client kernel panic which I first
encountered a few months ago. This time I managed to get a shot of the
console. Attached is the dmesg
On Tuesday 22 December 2009, Nick Jennings wrote:
On 12/21/2009 07:36 PM, Brian J. Murrell wrote:
Photographs of 25 line console screens are not very often suitable
substitutes for real console logging, unfortunately. Seriously, if you
really want to pursue this issue, you are going to
On Tuesday 22 December 2009, David Dillow wrote:
On Tue, 2009-12-22 at 18:09 +0100, Bernd Schubert wrote:
On Tuesday 22 December 2009, Nick Jennings wrote:
On 12/21/2009 07:36 PM, Brian J. Murrell wrote:
Photographs of 25 line console screens are not very often suitable
substitutes
Lustre installations in Europe are now based on
pacemaker, without any ugly workarounds. But then as I told you before, our
releases also fix bug 19566 already.
--
Bernd Schubert
DataDirect Networks
version, the
workaround for this is to disable lockless truncates.
# on all clients
for i in /proc/fs/lustre/llite/*; do
echo 0 > ${i}/lockless_truncate;
done
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
.*.sync_journal=0, it even
slightly reduced performance. So I wonder if one additionally needs to enable
jbd-async journals?
Thanks,
Bernd
--
Bernd Schubert
DataDirect Networks
some time the OSTs do recover.
ler.c:882:ost_brw_read()) @@@ timeout on bulk PUT after
100+0s r...@81007efa7e00 x7869690/t0
This error message means you have a flaky network. For example, it comes up
if you set a high MTU, but your switch does not support it.
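A quick way to compare MTUs end to end (the interface name is an example; the loopback device is used here only so the line runs anywhere on Linux):

```shell
# The interface MTU must not exceed what every switch port in the path
# supports; check each host's interface via sysfs, e.g.:
cat /sys/class/net/lo/mtu
```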
Cheers,
Bernd
--
Bernd
--
Bernd Schubert
DataDirect Networks
.el5_1_amd64.deb
Cheers,
Bernd
PS: Disclaimer: Whatever packages you may find on my home page, I won't
provide support for these!
--
Bernd Schubert
DataDirect Networks
really didn't have the time to step
in, I had been already far above 16 hours per day that time.
--
Bernd Schubert
DataDirect Networks
19:05:07 +0200
From: Bernd Schubert bs_li...@aakef.fastmail.fm
Subject: Re: [Lustre-discuss] Moving MGS to separate device
To: lustre-discuss@lists.lustre.org
Message-ID: 200910111905.08076.bs_li...@aakef.fastmail.fm
Content-Type: Text/Plain; charset=iso-8859-15
Hello Wojciech,
I already
from SLES11? I thought
ldiskfs is based on ext4 there? So we should have at least 16TiB and I'm not
sure if all the e2fsprogs patches already have been landed to get 64-bit max
sizes?
Thanks,
Bernd
--
Bernd Schubert
DataDirect Networks
block on the disk was modified
by sososd3 AFTER sososd7 first looked at it.
Probably, bug#19566. Michael, which Lustre version do you exactly use?
Thanks,
Bernd
--
Bernd Schubert
DataDirect Networks
logs?
CERROR("Mount %p is still busy (%d refs), giving up.\n",
       mnt, atomic_read(&mnt->mnt_count));
--
Bernd Schubert
DataDirect Networks
and if possible add further
information yourself.
https://bugzilla.lustre.org/show_bug.cgi?id=20402
Thanks,
Bernd
--
Bernd Schubert
DataDirect Networks
On Wednesday 14 October 2009, Michael Schwartzkopff wrote:
We have timeouts of 60 seconds. But we will move to 300. Thanks for the
hint.
Check out my bug report, that might not be sufficient.
--
Bernd Schubert
DataDirect Networks
required for Lustre
functionality) would continue working while lprocfs was disabled until
fixed.
Until now, there was no reason to change that code, but it makes sense
to fix
that now... Could you file a bug on this?
Done, bug 21084
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
On Saturday 10 October 2009, Andreas Dilger wrote:
On 8-Oct-09, at 22:28, Lundgren, Andrew wrote:
Is there a way to set the lru_size to a fixed value and have it stay
that way across mounts?
I know it can be set using:
$ lctl set_param ldlm.namespaces.*osc*.lru_size=$((NR_CPU*100))
On Monday 12 October 2009, Michael Schwartzkopff wrote:
On Monday, 12 October 2009 at 15:54:04, Vadym wrote:
Hello
I'm designing a mail service, so I have only one question:
Can Lustre provide me a fully automatic failover solution?
No. See the lustre manual for this. You need a cluster
-OST0002-osc.lru_size=800
error: conf_param: Invalid argument
And this as well
lctl conf_param testfs-MDT.ldlm.namespaces.testfs-OST0002-osc.lru_size=800
Thanks,
Bernd
--
Bernd Schubert
DataDirect Networks
(https://bugzilla.lustre.org/show_bug.cgi?id=20807) for a
pacemaker agent.
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
.
--
Bernd Schubert
DataDirect Networks
this
files for the first time triggers the 'lvbo' message.
We have third lustre file system which runs on different hardware but the
same lustre version and RHEL version as the affected ones. I can not see
any problems on the third file system.
Wojciech
2009/10/10 Bernd Schubert bs_li
Khind Road
Pune-Maharastra
--
Bernd Schubert
DataDirect Networks
, if for some reason, e.g. evictions, the connection to OSTs gets lost, it
will also reset to the default. We are for now compiling our packages with
LRU disabled.
--
Bernd Schubert
DataDirect Networks
Hello
this link still points to the alpha version, I guess it better should be
redirected as v1.6:
http://downloads.lustre.org/public/lustre/v1.8/
Cheers,
Bernd
--
Bernd Schubert
DataDirect Networks
if you need anything else.
TIA
On Sat, Mar 14, 2009 at 7:35 AM, Bernd Schubert
bernd.schub...@fastmail.fm wrote:
On Saturday 14 March 2009, Mag Gam wrote:
We are having a problem with a MDS server (which also has 1 OST) on the
box.
When the server boots up, we notice
On Saturday 14 March 2009, Mag Gam wrote:
We are having a problem with a MDS server (which also has 1 OST) on the
box.
When the server boots up, we notice there is an ll_mdt process running
at 100% and we keep on waiting close to 10-15 mins. We only have 8
clients. (I assume this is normal
Hello,
ever since the end of last week I have been trying to download the sources of 1.6.7, but I
always get:
We are sorry ...
General Error
We are sorry, but the download system cannot process your request at this
time. Please try again later.
If the problem persists, please report it to Customer