On 28 sept. 2012, at 09:03, Alfonso Pardo wrote:
> If I need more inodes on my OSTs, I am in big trouble, because I would
> need to reformat all the OSTs in my production storage environment.
>
> Any ideas on how to increase the number of inodes on my OSTs without reformatting?
The number of inodes is decided at mkfs time.
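For example, a smaller bytes-per-inode ratio at format time yields more inodes
(a sketch only; the fsname, mgsnode, device, and ratio below are illustrative):
# mkfs.lustre --ost --fsname=testfs --mgsnode=mgs@tcp0 \
      --mkfsoptions="-i 65536" /dev/sdb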
Cheers,
Johann
On Thu, Aug 23, 2012 at 09:10:37AM -0400, Ken Hornstein wrote:
> >When you refer to ia64, are you referring to the itanium systems?
>
> I'm referring to systems where "uname -p" returns "ia64". Is that
> itanium? No idea.
Yes, ia64 is the Itanium processor.
on RHEL6 lustre clients.
That said, we had to disable page migration (see LU-130) since we need to
implement our own page migration handler.
Cheers,
Johann
This patch still does
not bring you where you want since the 1.8 MDS still replies with QIF_SPACE. To
address this, you will have to patch the mds_get_dqblk() function in
lustre/quota/quota_master.c not to call mds_get_space().
HTH
Johann
if your problem is
fixed?
Cheers,
Johann
LU-1052 is the jira ticket for 2.6.18-308.1.1 support. There is a patch here
which should help you to build against this kernel.
Johann
On Mon, Feb 06, 2012 at 01:19:13PM -0800, My Lustre wrote:
> So the question is: Is there any way to delete or recreate this file?
You can try "unlink 100MB.bin". Unlike rm, the unlink command won't stat the
file to check whether it is a directory.
Cheers,
Johann
There is
likely an error message from ldiskfs earlier in the log.
Cheers,
Johann
ration. Have
you tried to disable that?
# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
[always] never
# echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
Lustre also does not handle page migration properly, which is why we fail all
page migration attempts for now (see LU-130).
In lustre 2.x, this must be mdd.quota_type (instead of mdt.quota_type). A patch
was landed to master some time ago (will be available in 2.2) to interpret
mdt.quota_type as mdd.quota_type transparently, see
http://review.whamcloud.com/#change,354.
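For example (a sketch with an illustrative filesystem name; as explained above,
the mdd prefix is what pre-2.2 servers expect):
# lctl conf_param testfs.mdd.quota_type=ug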
Cheers,
Johann
On Thu, Jan 05, 2012 at 01:35:1
> Hmm, what do you mean?
My point is just that writing to a hole in a sparse file is not any different
than writing at the end of a file and increasing its size. In both cases we
have to allocate blocks and the write can fail with ENOSPC.
Johann
> > I may have hit it with Lustre 1.8.5.
This particular bug is supposed to have been fixed since 1.8.4.
> > But, again, my application was writing into sparse files so the space was
> > already allocated... and the sparse files haven't grown.
Lustre (like most filesystems) does not allocate blocks for holes in sparse
files, so writing into those holes still requires new allocations.
The master branch is currently lustre 2.1. Run 'git checkout 2.1.0-RC1' to
access the latest tag.
Cheers,
Johann
The fix is still pending approval on
Oracle's side (see bugzilla 24508).
Cheers,
Johann
For more information, please refer to the lustre
manual:
http://wiki.lustre.org/manual/LustreManual20_HTML/LustreProc.html#50438271_pgfId-1296529
Cheers,
Johann
1.8.6 & 1.8.6-wc1.
Cheers,
Johann
2:128.174.5.100@tcp
But a tcp NID is registered for the OST. Is this intended?
If so, have you configured lnet on the MDS to use tcp?
Cheers,
Johann
There is a
memory leak in the same part of the code.
Johann
The patch I attached should really work with 1.8.5.
> Is there a download location for lustre source for
> 1.8.6? I don't see it on lustre.org.
AFAIK, 1.8.6 has not been released yet.
Johann
You only need to apply the patch to the lustre clients (only the lustre-modules
rpms will be modified). No need to rebuild the kernel.
Johann
This bug was introduced in 1.8.5, see bugzilla ticket 24508 & jira ticket
LU-286. A fix is available here: http://review.whamcloud.com/#change,506
Johann
On Wed, May 04, 2011 at 04:05:56PM +0200, DEGREMONT Aurelien wrote:
> >> if client/server do not re-send their RPC.
> >>
> > To be clear, clients go through a disconnect/reconnect cycle and eventually
> > resend RPCs.
> >
> I'm not sure I understand clearly what happens there.
I just mean
On Wed, May 04, 2011 at 01:37:14PM +0200, DEGREMONT Aurelien wrote:
> > I assume that the 25315s is from a bug
BTW, do you see this problem with both extent & inodebits locks?
> (fixed in 1.8.5 I think, not sure if it was ported to 2.x) that calculated
> the wrong time when printing this error message.
> (import.c:1137:ptlrpc_connect_interpret()) recovery of crn-OST0007_UUID on
> 10.13.24.91@o2ib failed (-16)
Both OST0007 & OST0013 return EBUSY. Any messages or watchdogs in the OSS logs
(i.e. 10.13.24.9{1,2}@o2ib)?
Johann
On Mon, May 02, 2011 at 07:44:57PM +0900, Norimichi SUZUKI wrote:
> [root@mds1 log]# e2fsck -fp /dev/mapper/mpath1p1
> e2fsck: MMP: fsck being run while trying to open /dev/mapper/mpath1p1
> lustre-MDT:
Is e2fsck already running? If not, you can run 'tune2fs -f -E clear-mmp
/dev/mapper/mpath1p1' to clear the MMP flag.
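If no other node is actually using the device, a typical recovery sequence
would be (a sketch, reusing the device path from the quoted output):
# tune2fs -f -E clear-mmp /dev/mapper/mpath1p1
# e2fsck -fp /dev/mapper/mpath1p1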
building lustre module on ubuntu natty.
Lustre supports kernels up to 2.6.32 and Ubuntu Natty uses a 2.6.38 kernel.
Johann
lctl conf_param $FSNAME-MDT0000.mdt.group_upcall=/usr/sbin/l_getgroups
($FSNAME must be replaced with the name of your filesystem)
Johann
On Thu, Mar 24, 2011 at 04:22:25PM -0500, Mike Hanby wrote:
> I forgot to point out, both the clients and servers are using the Lustre
> official RPMs for EL5.
>
> Also, on the clients, the "l_getgroups -d " reports the correct GIDs
> for my user.
Could you please run 'lctl get_param mds.*.group_upcall'?
Hi Tien,
On Tue, Mar 15, 2011 at 10:56:19AM -0700, Tien Nguyen wrote:
> Filesystem kbytes quota limit grace files quota limit grace
> /share10 0 1 - 1* 0 1 -
>
> As you can see, in the files section there is "1*"; in this case, what
> does this number (1 in
On Tue, Mar 15, 2011 at 01:26:44PM +0100, Johann Lombardi wrote:
> Arr, the fix works well with sparse OST indexes, but not with deactivated
> OSTs. I'm sorry about that. I will have this fixed.
FYI, i have filed a bug for this issue:
http://jira.whamcloud.com/browse/LU-129
It shou
with sparse OST indexes, but not with deactivated OSTs.
I'm sorry about that. I will have this fixed.
Cheers,
Johann
been fixed in 2.1?
>
> any idea on how to proceed with getting quotas going, short of reformatting
> all the OSTs again.
>
> we are running lustre1.8.4
This problem was fixed in 1.8.5, check bugzilla ticket 21174 for more
information.
Cheers,
Johann
On Fri, Mar 11, 2011 at 04:52:21PM +0000, Gregory Matthews wrote:
> That sounds to me like it is only hitting the MDT and not the OSTs.
No, it hits both. Fortunately, quotacheck is run in parallel on the MDT and the
OSTs, that's why it scales well.
Johann
5 minutes too. The filesystem is distributed across 28 OSS
(168 OSTs of 3.6TB each).
So you should be able to run your quotacheck quite quickly.
It has not been properly tested and I am not aware of anyone using it in
production.
In any case, this would just be a temporary solution until 2.1 is available.
Cheers,
Johann
On Thu, Mar 10, 2011 at 11:51:44AM -0600, David Noriega wrote:
> I've been reading up on setting up quotas and it looks like lustre needs
> to be shut down for that as it scans the entire filesystem. The thing
The problem is that accounting can be wrong if files/blocks are allocated/freed
during the scan.
On Thu, Mar 10, 2011 at 10:02:03AM -0600, Daniel Mayfield wrote:
> Pid: 12935, comm: ptlrpcd-brw Not tainted 2.6.32.28-2.rgm #1 PowerEdge R610
^^^ ^^^
Please note that 2.0 does not support 2.6.32 kernels. 2.1 will.
> RIP: 0010:[] [] sg_next+0x3/
On Sun, Mar 06, 2011 at 07:20:32PM -0800, Samuel Aparicio wrote:
> ... don't see a journal_dev option in mount.lustre although the path to the
> device is hardcoded at mkfs time ...
Although the -o journal_dev=0xMMmm option is not documented in the man page, it
should still work with mount.lustre.
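For example, if the external journal lives on /dev/ram1 (major 1, minor 1,
i.e. 0x0101), something like this should work (device paths are hypothetical):
# mount -t lustre -o journal_dev=0x0101 /dev/sdc /mnt/ost0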
On Thu, Feb 24, 2011 at 10:48:32AM -0700, Mervini, Joseph A wrote:
> Quick question: Has runtime modification of the number of OST threads been
> implemented in Lustre-1.8.3?
Yes, see bugzilla ticket 18688. It was landed in 1.8.1.
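If I remember correctly, the thread count can then be tuned at runtime via lctl
(a sketch; please double-check the exact parameter name on your system with
'lctl get_param -N ost.*'):
# lctl set_param ost.OSS.ost_io.threads_max=256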
Cheers,
Johann
update the
configuration logs on the MGS:
http://wiki.lustre.org/manual/LustreManual18_HTML/ConfiguringLustre.html#50651184_pgfId-1303457
HTH
Cheers,
Johann
for the LDD to
be rewritten. I don't think we are in this case, so I would change the label on
the new device with e2label.
HTH
Cheers,
Johann
ormatted OSS
> (all /dev/sdX on both servers are the same) will work?
Yes, the procedure is detailed in the manual:
http://wiki.lustre.org/manual/LustreManual18_HTML/LustreTroubleshooting.html#50651190_pgfId-1291458
Johann
with 1.8.5 (which supports
SLES11 SP1) already has most of the patches required for lustre 2.0.0. I think
it would have been less painful to start from there and to add the missing
patches (e.g. data_in_dirent.patch).
Cheers,
Johann
On Mon, Feb 21, 2011 at 04:42:45PM +0100, Alvaro Aguilera wrote:
> inside that directory there are only files for RedHat5 and SLES11.
>
> Is SLES10 still supported?
Yes, but only on the client side:
http://wiki.lustre.org/index.php/Lustre_2.0#Lustre_2.0_Matrix
Cheers,
Johann
> interface ib0: it's down
Although IP is not used for any communication, the o2ib LND requires IP over IB
addresses to identify nodes.
That said, you should definitely not get a kernel panic for such a
misconfiguration, this is really a bug.
Cheers,
Johann
On Sun, Jan 09, 2011 at 07:08:02PM +0530, Rajendra prasad wrote:
> I read the post on this subject. But i am getting this error in my lustre
> client and i found that the respective ost has reached 81% where it has more
> than 25 GB available. I am finding this by using lfs df -h . Due to this i
>
On Wed, Dec 08, 2010 at 09:41:22PM +0530, akshar bhosale wrote:
> Thank you, but which quota is this? is this the user level quota for user
> home area?
Yes, the user has reached the quota limit set by the administrator. You can
check the current usage & limit with the lfs quota command.
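For instance, something like this shows the current usage and limits (the user
name and mount point are hypothetical):
# lfs quota -u bob /mnt/lustre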
Please
On Tue, Nov 23, 2010 at 05:09:22PM +0700, sd a wrote:
> as far as I know, since 2.6.33, linux supports 'full' TRIM command, the IO
> problem is solved
Indeed, I *think* this patch is the one:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=18f0f97850059303ed73b1f020
On Tue, Nov 23, 2010 at 03:12:41PM +0700, sd a wrote:
> I've just implemented a lustre system which has 4 OSS and 32 SSD hard disk.
> But, I got a big problem: lustre doesn't support new kernel. That means, no
> SSD TRIM command is supported.
1.8.5 (should be released soon) will support the SLES11 SP1 kernel (2.6.32).
On Tue, Oct 19, 2010 at 11:21:25AM -0400, Jason Hill wrote:
> Also something to look at if you aren't having any luck with other avenues
> would be the debug log with RPC trace enabled. We do something like:
>
> echo +rpctrace > /proc/sys/lnet/debug;
> lctl dk > /dev/null; sleep 60;
> lctl dk > /
On Mon, Oct 18, 2010 at 12:42:36PM +0200, Jonathan Buch wrote:
> I'm trying to exclude a dead OST but I'm having trouble. When I use
> `df`, it hangs until I use `lctl --device "" deactivate`.
>
> Should the exclude= mount option prevent df from locking up?
There is actually a mount option (name
On Mon, Oct 18, 2010 at 02:42:47PM +0200, Bernd Schubert wrote:
> Yes, unfortunately not entirely unexpected, with upstream Oracle versions.
> Firstly, please send a mail to support@ddn.com and ask for the udev tuning
> rpm
> (please add [Lustre] in the subject line).
>
> Then see this MMP issu
On Mon, Oct 18, 2010 at 01:58:40PM +0200, Michael Kluge wrote:
> dd if=/dev/zero of=$RAM_DEV bs=1M count=1000
> mke2fs -O journal_dev -b 4096 $RAM_DEV
>
> mkfs.lustre --device-size=$((7*1024*1024*1024)) --ost --fsname=luram
> --mgsnode=$MDS_NID --mkfsoptions="-E stride=32,stripe-width=256 -b 4096
On Oct 12, 2010, at 1:59 AM, Tanin wrote:
> [r...@localhost Desktop]# mount -t lustre /dev/vg02/mdt /mdt
> mount.lustre: mount /dev/vg02/mdt at /mdt failed: Unknown error 256
>
> The two directories do exist, but the mount can not go through. Can
> you guys give me some advice? Thanks
I would
On Tue, Oct 05, 2010 at 12:31:15PM -0500, David Noriega wrote:
> Can I setup quotas after lustre is active? Or does that require taking
> everything offline? Or could I just run "lfs quota on" and then start
> setting quotas for every user?
You can turn quotas on with "lfs quotacheck -ug /path/to/
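For example, with a hypothetical mount point and user (block limits are in KB):
# lfs quotacheck -ug /mnt/lustre
# lfs setquota -u bob -b 0 -B 10485760 -i 0 -I 100000 /mnt/lustre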
On Tue, Oct 05, 2010 at 02:36:23PM -0500, David Noriega wrote:
> So then Samba isn't Lustre-aware in the sense it checks and respects quotas?
Not exactly. If quotas are enabled on the backend filesystem, samba clients can
consult quota information and even change quota settings through the samba
Hi Alexey,
On Fri, Oct 01, 2010 at 06:58:13PM +0300, Alexey Lyashkov wrote:
> according to the code, the 'quota_type' configuration parameter is handled
> by the mdd module:
>
> bash-3.2$ grep -rn quota_type mdd mdt
> mdd/mdd_lproc.c:308:{ "quota_type", mdd_lprocfs_quota_rd_type,
>
> but for ma
Hi David,
On Mon, Oct 04, 2010 at 12:09:21PM -0500, David Noriega wrote:
> Moved our samba server to use Lustre as its backend file system and
> things look like they are working, but I'm seeing the following
> message repeat over and over
>
> [2010/10/04 11:09:40, 0] lib/sysquotas.c:sys_get_quot
Hi Alfonso,
On Tue, Oct 05, 2010 at 01:28:16PM +0200, Alfonso Pardo wrote:
> I have got an error when I compile the lustre source code over opensuse
> 11:
> make[3]: Leaving directory `/usr/src/packages/BUILD/lustre-1.8.0'
1.8.0 does not support kernels newer than 2.6.22. SLES11 (SP0) suppor
On Mon, Oct 04, 2010 at 08:49:46AM +0200, Michael Kluge wrote:
> is there any chance to get a 1.8.4 compiled on a 2.6.32+ kernel right
> now with the standard Lustre sources that are available through the
> download pages?
Yes, but only the patchless client. Server-side support should come with 1.8.5.
On Fri, Oct 01, 2010 at 10:15:57AM -0400, Charland, Denis wrote:
> But I compiled a server version of Lustre, not a client version, and I'm
> running it
Yes, but this also includes the client component, although you don't use it.
> on a Lustre server. Does it mean that the server considers that
On Thu, Sep 30, 2010 at 05:20:27PM -0400, Charland, Denis wrote:
> Any idea why I get patchless_client instead?
Some kernel patches (e.g. VFS intent patches) used to be required to mount
a lustre system. Lustre 1.6 removed this dependency and introduced patchless
client support for recent kernels.
On Fri, Oct 01, 2010 at 07:03:36AM -0500, Ronald K Long wrote:
> From further research it looks as though this is a known problem with
> open-unlinked directories in 1.8.2 and a fix is attached to bug 22177.
Yes, the fix is to increment nlink by 2 instead of 1. If you want to
patch 1.8.2, you need
On Thu, Sep 30, 2010 at 04:07:01PM -0400, Michael Di Domenico wrote:
> Had a hard power failure this morning on the whole building. Now when
> I attempt to mount the metadata partition I get
>
> VFS: Can't find ldiskfs filesystem on dev md22.
> LustreError: 6570:0:(obd_mount.c:1278:server_kernel_
On Thu, Sep 30, 2010 at 02:58:37PM -0500, Nirmal Seenu wrote:
> 1. Is it possible to run different versions of ldiskfs on different OSSs of
> the same lustre file system.
Yes.
> 2. Can I install the Lustre RPMs meant for the ext4 system on our older
> servers that were formatted with the older
Hi Megan,
On Mon, Sep 27, 2010 at 01:50:38PM -0400, Ms. Megan Larko wrote:
> Okay... I was getting errors when I attempted to use --erase-params
> and --writeconf in 1.8.4 stating that my 1.6.7.2 parameters would have
> to be updated (again, my "failover.node" string becomes "failnode"
> string).
On Mon, Sep 27, 2010 at 12:31:53PM -0400, Ms. Megan Larko wrote:
> I attempted to add a new lustre tuning parameter to my system (I
> wanted to add the *t.group_upcall=NONE.) While the Lustre 1.8.4 read
Please note that *t.group_upcall=NONE is already supported in 1.6.
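For example (a sketch, with an illustrative filesystem name):
# lctl conf_param testfs-MDT0000.mdt.group_upcall=NONE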
> the 1.6.7.2 tuning para
Hi Ashley,
On Mon, Sep 27, 2010 at 05:23:59PM +0100, Ashley Nicholls wrote:
> 12346:0:(rw.c:1948:ras_stride_increase_window()) LBUG
> Sep 14 14:08:20 max13 kernel: Lustre:
> Has anyone else experienced this
Yes, this read-ahead bug has been fixed by the last patch of bug 17197.
> or know if a) t
On Fri, Aug 06, 2010 at 10:30:06AM -0400, David Gucker wrote:
> Some clients were removed several weeks ago but are still listed in:
>
> ls -l /proc/fs/lustre/obdfilter/*/exports/
>
> This was found after tracing back mystery tcp packets to the OSS.
> Although this is causing no damage, it
Hi James,
On Tue, Aug 03, 2010 at 10:42:00AM -0600, James Robnett wrote:
> Wonderful news. On a related topic. Can the build scripts be
> made available (or a cleansed variant). It's not that cumbersome
> to write one's own but if they already exist it'd be handy to re-use
> them rather than re
On Fri, Jul 30, 2010 at 11:31:44PM -0700, Andreas Dilger wrote:
> > I am creating these filesystems with lustre 1.8.3 from the prebuilt RPMs.
> > Do you mean that there is a different ldiskfs package I should use?
>
> There should be an ldiskfs-ext4 RPM available for download with 1.8.3 and
>
On Mon, Aug 02, 2010 at 12:04:24PM -0400, Osvaldo Rentas wrote:
> What is the ETA on Lustre 1.8.4?
The code is frozen (v1_8_4). The rpms are built and should be available soon
on the download site.
Johann
On Fri, May 28, 2010 at 10:12:22AM +0200, Ramiro Alba Queipo wrote:
> - Is this an issue that affects every client, both patchless and
> non-patchless, or only those clients with SLES11 servers?
The server does not matter here. This is a client side issue.
Since CONFIG_SECURITY_FILE_CAPABILITIES
On Fri, May 21, 2010 at 01:49:41PM +0200, Stefano Elmopi wrote:
> I realized that the server times differed greatly across machines;
> there were at least a few hours of difference.
> I'm doing the tests and have not been paying attention to time
> synchronization but now I have aligned the time of all
Hi Olivier,
On Thu, May 20, 2010 at 07:12:45PM +0200, Olivier Hargoaa wrote:
> You couldn't have known, but we already ran lnet self test unsuccessfully.
> I wrote results as answer to Brian.
ok. To get back to your original question:
> Currently Lustre network is bond0. We want to set it as eth0,
On Thu, May 20, 2010 at 10:43:58AM -0400, Brian J. Murrell wrote:
> On Thu, 2010-05-20 at 16:27 +0200, Olivier Hargoaa wrote:
> >
> > On Lustre we get poor read performances
> > and good write performances so we decide to modify Lustre network in
> > order to see if problems comes from network
On Thu, May 20, 2010 at 12:46:42PM +0200, leen smit wrote:
> In the new setup I have 4 machines, two MDT's and two OST's
> We want to use keepalived as a failover mechanism between the two MDT's.
> To keep the MDT's in sync, I'm using a DRBD disk between the two.
>
> Keepalive uses a VIP in a act
On Thu, May 20, 2010 at 12:29:41PM +0200, Stefano Elmopi wrote:
> Hi Andreas
> My version of Lustre 1.8.3
> Sorry for my bad English but I used the wrong word, "crash" is not the
> right word.
> I try to explain better, I start copying a large file on the file system
> and while the copy process co
On Wed, Apr 28, 2010 at 01:50:13PM +0200, Patrick Winnertz wrote:
> [ 177.489667] LustreError: 2099:0:(mds_reint.c:1772:mds_orphan_add_link())
> ASSERTION(inode->i_nlink == 2) failed: dir nlink == 1
> [ 177.490214] LustreError: 2099:0:(mds_reint.c:1772:mds_orphan_add_link())
> LBUG
This is a k
Hi,
On Tue, Apr 20, 2010 at 09:08:25AM -0700, Jagga Soorma wrote:
> Thanks for your response. I will try to run the leak-finder script and
> hopefully it will point us in the right direction. This only seems to be
> happening on some of my clients:
Could you please tell us what kernel you use on those clients?
Hi,
On Mon, Apr 26, 2010 at 01:16:37AM -0700, Tommi T wrote:
> Here is leak from our system, not sure if this is the same issue.
> Application is molpro which is causing this leak. (pristine lustre 1.8.2)
>
> total used free shared buffers cached
> Mem: 3
On Thu, Mar 11, 2010 at 10:43:28AM -0600, Hendelman, Rob wrote:
> I'd like to make sure that I understand these 2 module options.
>
> From googling a bit & searching bugzilla, here is my understanding:
>
> oss_num_threads = Maximum number of service threads across all ost's on a
> single OSS.
>
On Wed, Mar 10, 2010 at 04:22:47PM -0500, Dot Yet wrote:
> Is there some way to convert -Werror to -Wall and see if that helps? I
> tried putting it as CFLAG in the configure command, but that did not
> work.
You can comment out the following code from lustre/autoconf/lustre-core.m4:
if test $tar
On Wed, Mar 10, 2010 at 12:33:04AM -0800, syed haider wrote:
>mkfs.lustre: Unable to mount /dev/cciss/c0d1: Cannot allocate memory
>mkfs.lustre FATAL: failed to write local files
>mkfs.lustre: exiting with 12 (Cannot allocate memory)
This is bug 17490 which has been fixed in 1.6.7 & 1.
On Tue, Mar 02, 2010 at 11:24:18AM -0800, Adesanya, Adeyemi wrote:
> We are trying to recover from a corrupt Lustre 1.6.5 OST. The primary
> superblock became corrupt so we ran "e2fsck -b -B
> " (using e2fsprogs-1.40.7).
>
> We are now able to mount the OST using 'mount -t ldiskfs'. However, i
On Tue, Mar 02, 2010 at 02:01:06PM +0100, Andrew Godziuk wrote:
> Then I guess this part of manual should be changed:
>
> "The active/passive configuration is seldom used for OST servers as it
> doubles hardware costs without improving performance. On the other
> hand, an active/active cluster conf
On Tue, Mar 02, 2010 at 01:09:51PM +0100, Andrew Godziuk wrote:
> Has anything changed in Lustre 1.8 that would make it possible to set
> up two OSS with an OST shared using DRBD, in an active-active
> configuration?
No, a lustre target (OST & MDT) must *never* be active on more than 1 server
at a time.
On Fri, Feb 26, 2010 at 01:07:20PM +0100, Marek Magryś wrote:
> There's no need now, I had removed all the pools, recreated them and the
> problem disappeared. Still, it's strange, because there was absolutely
> nothing suspicious in the logs on the clients and servers. Thanks for help.
Actually
On Mon, Feb 01, 2010 at 02:25:34PM -0500, Michael Di Domenico wrote:
> Is there a new date for the 1.8.2 to come out? The wiki says Jan, 2010...
1.8.2 testing is almost completed (some scale & interoperability tests
are still underway). If all those tests go well, it should be available
shortly.
On Fri, Jan 29, 2010 at 05:00:10PM +0200, Deon Borman wrote:
> I have a weird problem on one of my OSSs, though I've seen it once on
> the other OSS. Things will be humming along nicely, when suddenly I get
> lots of messages like this:
>
> Jan 29 15:26:16 venus kernel: Lustre:
> 898:0:(socklnd
On Fri, Jan 29, 2010 at 12:30:51PM -0500, Ms. Megan Larko wrote:
> that "network" was not run. Am I missing something in lctl command
> usage?
# lctl
lctl > net up
LNET configured
lctl > list_nids
10.8.0@tcp
lctl > conn_list
You must run the 'network' command before 'conn_list'.
lctl > net tcp
On Fri, Jan 29, 2010 at 11:48:58AM +0100, gvozden rovina wrote:
> Thank you for your swift answer. I have just one more question. Is it
> possible to
> configure a lustre system so that it writes not just the file but also a
> copy of the same file at the same time as you create it?
No, we don't s
On Fri, Jan 29, 2010 at 10:32:26AM +0100, gvozden rovina wrote:
> OST. For instance I copied a 2.5 GB file to lustre which had 120 GB of storage
> space (I have 2GB test OSTs) and it didn't automatically recognize the full
> OST but simply stopped working with a "No space left on device" error
> message.
On Wed, Jan 27, 2010 at 08:06:00AM +0100, Heiko Schröter wrote:
> Any chance for 16TB+ with ext4 on vanilla kernels ?
No, there is no such plan, at least in the short term.
Johann
On Sun, Jan 24, 2010 at 12:47:32AM -0500, Vilobh Meshram wrote:
> Hi,
> I am new to the Lustre File System and wanted to understand the internals. I
> wanted to know more about the MDS/MDT/OSS/OST components.
> Please point me to some link.
The lustre manual is available online:
http://manual.lu
On Tue, Jan 26, 2010 at 11:41:40AM -0800, Tommi T wrote:
> Hmm, nice to know. Where I can find more information about differences
> between ext3/4 based ldiskfs?
Actually, you won't see any change at the lustre level, except bigger target
support. Rebasing ldiskfs on ext4 allows us to reduce the number of patches we
have to maintain.
On Tue, Jan 26, 2010 at 08:30:32PM +0100, Peter Kjellstrom wrote:
> Why is there a 16TiB limit? Ext4 has no such limit (except for 16TiB _file_
> size).
Because we have only tested & validated up to 16TB. You are of course free to
try with a bigger device (use the force_over_16tb mount option), but
On Tue, Jan 26, 2010 at 09:15:13AM -0800, Peter Jones wrote:
> Yes, that is correct. So far we have only tested on RHEL5
To be clear, 16TB support is only for rhel5 using ext4-based ldiskfs.
The default for rhel5 is still to use ext3. We will actually provide
2 sets of rpms, one with ext3-based ldiskfs
On Mon, Jan 25, 2010 at 05:24:32PM -0700, Andreas Dilger wrote:
> I agree it makes sense to have a command to do this. As I write this
> I'm offline, so I can't poke at my test filesystem, but thought that /
> proc/fs/lustre/osc/*/import will contain the OST name and NID(s) to
> which they ar