…the quota. A Google search did not lead to any hints, so I hope
someone on the list has an idea.
Thanks and kind regards,
Torsten
--
…a mail to all our users asking if one of them has perhaps written
too many files.
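For reference, a rough sketch of how one can check per-user inode
usage with "lfs quota" (the user name and mount point are made up):

  # show block and inode usage/limits for a single user
  lfs quota -u someuser /lustre

The "files" column of the output shows how many inodes that user
consumes against the inode quota.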
Thanks again
Torsten
--
> …'s backend filesystem as a local filesystem (readonly) and look
> for where the space is going.
Let me guess: that cannot be done while Lustre is active, can it?
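Just to sketch what I understood (device and mount point below are
made up; the target would have to be stopped first):

  # with the Lustre target stopped, mount its backing store read-only
  mount -t ldiskfs -o ro /dev/sdX /mnt/target-ldiskfs
  # see where the space is going, then unmount again
  du -xsh /mnt/target-ldiskfs/*
  umount /mnt/target-ldiskfs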
Thanks again
Torsten
--
…whether that statement applies only to Lustre 2.1 or in general.
Cheers
Torsten
--
On 19.10.15 at 10:00, Torsten Harenberg wrote:
> [root@lustre1 MGTMDT]# ls -lh oi.16
> -rw-r--r-- 1 root root 554G Aug 13 2013 oi.16
Coming back to this: I found this mail from Andreas:
https://lists.01.org/pipermail/hpdd-discuss/2014-October/001352.html
"Multiple OI files are created […]"
Have a nice Sunday
Torsten
--
> … or two.
>
> Peter
>
> On 9/27/16, 6:50 AM, "lustre-discuss on behalf of Torsten Harenberg"
> <torsten.harenb...@cern.ch> wrote:
>
>> Dear all,
>>
>> I cannot get to
>>
>> https://downloads.hpdd.intel.com/public/lustre/lustre-2.7.0/
>>
…clients used to run).
If that works, we will also try 2.8 (on CentOS 6).
So there was no particular reason; it was just the "safe choice".
Best regards,
Torsten
--
…encountered any issues.
We are running flock here as well, on ~200 nodes with 2.5.3 servers
and 2.8.0 clients, without any problems.
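For completeness, the client mount looks roughly like this (MGS node
and filesystem name are made up):

  mount -t lustre -o flock mgs01@tcp:/lustre /mnt/lustre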
Cheers
Torsten
--
…in the mount options?
Cheers
Torsten
--
(I started an lfs find already, but it takes very long)
3.) remove the OST and start from scratch.
And it would be really nice to understand where the OST comes from
and how one can avoid it.
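For reference, the kind of search meant above (the OST index and mount
point are made up):

  # list files that have objects on OST index 12
  lfs find /mnt/lustre --ost 12
  # narrow it down further, e.g. to large files only
  lfs find /mnt/lustre --ost 12 --size +1G

Depending on the Lustre version, --ost may want the OST UUID rather
than the bare index.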
Any hint is really appreciated.
Best regards
Torsten
--
If someone could identify the source of the problem,
we would really appreciate it.
Kind regards
Torsten
--
Dr. Torsten Harenberg harenb...@physik.uni-wuppertal.de
Bergische Universitaet
Fakultät 4 - Physik Tel.: +49 (0)202 439-3521
Gaussstr. 20 Fax : +49 (0)202 439-2811
42097 Wuppertal
…RPMS for el7 as far as I can see.
Thanks again
Torsten
--
During the last days (since Thursday), our Lustre instance was
surprisingly stable. We lowered the load a bit by limiting the number
of running jobs, which might also have helped to stabilize the system.
We enabled kdump, so if another crash happens anytime soon, we hope
to get at least a dump that gives a hint where the problem is.
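For reference, roughly what enabling kdump amounts to on an el7 box
(the crashkernel size is just an example):

  # reserve memory for the crash kernel, then reboot
  grubby --update-kernel=ALL --args="crashkernel=256M"
  systemctl enable --now kdump
  # vmcores then land under /var/crash by default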
Thanks again
Torsten
--
Dear all,
On 10.03.20 at 08:18, Torsten Harenberg wrote:
> During the last days (since Thursday), our Lustre instance was
> surprisingly stable. We lowered the load a bit by limiting the number
> of running jobs, which might also have helped to stabilize the system.
>
> We enabled kdump
Is this known? Any advice other than downgrading the kernel again?
Thanks and kind regards
Torsten
--
re_rmmod" command should do this for you.
Thanks Andreas (as always ;-) ).
Indeed somehow the modules somehow got messed up. Strange.. first time
DKMS didn't work as expected for me.
Long story short: working now.
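For anyone who runs into the same thing, roughly the sequence that
fixed it here (the Lustre version below is just an example):

  # unload all Lustre/LNet modules
  lustre_rmmod
  # check what DKMS thinks is built, then rebuild and reinstall
  dkms status
  dkms build lustre/2.12.4
  dkms install lustre/2.12.4
  modprobe lustre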
Cheers
Torsten
--