SGI has been gone for years now.
So "the xfs people" are whoever maintains that code in the kernel.

Did that system lose power at some point?

"xfs is for speed not reliability :)" - without power, that much is true.

Haha, then stay on jfs/jfs2, it's better :(

Have a better day.
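As a side note, the inode ceiling that df -i reports can be sanity-checked from the xfs_info fields quoted further down in the thread. A minimal sketch, using the healthy node's numbers (blocks=13106176, bsize=4096, imaxpct=25, isize=256):

```shell
# Expected XFS inode ceiling: data blocks * block size * imaxpct% / inode size
blocks=13106176   # "data ... blocks=" from xfs_info on the healthy node
bsize=4096        # "data ... bsize="
imaxpct=25        # "imaxpct="
isize=256         # "meta-data ... isize="
echo $(( blocks * bsize * imaxpct / 100 / isize ))
```

That works out to 52424704, exactly the Inodes figure df -i shows on the healthy node. The problem node has almost identical geometry, so its ~67k ceiling is inconsistent with its own superblock fields, which fits the corruption theory.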

On 07/27/2016 08:11 PM, Iulian Roman wrote:
> 2016-07-27 12:06 GMT+02:00 lista email <lista.em...@yahoo.com>:
>
>> It was you I replied to two messages ago, with the output of a mount|grep
>> ^/ showing all the attributes:
>>
>> (rw,relatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
>>
>> There is nothing for me to change; I said in my very first post that the
>> xfs_growfs -m option is not what I'm after. The two / filesystems are formatted identically.
>>
>> On July 25 we had (max number of inodes = 67k):
>> ]# df -i
>> Filesystem                 Inodes IUsed     IFree IUse% Mounted on
>> /dev/mapper/centos-root     66960 66165       795   99% /
>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> Then, without changing anything, I ran a grow as a "sanity check" to see what
>> would happen ... and surprise ... it grows ...
>> # xfs_growfs /dev/mapper/centos-root
>>
>> The surprise gets even bigger: it did "some" growing, but only of the inodes
>> (the size stayed unchanged, 50GB)
>> # df -i
>> Filesystem                 Inodes IUsed     IFree IUse% Mounted on
>> /dev/mapper/centos-root     83200 66165     17035   80% /
>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> Yesterday I saw that the maximum inode count had started to drop while the
>> number of inodes used stayed roughly the same... baffling again
>>
>> Today, July 27, the maximum inode count is back to the July 25 figure,
>> and the inodes-used count is almost the same as two days ago.
>> # df -i
>> Filesystem                 Inodes IUsed     IFree IUse% Mounted on
>> /dev/mapper/centos-root     67024 66225       799   99% /
>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>
>> And this happens only on that server. I don't know how the system got into
>> this state, I inherited it like this ... but it is not normal for the
>> maximum inode count to drop out of nowhere. I've emailed the xfs
>> developers ... something is definitely wrong there.
>>
>
> the superblock is probably corrupt, and after any operation that ends up
> modifying/writing the superblock the output changes (unless it's some bug
> in xfs). Nothing relevant in the logs?
>
> Before waiting for an answer from the xfs people, it would probably be better
> to boot into rescue mode, run an xfs_check and possibly xfs_repair, and see
> what the result is! An lsblk -i output would have been more telling for
> working out the layout (partitions/PV, LV and filesystem).
>
> P.S. xfs is for speed not reliability  :)
>
>
>>
>> As a fix, the options would be:
>> - move all the data from that / to another slice/disk, reformat / and then
>> copy the data back, or
>> - try xfs_check/repair
>> and all of these operations have to be done offline.
>>
>> I'll see ... I'm curious what the xfs people suggest.
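For context, the df-vs-du mismatch that started this thread further down is the classic deleted-but-still-open file case, which is easy to reproduce; a minimal sketch, assuming Linux (with /proc) and bash:

```shell
# A process holding an unlinked file open keeps its blocks allocated:
# df counts them, du (which walks the directory tree) does not.
tmp=$(mktemp)
exec 3>"$tmp"                      # keep fd 3 open on the file
echo "still allocated" >&3
rm "$tmp"                          # unlink it; the space is NOT yet released
ls -l /proc/$$/fd | grep deleted   # the fd now shows up as "(deleted)"
exec 3>&-                          # close the fd; only now is the space freed
```

Killing or restarting the process that holds such descriptors is what actually frees the space.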
>> --------------------------------------------
>> On Wed, 7/27/16, Cristian Paslaru <cryst...@gmail.com> wrote:
>>
>>  Subject: Re: [rlug] partition 100% full No space left on device
>>  To: "lista email" <lista.em...@yahoo.com>
>>  Cc: "Romanian Linux Users Group" <rlug@lists.lug.ro>
>>  Date: Wednesday, July 27, 2016, 12:12 PM
>>
>>  Do you have the inode64 mount option set on both?
>>  You can also change imaxpct=25 to imaxpct=5, for example, using xfs_growfs:
>>  -m imaxpct  set inode max percent to imaxpct
>>
>>
>>  2016-07-26 17:06 GMT+03:00 lista email <lista.em...@yahoo.com>:
>>  Hmmm ... I think this is where the problem lies:
>>
>>  "On the problem node (max inode count reported by df:
>>   70k): about 10k lower than an hour ago :)) but
>>   the order of magnitude stays."
>>
>>  Yesterday I ran an xfs_growfs on / (without changing anything on the
>>  system) and after it finished I saw that the inode count had grown a
>>  little, from 66k to somewhere around 85K, which is why this morning
>>  inode usage was somewhere around 80%. Now I see it approaching
>>  yesterday's figure again (from before running xfs_growfs).
>>
>>  What's more, its size stays constant and the number of inodes used
>>  also stays roughly constant at ~66k!
>>
>>  Normally the maximum inode count is fixed (it is determined at format
>>  time) and the thing that goes up or down is the number of inodes USED.
>>  Something is off ... given that, I wonder how it is even possible for
>>  the maximum inode count of the root partition (/) to DROP, or to vary
>>  at all???!!! This sounds like voodoo!
>>
>>
>>
>>  --------------------------------------------
>>
>>  On Tue, 7/26/16, lista email
>>  <lista.em...@yahoo.com>
>>  wrote:
>>
>>
>>
>>   Subject: Re: [rlug] partition 100% full No space left on
>>  device
>>
>>   To: "lista email" <lista.em...@yahoo.com>,
>>  "Romanian Linux Users Group" <rlug@lists.lug.ro>,
>>  "Cristian Paslaru" <cryst...@gmail.com>
>>
>>   Date: Tuesday, July 26, 2016, 4:02 PM
>>
>>
>>
>>   It's not isize. It has the default value (isize=256). I checked with
>>   xfs_info from the start. I also mentioned that the xfs_info output is
>>   similar on both servers, BUT the maximum inode count differs hugely.
>>
>>   On the problem node (max inode count reported by df:
>>   70k): about 10k lower than an hour ago :)) but
>>   the order of magnitude stays.
>>
>>
>>
>>   # df -i|grep root
>>   /dev/mapper/centos-root     69120 66223      2897   96% /
>>
>>   # xfs_info /
>>   meta-data=/dev/mapper/centos-root isize=256    agcount=17, agsize=819136 blks
>>            =                        sectsz=512   attr=2, projid32bit=1
>>            =                        crc=0        finobt=0
>>   data     =                        bsize=4096   blocks=13107200, imaxpct=25
>>            =                        sunit=64     swidth=64 blks
>>   naming   =version 2               bsize=4096   ascii-ci=0 ftype=0
>>   log      =internal                bsize=4096   blocks=6400, version=2
>>            =                        sectsz=512   sunit=64 blks, lazy-count=1
>>   realtime =none                    extsz=4096   blocks=0, rtextents=0
>>
>>   On the healthy node (max inode count reported by df:
>>   52 million):
>>
>>   #  df -i|grep root
>>   /dev/mapper/centos-root  52424704 66137  52358567    1% /
>>
>>
>>   # xfs_info /
>>   meta-data=/dev/mapper/centos-root isize=256    agcount=16, agsize=819136 blks
>>            =                        sectsz=512   attr=2, projid32bit=1
>>            =                        crc=0        finobt=0
>>   data     =                        bsize=4096   blocks=13106176, imaxpct=25
>>            =                        sunit=64     swidth=64 blks
>>   naming   =version 2               bsize=4096   ascii-ci=0 ftype=0
>>   log      =internal                bsize=4096   blocks=6400, version=2
>>            =                        sectsz=512   sunit=64 blks, lazy-count=1
>>   realtime =none                    extsz=4096   blocks=0, rtextents=0
>>
>>
>>
>>   I see no difference that would explain this huge gap between
>>   52 million inodes and 80 thousand inodes.
>>   --------------------------------------------
>>   On Tue, 7/26/16, Cristian Paslaru <cryst...@gmail.com> wrote:
>>
>>    Subject: Re: [rlug] partition 100% full No space left on device
>>    To: "lista email" <lista.em...@yahoo.com>, "Romanian Linux Users Group" <rlug@lists.lug.ro>
>>    Date: Tuesday, July 26, 2016, 3:38 PM
>>
>>
>>
>>    What isize do you have on /? The default is 256, and if you have so
>>    few inodes available, you may have a huge isize, hence your issue.
>>    Try
>>    xfs_info /
>>    Good luck.
>>
>>
>>
>>    2016-07-26 14:26 GMT+03:00 lista email <lista.em...@yahoo.com>:
>>    I wrote a full report in my first email.
>>
>>
>>
>>
>>
>>
>>
>>    Yes, there are still inodes free, but that is exactly where the difference is!
>>
>>    On the sick node:
>>
>>    # df -i|grep root
>>    /dev/mapper/centos-root     77104 66220     10884   86% /
>>
>>    On the ok node:
>>
>>    # df -i|grep root
>>    /dev/mapper/centos-root  52424704 66137  52358567    1% /
>>
>>    Both have a 50GB / partition!
>>
>>    As can be seen, on the ok node there are over 52 million inodes,
>>    while on the one with the full fs there are around 77K. How do you
>>    explain this inode difference between two partitions of the same
>>    size? For this reason both Wofly and Bogdan have floated the idea of
>>    a corrupt xfs, and I'm starting to lean that way too ...
>>
>>
>>
>>
>>
>>
>>
>>    --------------------------------------------
>>
>>
>>
>>    On Tue, 7/26/16, Matei,
>>
>>    Petre-Marius <mat.mar...@gmail.com>
>>
>>    wrote:
>>
>>
>>
>>
>>
>>
>>
>>     Subject: Re: [rlug] partition 100% full No space left
>>  on
>>
>>    device
>>
>>
>>
>>     To: rlug@lists.lug.ro
>>
>>
>>
>>     Date:
>>
>>    Tuesday, July 26, 2016, 1:06 PM
>>
>>
>>
>>
>>
>>
>>
>>     On 26.07.2016 12:07, Bogdan-Stefan Rotariu wrote:
>>     > On 26 July 2016 at 12:04:08, lista email (lista.em...@yahoo.com) wrote:
>>     >
>>     > Hello everyone,
>>     >
>>     > Morning,
>>     >
>>     > I've been looking at a CentOS 7 box for a few days and I can't
>>     figure out why df reports the / partition as ~100% full while du
>>     reports a usage of only 1.7G out of 50GB (i.e. under 4%).
>>     Note that the / partition is formatted xfs.
>>     >
>>     > You probably have an application holding a file open even though
>>     it is no longer visible in the fs.
>>     >
>>     > lsof -nP | grep '(deleted)'
>>     >
>>     > or with a 'fancier' listing:
>>     >
>>     > find /proc/*/fd -ls | grep '(deleted)'
>>     >
>>     > _______________________________________________
>>     > RLUG mailing list
>>     > RLUG@lists.lug.ro
>>     > http://lists.lug.ro/mailman/listinfo/rlug
>>
>>     but are there inodes left?
>>
>>     df -i
>>
>>     Marius

