Leonardo,
The strange thing is that I had no space problem at all when installing the first two nodes, but when I go to add another node this happens.
Is ext3 not the best filesystem format for the Oracle binaries?
 [root@rac03 ~]# df -m
 Filesystem           1M-blocks      Used Available Use% Mounted on
 /dev/mapper/VolGroup00-LogVol00
                          15809      3119     11875  21% /
 /dev/sda1                   99        12        82  13% /boot
 tmpfs                     1014         0      1014   0% /dev/shm
 /dev/sdb1                40313     38265         0 100% /u01
[root@rac03 ~]# dumpe2fs /dev/sdb1 | grep "^Free blocks"
 dumpe2fs 1.39 (29-May-2006)
 Free blocks:              524046
[root@rac03 ~]# dumpe2fs /dev/sdb1 | grep "^Free inodes"
 dumpe2fs 1.39 (29-May-2006)
 Free inodes:              5193155
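For what it is worth, those numbers line up with ext3's default 5% root reserve rather than with ext3 itself; a minimal check (the 4 KiB block size is implied by the ~40 GB partition and the 10,484,412 total blocks in the dump below):
tune2fs -l /dev/sdb1 | grep -i 'reserved block count'
# df's "Available" column is free blocks minus reserved blocks: with 524,046 free blocks
# (~2 GiB) and a 5% reserve of ~524,220 blocks, Available drops to 0 even though space remains.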
Thanks,
On Mon 28/03/11 19:22, Leonardo Valente leonardovale...@gmail.com sent:
Looking at your FS, the "0 free blocks" entries show that it really is full, even though you did not send the complete log.
Run the following commands:
dumpe2fs /dev/sdb1 | grep "^Free blocks"
dumpe2fs /dev/sdb1 | grep "^Free inodes"
Also, when the filesystem is formatted as ext3, 5% of the space is reserved for root; you can lower that percentage to 1%:
tune2fs -m 1 /dev/sdb1
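A quick way to check the effect is to read the reserved-block count from the superblock before and after the change; on this 40 GB /u01 with 4 KiB blocks, dropping from 5% to 1% returns roughly 1.6 GB to df's "Available" column:
tune2fs -l /dev/sdb1 | grep 'Reserved block count'   # ~524,220 blocks (5%) before, ~104,844 (1%) after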
So we can gather more information, please post the output of:
 ps -ef | grep pmon
 du -sh $ORACLE_HOME
 -- 
 Leonardo Valente
 Red Hat Certified Engineer
 Linux Professional Institute Certified Level 2
On 28 March 2011 07:31,  wrote:
 > Here it is:
 >
 > Disk /dev/sdb: 42.9 GB, 42949672960 bytes
 > 255 heads, 63 sectors/track, 5221 cylinders
 > Units = cylinders of 16065 * 512 = 8225280 bytes
 >
 >    Device Boot      Start         End      Blocks   Id  System
 > /dev/sdb1               1        5221    41937651   83  Linux
 >
 >
 >
 > Filesystem           1M-blocks      Used Available Use% Mounted on
 > /dev/mapper/VolGroup00-LogVol00
 >                          15809      3118     11876  21% /
 > /dev/sda1                   99        12        82  13% /boot
 > tmpfs                     1014         0      1014   0% /dev/shm
 > /dev/sdb1                40313     38265         0 100% /u01
 >
 >
 > Group 283: (Blocks 9273344-9306111)
 >   Block bitmap at 9273344 (+0), Inode bitmap at 9273345 (+1)
 >   Inode table at 9273346-9273857 (+2)
 >   0 free blocks, 16384 free inodes, 0 directories
 >   Free blocks:
 >   Free inodes: 4636673-4653056
 > Group 284: (Blocks 9306112-9338879)
 >   Block bitmap at 9306112 (+0), Inode bitmap at 9306113 (+1)
 >   Inode table at 9306114-9306625 (+2)
 >   0 free blocks, 16114 free inodes, 1 directories
 >   Free blocks:
 >   Free inodes: 4653327-4669440
 > Group 285: (Blocks 9338880-9371647)
 >   Block bitmap at 9338880 (+0), Inode bitmap at 9338881 (+1)
 >   Inode table at 9338882-9339393 (+2)
 >   0 free blocks, 16384 free inodes, 0 directories
 >   Free blocks:
 >   Free inodes: 4669441-4685824
 > Group 286: (Blocks 9371648-9404415)
 >   Block bitmap at 9371648 (+0), Inode bitmap at 9371649 (+1)
 >   Inode table at 9371650-9372161 (+2)
 >   0 free blocks, 16294 free inodes, 18 directories
 >   Free blocks:
 >   Free inodes: 4685915-4702208
 > Group 287: (Blocks 9404416-9437183)
 >   Block bitmap at 9404416 (+0), Inode bitmap at 9404417 (+1)
 >   Inode table at 9404418-9404929 (+2)
 >   0 free blocks, 15870 free inodes, 7 directories
 >   Free blocks:
 >   Free inodes: 4702723-4718592
 > Group 288: (Blocks 9437184-9469951)
 >   Block bitmap at 9437184 (+0), Inode bitmap at 9437185 (+1)
 >   Inode table at 9437186-9437697 (+2)
 >   0 free blocks, 15865 free inodes, 37 directories
 >   Free blocks:
 >   Free inodes: 4719112-4734976
 > Group 289: (Blocks 9469952-9502719)
 >   Block bitmap at 9469952 (+0), Inode bitmap at 9469953 (+1)
 >   Inode table at 9469954-9470465 (+2)
 >   0 free blocks, 16154 free inodes, 21 directories
 >   Free blocks:
 >   Free inodes: 4735207-4751360
 > Group 290: (Blocks 9502720-9535487)
 >   Block bitmap at 9502720 (+0), Inode bitmap at 9502721 (+1)
 >   Inode table at 9502722-9503233 (+2)
 >   0 free blocks, 15488 free inodes, 37 directories
 >   Free blocks:
 >   Free inodes: 4752257-4767744
 > Group 291: (Blocks 9535488-9568255)
 >   Block bitmap at 9535488 (+0), Inode bitmap at 9535489 (+1)
 >   Inode table at 9535490-9536001 (+2)
 >   0 free blocks, 16012 free inodes, 9 directories
 >   Free blocks:
 >   Free inodes: 4768117-4784128
 > Group 292: (Blocks 9568256-9601023)
 >   Block bitmap at 9568256 (+0), Inode bitmap at 9568257 (+1)
 >   Inode table at 9568258-9568769 (+2)
 >   0 free blocks, 15647 free inodes, 6 directories
 >   Free blocks:
 >   Free inodes: 4784866-4800512
 > Group 293: (Blocks 9601024-9633791)
 >   Block bitmap at 9601024 (+0), Inode bitmap at 9601025 (+1)
 >   Inode table at 9601026-9601537 (+2)
 >   0 free blocks, 16098 free inodes, 11 directories
 >   Free blocks:
 >   Free inodes: 4800799-4816896
 > Group 294: (Blocks 9633792-9666559)
 >   Block bitmap at 9633792 (+0), Inode bitmap at 9633793 (+1)
 >   Inode table at 9633794-9634305 (+2)
 >   0 free blocks, 15596 free inodes, 11 directories
 >   Free blocks:
 >   Free inodes: 4817685-4833280
 > Group 295: (Blocks 9666560-9699327)
 >   Block bitmap at 9666560 (+0), Inode bitmap at 9666561 (+1)
 >   Inode table at 9666562-9667073 (+2)
 >   0 free blocks, 15971 free inodes, 54 directories
 >   Free blocks:
 >   Free inodes: 4833694-4849664
 > Group 296: (Blocks 9699328-9732095)
 >   Block bitmap at 9699328 (+0), Inode bitmap at 9699329 (+1)
 >   Inode table at 9699330-9699841 (+2)
 >   0 free blocks, 16251 free inodes, 16 directories
 >   Free blocks:
 >   Free inodes: 4849798-4866048
 > Group 297: (Blocks 9732096-9764863)
 >   Block bitmap at 9732096 (+0), Inode bitmap at 9732097 (+1)
 >   Inode table at 9732098-9732609 (+2)
 >   0 free blocks, 16300 free inodes, 12 directories
 >   Free blocks:
 >   Free inodes: 4866133-4882432
 > Group 298: (Blocks 9764864-9797631)
 >   Block bitmap at 9764864 (+0), Inode bitmap at 9764865 (+1)
 >   Inode table at 9764866-9765377 (+2)
 >   0 free blocks, 16205 free inodes, 30 directories
 >   Free blocks:
 >   Free inodes: 4882612-4898816
 > Group 299: (Blocks 9797632-9830399)
 >   Block bitmap at 9797632 (+0), Inode bitmap at 9797633 (+1)
 >   Inode table at 9797634-9798145 (+2)
 >   0 free blocks, 15521 free inodes, 53 directories
 >   Free blocks:
 >   Free inodes: 4899680-4915200
 > Group 300: (Blocks 9830400-9863167)
 >   Block bitmap at 9830400 (+0), Inode bitmap at 9830401 (+1)
 >   Inode table at 9830402-9830913 (+2)
 >   0 free blocks, 15726 free inodes, 6 directories
 >   Free blocks:
 >   Free inodes: 4915859-4931584
 > Group 301: (Blocks 9863168-9895935)
 >   Block bitmap at 9863168 (+0), Inode bitmap at 9863169 (+1)
 >   Inode table at 9863170-9863681 (+2)
 >   0 free blocks, 15431 free inodes, 100 directories
 >   Free blocks:
 >   Free inodes: 4932538-4947968
 > Group 302: (Blocks 9895936-9928703)
 >   Block bitmap at 9895936 (+0), Inode bitmap at 9895937 (+1)
 >   Inode table at 9895938-9896449 (+2)
 >   0 free blocks, 16291 free inodes, 27 directories
 >   Free blocks:
 >   Free inodes: 4948062-4964352
 > Group 303: (Blocks 9928704-9961471)
 >   Block bitmap at 9928704 (+0), Inode bitmap at 9928705 (+1)
 >   Inode table at 9928706-9929217 (+2)
 >   0 free blocks, 16355 free inodes, 3 directories
 >   Free blocks:
 >   Free inodes: 4964382-4980736
 > Group 304: (Blocks 9961472-9994239)
 >   Block bitmap at 9961472 (+0), Inode bitmap at 9961473 (+1)
 >   Inode table at 9961474-9961985 (+2)
 >   0 free blocks, 16314 free inodes, 11 directories
 >   Free blocks:
 >   Free inodes: 4980807-4997120
 > Group 305: (Blocks 9994240-10027007)
 >   Block bitmap at 9994240 (+0), Inode bitmap at 9994241 (+1)
 >   Inode table at 9994242-9994753 (+2)
 >   0 free blocks, 16342 free inodes, 10 directories
 >   Free blocks:
 >   Free inodes: 4997163-5013504
 > Group 306: (Blocks 10027008-10059775)
 >   Block bitmap at 10027008 (+0), Inode bitmap at 10027009 (+1)
 >   Inode table at 10027010-10027521 (+2)
 >   0 free blocks, 16355 free inodes, 4 directories
 >   Free blocks:
 >   Free inodes: 5013534-5029888
 > Group 307: (Blocks 10059776-10092543)
 >   Block bitmap at 10059776 (+0), Inode bitmap at 10059777 (+1)
 >   Inode table at 10059778-10060289 (+2)
 >   0 free blocks, 16362 free inodes, 6 directories
 >   Free blocks:
 >   Free inodes: 5029911-5046272
 > Group 308: (Blocks 10092544-10125311)
 >   Block bitmap at 10092544 (+0), Inode bitmap at 10092545 (+1)
 >   Inode table at 10092546-10093057 (+2)
 >   0 free blocks, 16203 free inodes, 1 directories
 >   Free blocks:
 >   Free inodes: 5046454-5062656
 > Group 309: (Blocks 10125312-10158079)
 >   Block bitmap at 10125312 (+0), Inode bitmap at 10125313 (+1)
 >   Inode table at 10125314-10125825 (+2)
 >   0 free blocks, 16382 free inodes, 0 directories
 >   Free blocks:
 >   Free inodes: 5062659-5079040
 > Group 310: (Blocks 10158080-10190847)
 >   Block bitmap at 10158080 (+0), Inode bitmap at 10158081 (+1)
 >   Inode table at 10158082-10158593 (+2)
 >   0 free blocks, 16354 free inodes, 2 directories
 >   Free blocks:
 >   Free inodes: 5079071-5095424
 > Group 311: (Blocks 10190848-10223615)
 >   Block bitmap at 10190848 (+0), Inode bitmap at 10190849 (+1)
 >   Inode table at 10190850-10191361 (+2)
 >   0 free blocks, 16384 free inodes, 0 directories
 >   Free blocks:
 >   Free inodes: 5095425-5111808
 > Group 312: (Blocks 10223616-10256383)
 >   Block bitmap at 10223616 (+0), Inode bitmap at 10223617 (+1)
 >   Inode table at 10223618-10224129 (+2)
 >   0 free blocks, 16296 free inodes, 2 directories
 >   Free blocks:
 >   Free inodes: 5111897-5128192
 > Group 313: (Blocks 10256384-10289151)
 >   Block bitmap at 10256384 (+0), Inode bitmap at 10256385 (+1)
 >   Inode table at 10256386-10256897 (+2)
 >   0 free blocks, 16384 free inodes, 0 directories
 >   Free blocks:
 >   Free inodes: 5128193-5144576
 > Group 314: (Blocks 10289152-10321919)
 >   Block bitmap at 10289152 (+0), Inode bitmap at 10289153 (+1)
 >   Inode table at 10289154-10289665 (+2)
 >   0 free blocks, 16384 free inodes, 0 directories
 >   Free blocks:
 >   Free inodes: 5144577-5160960
 > Group 315: (Blocks 10321920-10354687)
 >   Block bitmap at 10321920 (+0), Inode bitmap at 10321921 (+1)
 >   Inode table at 10321922-10322433 (+2)
 >   0 free blocks, 16290 free inodes, 0 directories
 >   Free blocks:
 >   Free inodes: 5161055-5177344
 > Group 316: (Blocks 10354688-10387455)
 >   Block bitmap at 10354688 (+0), Inode bitmap at 10354689 (+1)
 >   Inode table at 10354690-10355201 (+2)
 >   0 free blocks, 16384 free inodes, 0 directories
 >   Free blocks:
 >   Free inodes: 5177345-5193728
 > Group 317: (Blocks 10387456-10420223)
 >   Block bitmap at 10387456 (+0), Inode bitmap at 10387457 (+1)
 >   Inode table at 10387458-10387969 (+2)
 >   0 free blocks, 16384 free inodes, 0 directories
 >   Free blocks:
 >   Free inodes: 5193729-5210112
 > Group 318: (Blocks 10420224-10452991)
 >   Block bitmap at 10420224 (+0), Inode bitmap at 10420225 (+1)
 >   Inode table at 10420226-10420737 (+2)
 >   0 free blocks, 16384 free inodes, 0 directories
 >   Free blocks:
 >   Free inodes: 5210113-5226496
 > Group 319: (Blocks 10452992-10484411)
 >   Block bitmap at 10452992 (+0), Inode bitmap at 10452993 (+1)
 >   Inode table at 10452994-10453505 (+2)
 >   0 free blocks, 16384 free inodes, 0 directories
 >   Free blocks:
 >   Free inodes: 5226497-5242880
 >
 >
 >
 > Thanks,
 >
 >
 >
 >
 >
 >
 > On Sun 27/03/11 17:20, Leonardo Valente leonardovale...@gmail.com sent:
 >
 >
 >
 > Adding to our colleague's comment: as root, run the following example command:
 >
 > # dumpe2fs /dev/sda3
 >
 > And look for the "free inodes" information.
 >
 > Note: you can post the whole output here to the group, if you prefer.
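 > If the full dump is too long to post, the superblock summary alone already carries the totals; a shorter variant (assuming the e2fsprogs 1.39 dumpe2fs seen elsewhere in this thread, whose -h option prints only the superblock):
 > # dumpe2fs -h /dev/sdb1 | egrep -i 'free blocks|free inodes|reserved block'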
 >
 > --
 > Leonardo Valente
 > Red Hat Certified Engineer
 > Linux Professional Institute Certified Level 2
 >
 > On 27 March 2011 11:23, José Carlos Guerrieri  wrote:
 > > Check the number of inodes.
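 > > A lighter-weight check for this is df's own inode report on the mount point; the IUse% column is the one to watch:
 > > df -i /u01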
 > >
 > > On 27 March 2011 09:23,  wrote:
 > >
 > >>
 > >>
 > >> Gentlemen,
 > >> After setting up a lab with RAC 10gR2, I went to add a node as the manual says, with
 > >> $ORACLE_GRID_HOME/OUI/BIN/ADDNODE.SH.
 > >> While installing the cluster and the database binaries for ASM, I already have
 > >> 100% utilization of /u01.
 > >> I believe this must be some bug, since this mount point has 40 GB.
 > >> RAC 10gR2
 > >> Linux Red Hat 5.5
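 > >> Before assuming a bug, a per-directory scan shows where the 40 GB actually went; a sketch, where the /u01/app path assumes the usual OFA layout and may differ in this lab:
 > >> du -xsm /u01/* | sort -n
 > >> du -xsm /u01/app/*/* 2>/dev/null | sort -n | tail -20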
 > >> Thanks,