We are currently running SFORA RAC 5.0MP3 on a Solaris 10 64-bit two-node
cluster. We run a large 10TB Oracle RAC database on this cluster, and we
are in the process of using VxVM to mirror the data from an old EMC array
to a new EMC array.

We provisioned the new EMC storage without incident, added the new disks
to the existing diskgroups, and then mirrored the data between arrays. As
the final step in the process, we ran "vxplex -g dgname dis plex-nn" and
saw the following for three separate volumes:


v  arch         -            ENABLED  ACTIVE   6487707904 SELECT    -        fsgen
pl arch-01      arch         ENABLED  ACTIVE   6487707904 CONCAT    -        RW
sd 0804088C-01  arch-01      0804088C 0         1696907008 0          c3t50000972C00C915Cd25 ENA
sd 080408BC-01  arch-01      080408BC 0         1696907008 1696907008 c3t50000972C00C915Cd26 ENA
sd 080408EC-01  arch-01      080408EC 0         1696907008 3393814016 c3t50000972C00C915Cd27 ENA
sd 0804091C-01  arch-01      0804091C 0         1396986880 5090721024 c3t50000972C00C915Cd28 ENA
pl arch-02      arch         ENABLED  ACTIVE   6487707904 CONCAT    -        RW
sd 20520113-01  arch-02      20520113 0         1428029344 0          c3t5006016844600097d8 ENA
sd 20520061-02  arch-02      20520061 861984256 666538784  1428029344 c2t5006016244600097d12 ENA
sd 20520121-01  arch-02      20520121 0         1537081088 2094568128 c2t5006016244600097d16 ENA
sd 20520064-01  arch-02      20520064 0         1428029344 3631649216 c3t5006016844600097d25 ENA
sd 20520103-01  arch-02      20520103 0         1428029344 5059678560 c3t5006016844600097d26 ENA
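
As a quick sanity check on the layout above (just arithmetic on the listed
figures), the subdisk lengths in each CONCAT plex do sum to the full volume
length, so both plexes cover the volume completely:

```shell
# Volume length from the vxprint output above, in 512-byte sectors.
vol_len=6487707904

# Sum of subdisk lengths for each plex, taken from the listing above.
arch01=$((1696907008 + 1696907008 + 1696907008 + 1396986880))
arch02=$((1428029344 + 666538784 + 1537081088 + 1428029344 + 1428029344))

# Both sums equal vol_len, i.e. 6487707904 sectors.
echo "arch-01: $arch01  arch-02: $arch02  volume: $vol_len"
```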

racnyc05:~>sudo vxplex -g oradg dis arch-02
VxVM vxsync ERROR V-5-1-2321 Cannot lock into memory: Resource temporarily unavailable
VxVM vxplex ERROR V-5-1-10870 fsgen/vxplex: Warning: vxsync exited with exitcode 42:
        Volume data may not be flushed to all plexes

Despite the warning, the operation appears to have succeeded: the plex was
disassociated and the filesystem is still accessible:

racnyc05:~>vxprint -htg oradg arch arch-02
V  NAME         RVG/VSET/CO  KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
SC NAME         PLEX         CACHE    DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
DC NAME         PARENTVOL    LOGVOL
SP NAME         SNAPVOL      DCO
EX NAME         ASSOC        VC                        PERMS    MODE     STATE
SR NAME         KSTATE

pl arch-02      -            DISABLED -        6487707904 CONCAT    -        RW
sd 20520113-01  arch-02      20520113 0         1428029344 0          c3t5006016844600097d8 ENA
sd 20520061-02  arch-02      20520061 861984256 666538784  1428029344 c2t5006016244600097d12 ENA
sd 20520121-01  arch-02      20520121 0         1537081088 2094568128 c2t5006016244600097d16 ENA
sd 20520064-01  arch-02      20520064 0         1428029344 3631649216 c3t5006016844600097d25 ENA
sd 20520103-01  arch-02      20520103 0         1428029344 5059678560 c3t5006016844600097d26 ENA

v  arch         -            ENABLED  ACTIVE   6487707904 SELECT    -        fsgen
pl arch-01      arch         ENABLED  ACTIVE   6487707904 CONCAT    -        RW
sd 0804088C-01  arch-01      0804088C 0         1696907008 0          c3t50000972C00C915Cd25 ENA
sd 080408BC-01  arch-01      080408BC 0         1696907008 1696907008 c3t50000972C00C915Cd26 ENA
sd 080408EC-01  arch-01      080408EC 0         1696907008 3393814016 c3t50000972C00C915Cd27 ENA
sd 0804091C-01  arch-01      0804091C 0         1396986880 5090721024 c3t50000972C00C915Cd28 ENA

racnyc05:~>df -h /arch
Filesystem             size   used  avail capacity  Mounted on
/dev/vx/dsk/oradg/arch
                       3.0T    54G   2.8T     2%    /arch
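
For what it's worth, the volume length is consistent with the df output: a
back-of-the-envelope conversion (assuming the usual 512-byte sector) gives
roughly the 3.0T that df reports:

```shell
# 6487707904 sectors * 512 bytes/sector, converted to GiB with
# integer division.
bytes=$((6487707904 * 512))
gib=$((bytes / 1024 / 1024 / 1024))
echo "${gib} GiB"    # 3093 GiB, i.e. the ~3.0T that df shows for /arch
```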

racnyc05:~>sudo fstyp /dev/vx/rdsk/oradg/arch
vxfs

This has happened on three volumes in two different clustered diskgroups
on this cluster. I'm unwilling to proceed with the more critical
volumes until I get a better understanding of the root cause. I'm
concerned that the remaining active plex on the new array may become
inconsistent during the plex disassociation, which could cause serious
problems for our database.

Here's a little more information:

racnyc05:~>modinfo | grep -i vx
 39 7bea2000  3e4e0 307   1  vxdmp (VxVM 5.0MP3: DMP Driver)
 41 7ba00000 209248 272   1  vxio (VxVM 5.0MP3 I/O driver)
 43 7bea11e8    c78 273   1  vxspec (VxVM 5.0MP3 control/status driv)
236 7afc54a8    cb0 310   1  vxportal (VxFS 5.0_REV-5.0MP3A25_sol port)
240 7a600000 1d89e0  20   1  vxfs (VxFS 5.0_REV-5.0MP3A25_sol SunO)
260 7a7ec000   a9e0 311   1  fdd (VxQIO 5.0_REV-5.0MP3A25_sol Qui)
264 7ab4a000  51c10 315   1  vxfen (VRTS Fence 5.0MP3)
265 7b600000  21ec0 316   1  vxglm (VxGLM 5.0MP3 (SunOS 5.10))
266 7ab9e000   5418 317   1  vxgms (VxGMS 5.0MP3 (SunOS))


Let me know if you need any additional information.

Cheers,

Paul
_______________________________________________
Veritas-vx maillist  -  [email protected]
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx
