A customer is running ZFS version 15 on Solaris 10 10/08 (SPARC) supporting Oracle 
10.2.0.3 databases in a development and production test environment.  We have come 
across a cache inconsistency with one of the Oracle databases where fetching a 
record returns a 'historical value' that has since been changed and committed many 
times.  This is an isolated occurrence and is not consistently reproducible; I 
can't replicate it on other tables.  I'll also be posting a note to the ZFS 
discussion list.



Is it possible for a read to bypass the write cache and fetch from disk before 
the cache has been flushed to disk?  This is a large system that is only 
infrequently busy.  The Oracle SGA is kept to a minimal 1GB per instance and we 
rely more heavily on the ZFS cache, which lets us fit more instances (many of 
which are cloned snapshots).  We've been running this setup for two years.  The 
filesystems are set with compression on and a recordsize of 8k for Oracle 
datafiles and 128k for the redo logs.
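

For reference, the dataset settings described above look roughly like the 
following (the pool and filesystem names here are made up for illustration; only 
the property values reflect our setup):

# datafile filesystems: 8k recordsize to match the Oracle block size, compression on
zfs set recordsize=8k orapool/oradata
zfs set compression=on orapool/oradata

# redo log filesystems: 128k recordsize
zfs set recordsize=128k orapool/oralogs
zfs set compression=on orapool/oralogs

# dev/test instances are built from snapshots and clones, along these lines
zfs snapshot orapool/oradata@baseline
zfs clone orapool/oradata@baseline orapool/oradata_clone1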



Here are the details of the scenario:



1.  An update statement resets the existing value.  Immediately before this 
update the column held -643; it had originally been set to 3 before today's 
session:



SQL> update [name deleted] set status_cd = 1 where id = 65;

1 row updated.

SQL> commit;

Commit complete.

SQL> select rowid, id, status_cd from [table name deleted]

SQL> where id = 65;

ROWID              ID                 STATUS_CD

------------------ ------------------ ----------

AAAq/DAAAAAAERlAAM                 65          3



Note that on retrieval the status_cd comes back as the original value of 3, not 
the just-committed value of 1, and not even the previous value of -643.
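

If it helps anyone diagnose this, one way to narrow down whether the stale value 
is coming from the Oracle buffer cache or from the filesystem (ZFS ARC/disk) is 
to force Oracle to re-read the block and to look at the row's undo-based history.  
This is just a sketch; the table and column names are the redacted ones from 
above, and flushing the buffer cache is intrusive on a shared instance:

SQL> -- locate the datafile and block that hold the row
SQL> select dbms_rowid.rowid_relative_fno(rowid) file_no,
  2         dbms_rowid.rowid_block_number(rowid) block_no
  3  from [table name deleted] where id = 65;

SQL> -- force the next read to come from the datafile rather than the buffer cache
SQL> alter system flush buffer_cache;

SQL> select id, status_cd, ora_rowscn from [table name deleted] where id = 65;

SQL> -- committed history of the row from undo (flashback version query, 10g)
SQL> select versions_startscn, versions_xid, status_cd
  2  from [table name deleted]
  3  versions between scn minvalue and maxvalue
  4  where id = 65;

If the re-read after the flush still returns 3 while the version query shows the 
commits that set -643 and 1, that would suggest the stale image is below the 
Oracle buffer cache rather than in it.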



2.  The Oracle trace file shows that the update was issued and committed:



=====================
PARSING IN CURSOR #1 len=70 dep=0 uid=110 oct=6 lid=110 tim=17554807027344 hv=3512595279 ad='fd211878'
update [table deleted] set status_cd = 1 where id = 65
END OF STMT
PARSE #1:c=0,e=54,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,tim=17554807027340
BINDS #1:
EXEC #1:c=0,e=257,p=0,cr=3,cu=3,mis=0,r=1,dep=0,og=2,tim=17554807027737
WAIT #1: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=17554807027803
WAIT #1: nam='SQL*Net message from client' ela= 2999139 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=17554810026992
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=0 op='UPDATE  [TABLE DELETED] (cr=3 pr=0 pw=0 time=144 us)'
STAT #1 id=2 cnt=1 pid=1 pos=1 obj=177738 op='INDEX UNIQUE SCAN [TABLE_DELETED]_XPK (cr=3 pr=0 pw=0 time=19 us)'
PARSE #2:c=0,e=9,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,tim=17554810027367
XCTEND rlbk=0, rd_only=0
EXEC #2:c=0,e=226,p=0,cr=0,cu=1,mis=0,r=0,dep=0,og=0,tim=17554810027630
WAIT #2: nam='log file sync' ela= 833 buffer#=9408 p2=0 p3=0 obj#=-1 tim=17554810028507
WAIT #2: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=17554810028578
WAIT #2: nam='SQL*Net message from client' ela= 1825185 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=17554811853812
=====================
PARSING IN CURSOR #1 len=67 dep=0 uid=110 oct=3 lid=110 tim=17554811854015 hv=1593702413 ad='fd713640'
select status_cd from [table_deleted] where id = 65
END OF STMT
PARSE #1:c=0,e=41,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,tim=17554811854010
BINDS #1:
EXEC #1:c=0,e=91,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,tim=17554811854273
WAIT #1: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=17554811854327
FETCH #1:c=0,e=64,p=0,cr=4,cu=0,mis=0,r=1,dep=0,og=2,tim=17554811854436
WAIT #1: nam='SQL*Net message from client' ela= 780 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=17554811855291
FETCH #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,tim=17554811855331
WAIT #1: nam='SQL*Net message to client' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=17554811855366
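

For what it's worth, the trace above looks like an extended SQL trace (event 
10046 with wait events).  If anyone wants to try to reproduce this, the usual way 
to capture an equivalent trace for the session doing the update and select is 
something like:

SQL> alter session set timed_statistics = true;
SQL> alter session set events '10046 trace name context forever, level 8';
SQL> -- run the update, commit, and select here
SQL> alter session set events '10046 trace name context off';

Level 8 adds the WAIT lines shown above; level 12 would also record bind values.  
The trace file ends up under user_dump_dest on the database server.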





There are no Oracle or Solaris error messages indicating any issue with this 
update.  Has anyone seen this behavior?



The features of ZFS (snapshots/clones/compression) save us a ton of time on 
this platform and we have certainly benefited from them.  I just want to 
understand how something like this could occur and determine how we can prevent 
it in the future.

==========
Gerry Bragg
Sr. Developer
Altarum Institute
(734) 516-0825
gerry.br...@altarum.org
www.altarum.org
"Systems Research For Better Health"
