Hi Tim,

Thanks for the feedback,

[Chris]

AttributeError: ClientCache instance has no attribute '_get'
Both at random times while the servers are running...

Tim Peters wrote:
Then-- sorry --I don't have a clue.  Like I said, I don't see any code
capable of removing a ClientCache's `_get` attribute after the cache has
been opened, and I don't see a way for a ZEO client cache to work at all
without its _get attribute (_get is used on every load and every
invalidate).  If this only occurred at client startup time, then it might
get pinned on a startup race.  But if it occurs at random times while the
servers are running, I have no theory.
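
(A toy illustration, not the real ClientCache code, of how an error like that arises: if an attribute is only installed by open(), any load that sneaks in before open() has run dies with exactly this kind of AttributeError.)

    # Toy sketch only -- names here are made up, this is not ZODB code.
    class ToyCache:
        def open(self):
            # the attribute only exists once open() has been called
            self._get = {}.get

        def load(self, oid):
            return self._get(oid)   # AttributeError if open() never ran

    cache = ToyCache()
    cache.load('some-oid')          # -> "instance has no attribute '_get'"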

Hmm... well, I guess the clients could have restarted during the day, so maybe it is a startup race after all?

It looks like open() is "supposed to be" called sometime during initial
cache verification.  That code is complicated.

Hmmm, Tim says the code is complicated; I guess I should be prepared to
wipe my brains off the wall should I try to read it?

Up to you, but you should know I gave up <wink>.

No one needs covering in pink squishy bits this close to Christmas...

The log shows that your client takes the "full cache verification" path.  We
know this because it says, e.g.,

    2005-10-11T17:57:58 INFO(0) ZCS:13033 Verifying cache

and "Verifying cache" is produced from only one place in ZODB.  That's this
block in ClientStorage.verify_cache:

        log2(INFO, "Verifying cache")
        # setup tempfile to hold zeoVerify results
        self._tfile = tempfile.TemporaryFile(suffix=".inv")
        self._pickler = cPickle.Pickler(self._tfile, 1)
        self._pickler.fast = 1 # Don't use the memo

        self._cache.verify(server.zeoVerify)
        self._pending_server = server
        server.endZeoVerify()
        return "full verification"


Note that this calls self._cache.verify(), and that's exactly the verify()
method I talked about just above (the one that opens the cache immediately,
giving the cache its _get attribute).

Rats.

I suppose it's _conceivable_ that there's some obscure race allowing a load
or invalidation attempt to sneak in here, before self._cache.verify() is
called.  But if so, that's necessarily part of ZEO client startup, you say
it's not correlated with startup, and your logs don't show anything unusual
between "Verifying cache" and "endVerify finishing".

More rats. Is there a sinking ship knocking around here?

ERROR(200) ZODB Shouldn't load state for 0x6c33b4 when the connection
is closed

...which is a bit more disturbing.

That one comes out of Connection.setstate(), and appears to mean what
it says:  the app is trying to load an object from this Connection but
the Connection has been closed.
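
For reference, a minimal sketch (the file name and key are placeholders) of the usage pattern that typically produces that message: keep hold of a persistent object after its connection is closed, then touch it so it has to load its state.

    from ZODB.FileStorage import FileStorage
    from ZODB.DB import DB

    db = DB(FileStorage('Data.fs'))   # placeholder storage
    conn = db.open()
    obj = conn.root()['some_key']     # placeholder key
    conn.close()

    # If obj still needs to load its state (i.e. it's a "ghost"), this
    # attribute access goes through Connection.setstate() on a closed
    # connection and produces the error above.
    obj.some_attribute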

Hurm... Zope? Plone? Sessions? Which one shall it be today? *sigh*
Any clues gratefully received (i.e. common patterns that cause this error).

That's an app problem, not a ZODB problem.  Have you tracked down what kind of object
(say) oid 0x6c33b4 is?  That may give a good clue.

Indeed, next time I see one, I'll have a poke. Don't suppose you have a snippet to turn 0x6c33b4 into a Python object in a zopectl debug session?
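
(Something along these lines ought to do it -- a sketch, assuming the `app` binding that zopectl debug provides; the rest is standard ZODB API.)

    from ZODB.utils import p64

    conn = app._p_jar                 # the Connection behind the app object
    obj = conn.get(p64(0x6c33b4))     # load the object for that oid
    print repr(obj), obj.__class__    # what kind of thing is it?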

(PPS: kinda interested in what those conflict errors are, read or write? How
do I find out?)

ZODB raises ReadConflictError specifically for a read conflict.  It raises
ConflictError for a write conflict.  Whatever code in Zope produces the log
messages apparently suppresses the distinction.  This is easy to do, since
ReadConflictError is a subclass of ConflictError; I _guess_ that code just
catches the base ConflictError to capture both.  Poking around in Zope might
help (maybe there's an easy way to change it to preserve the distinction).
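
For example, a minimal sketch (Python 2, and assuming the standalone transaction module) of preserving the distinction: catch the subclass before the base class.

    import transaction
    from ZODB.POSException import ConflictError, ReadConflictError

    try:
        transaction.commit()
    except ReadConflictError:
        print "read conflict"     # the subclass, caught first
    except ConflictError:
        print "write conflict"    # everything else in the ConflictError family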

Shall do!

thanks for all the help,

cheers,

Chris

--
Simplistix - Content Management, Zope & Python Consulting
           - http://www.simplistix.co.uk