Hello everyone -- We hit a strange error today on our production write
master. During a routine maintenance script run, we got the following error:

"ERROR:  cache lookup failed for aggregate 5953992"

I think I found the culprit. We have a cron script that (among other
things) recreates the aggregate function array_cat_aggregate() once every
minute. My guess is that a) a transaction does not guarantee a stable
snapshot of custom functions, and b) we got unlucky and the aggregate was
dropped mid-execution of the UPDATE.

I was just wondering if my analysis is correct, so I can make the necessary
changes to production to prevent this from happening again.
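For what it's worth, here is a sketch of how I think the race plays out. The aggregate definition below is a guess at what our cron job creates (the signature and state function may differ from the real one):

```sql
-- Hypothetical timeline of the suspected race.

-- Session A (the cron job, once a minute):
DROP AGGREGATE IF EXISTS array_cat_aggregate(anyarray);
CREATE AGGREGATE array_cat_aggregate(anyarray) (
    SFUNC = array_cat,   -- built-in array concatenation
    STYPE = anyarray
);

-- Session B (the maintenance UPDATE): resolves the aggregate name to an OID
-- at plan time. If session A's DROP commits before execution reaches the
-- aggregate call, that OID no longer exists in the system catalogs and the
-- lookup fails with "cache lookup failed for aggregate <oid>". Function
-- references go through the syscache rather than the transaction's MVCC
-- snapshot, so an open transaction does not protect against this.
```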

Here is the query that produced the error:

UPDATE marbury_case SET
  components_vector = (
    SELECT array_cat_aggregate(component) FROM (
      SELECT ARRAY[
        id,
        type_id,
        "offset",
        length,
        internal_id,
        parent_id,
        right_sibling_id] AS component
      FROM marbury_component WHERE case_id = marbury_case.id ORDER BY id)
    AS foo),
  attributes_json = (
    SELECT array_to_json(array_agg(attributes || hstore('_ind',
ind::text))) FROM (
      SELECT (rank() OVER (ORDER BY id)) - 1 AS ind, attributes
      FROM marbury_component
      WHERE case_id = marbury_case.id ORDER BY id)
    AS foo WHERE attributes IS NOT NULL
                 AND array_length(akeys(attributes), 1) > 0),
  vectors_updated = timeofday()::timestamp
WHERE id = 71865
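In case it helps anyone hitting the same thing: one mitigation I am considering is replacing the aggregate in place instead of dropping and recreating it, since CREATE OR REPLACE keeps the existing OID, so concurrent queries that have already resolved the aggregate keep working. This requires a PostgreSQL version that supports CREATE OR REPLACE AGGREGATE (12+), and the definition below is again a guess at our real one:

```sql
-- Sketch, assuming PostgreSQL 12+ and a hypothetical aggregate definition.
-- Replacing in place preserves the aggregate's OID, avoiding the stale
-- syscache lookup that DROP + CREATE can trigger.
CREATE OR REPLACE AGGREGATE array_cat_aggregate(anyarray) (
    SFUNC = array_cat,
    STYPE = anyarray
);
```

On older versions, the simpler fix is probably to have the cron job recreate the aggregate only when its definition has actually changed, rather than unconditionally every minute.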
