On Tue, Oct 24, 2017 at 10:20 PM, Justin Pryzby wrote:
> I think you must have compared these:
Yes, I did. My mistake.
> On Tue, Oct 24, 2017 at 03:11:44PM -0500, Justin Pryzby wrote:
>> ts=# SELECT * FROM bt_page_items(get_raw_page('sites_idx', 1));
>>
>> itemoffset | 48
>> ctid | (1,37)
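As a sketch of how one might chase an index item like the above back to the heap — table and column names are assumptions taken from the rest of the thread:

```sql
-- Fetch the heap tuple that index item (1,37) points at, along with
-- its visibility-relevant system columns.
SELECT ctid, xmin, xmax, site_location
FROM sites
WHERE ctid = '(1,37)';
```

Comparing xmin/xmax of the tuples behind two duplicate index entries can show whether a visibility problem let both rows pass the unique check.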
On Tue, Oct 24, 2017 at 02:57:47PM -0700, Peter Geoghegan wrote:
> On Tue, Oct 24, 2017 at 1:11 PM, Justin Pryzby wrote:
> > ..which I gather just verifies that the index is corrupt, not sure if
> > there's anything else to do with it? Note, we've already removed the
> > duplicate rows.
>
On Tue, Oct 24, 2017 at 01:48:55PM -0500, Kenneth Marshall wrote:
> I just dealt with a similar problem with pg_repack and a PostgreSQL 9.5 DB,
> the exact same error. It seemed to be caused by a tuple visibility issue that
> allowed the "working" unique index to be built, even though a duplicate row
On Tue, Oct 24, 2017 at 1:11 PM, Justin Pryzby wrote:
> ..which I gather just verifies that the index is corrupt, not sure if there's
> anything else to do with it? Note, we've already removed the duplicate rows.
Yes, the index itself is definitely corrupt -- this failed before the
new "heapallindexed"
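For anyone following along, the B-tree invariants of the suspect index can be verified directly with the amcheck contrib module that ships with PostgreSQL 10 (index name taken from the thread):

```sql
-- Raises an error describing the violated invariant if the index
-- structure is corrupt; returns silently if the checks pass.
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT bt_index_check('sites_idx'::regclass);
```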
On Tue, Oct 24, 2017 at 12:31:49PM -0700, Peter Geoghegan wrote:
> On Tue, Oct 24, 2017 at 11:48 AM, Kenneth Marshall wrote:
> >> Really ? pg_repack "found" and was victim to the duplicate keys, and
> >> rolled back its work. The CSV logs clearly show that our application
> >> INSERTed rows
On Tue, Oct 24, 2017 at 11:48 AM, Kenneth Marshall wrote:
>> Really ? pg_repack "found" and was victim to the duplicate keys, and rolled
>> back its work. The CSV logs clearly show that our application INSERTed rows
>> which are duplicates.
>>
>> [pryzbyj@database ~]$ rpm -qa pg_repack10
>> pg_r
On Tue, Oct 24, 2017 at 01:30:19PM -0500, Justin Pryzby wrote:
> On Tue, Oct 24, 2017 at 01:27:14PM -0500, Kenneth Marshall wrote:
> > On Tue, Oct 24, 2017 at 01:14:53PM -0500, Justin Pryzby wrote:
>
> > > Note:
> > > I run a script which does various combinations of ANALYZE/VACUUM
> > > (FULL/ANALYZE)
On Tue, Oct 24, 2017 at 01:27:14PM -0500, Kenneth Marshall wrote:
> On Tue, Oct 24, 2017 at 01:14:53PM -0500, Justin Pryzby wrote:
> > Note:
> > I run a script which does various combinations of ANALYZE/VACUUM
> > (FULL/ANALYZE)
> > following the upgrade, and a script runs nightly with REINDEX an
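For reference, rebuilding the suspect index once the duplicates have been cleaned up might look like the following sketch (note that in PostgreSQL 10 a plain REINDEX takes an exclusive lock on the table; REINDEX CONCURRENTLY does not arrive until version 12):

```sql
-- Rebuild just the suspect index:
REINDEX INDEX sites_idx;
-- or rebuild every index on the table:
REINDEX TABLE sites;
```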
On Tue, Oct 24, 2017 at 01:14:53PM -0500, Justin Pryzby wrote:
> I upgraded another instance to PG10 yesterday and this AM found unique key
> violations.
>
> Our application is SELECTing FROM sites WHERE site_location=$1, and if it
> doesn't find one, INSERTs one (I know that's racy and not ideal).
I upgraded another instance to PG10 yesterday and this AM found unique key
violations.
Our application is SELECTing FROM sites WHERE site_location=$1, and if it
doesn't find one, INSERTs one (I know that's racy and not ideal). We ended up
with duplicate sites, despite a unique index. We removed the duplicate rows.
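Since the discussion turns on the racy SELECT-then-INSERT pattern, here is a sketch of the race-free alternative available since PostgreSQL 9.5; the table and column names come from the thread, while the site_id RETURNING column is an assumption:

```sql
-- Insert-if-absent in a single statement; relies on the unique index
-- on site_location to detect the conflict atomically.
INSERT INTO sites (site_location)
VALUES ($1)
ON CONFLICT (site_location) DO NOTHING
RETURNING site_id;  -- hypothetical PK column; yields no row on conflict
```

When the row already exists, RETURNING produces nothing, so the application still needs a follow-up SELECT (or ON CONFLICT ... DO UPDATE) to fetch the existing row.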