[GENERAL] ownership of sequences by tables in pg_dumps?

2012-01-12 Thread Andrew Hammond
Where foo is a number of different tables, I'm calling

pg_dump --format=custom --compress=9 --no-password
--file=public.foo.pgdump --table=public.foo --schema-only my_database

When I check the contents of that dump using

pg_restore -l public.foo.pgdump

in some cases it includes the foo_id_seq object and in others it does
not. How does pg_dump decide whether a sequence is associated with a
given table?
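For what it's worth (this is background, not from the thread): pg_dump includes a sequence with a table's dump when the sequence is owned by one of the table's columns, which happens automatically for serial columns or explicitly via ALTER SEQUENCE ... OWNED BY. A sketch of how to check, reusing the foo/foo_id_seq names from the example above and assuming the column is foo.id:

```sql
-- Ownership is recorded in pg_depend; pg_dump follows that link.
-- Which table/column (if any) owns the sequence?
SELECT d.refobjid::regclass AS owning_table,
       a.attname            AS owning_column
FROM pg_depend d
JOIN pg_attribute a
  ON a.attrelid = d.refobjid AND a.attnum = d.refobjsubid
WHERE d.objid = 'public.foo_id_seq'::regclass
  AND d.deptype = 'a';  -- 'a' = auto dependency, created by OWNED BY

-- Attach an orphaned sequence so pg_dump picks it up with the table:
ALTER SEQUENCE public.foo_id_seq OWNED BY public.foo.id;
```

Sequences with no such dependency are dumped only when selected on their own, which would explain the inconsistency between tables.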

Andrew

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] missing chunk 0 for toast value ...

2011-10-25 Thread Andrew Hammond
I found the following error message in my logfiles.

Oct 24 04:05:57 db-app02 postgres[24640]: [2-1]
user=nutricate,db=admin_production ERROR:  missing chunk number 0 for
toast value
2411466 in pg_toast_2619
Oct 24 04:05:57 db-app02 postgres[24640]: [2-2]
user=nutricate,db=admin_production STATEMENT:  SELECT devices.id,
devices.name, devices.info, devices.pos_id,
devices.device_group_id, devices.header_id, devices.footer_id,
devices.device_type_id, devices.auth_code, devices.pos_comm_id,
devices.printer_comm_id, devices.sw_version, devices.hw_version,
devices.device_status, devices.entity_id, devices.address,
devices.created_by, devices.create_method, devices.created_on,
devices.modified_by, devices.updated_on, devices.version_id,
devices.unique_id, devices.hw_key, devices.config_status,
devices.activated_on, devices.store_id,
devices.last_status_update, devices.loaded_on,
devices.header2_id, devices.footer2_id, devices.timezone_id,
devices.scanner_comm_id, devices.public_address,
devices.hostname, devices.update_sw_version,
devices.proxy_address, devices.proxy_type_id,
devices.build_error,
Oct 24 04:05:57 db-app02 postgres[24640]: [2-3]  devices.local_info,
devices.associated_on FROM devices WHERE (devices.entity_id IN
(SELECT U0.id FROM entities U0 WHERE ((U0.lft > 280 AND U0.rgt <
2597 ) OR U0.id = 140 )) AND devices.auth_code = E'0063' )

I tried reproducing it with

SELECT * FROM devices WHERE (devices.entity_id IN (SELECT U0.id FROM
entities U0 WHERE ((U0.lft > 280 AND U0.rgt < 2597) OR U0.id = 140
)) AND devices.auth_code = E'0063' )

This returned no rows and did not reproduce the error. I tried reproducing
with an even broader approach

CREATE TEMP TABLE foo AS SELECT * FROM devices;
CREATE TEMP TABLE bar AS SELECT * FROM entities;

Still no repro. This is the second time I've seen this error, and the last
time it also did not reproduce. I'm wondering a couple of things. First, is
there some other way to reproduce this error? Second, would simply deleting
the row cause the problem to go away? I wouldn't think so, but why else is
it not reproducing? Third, what is the recommended course of action here? We
have hourly backups, but this is a production database, so I would prefer
not to have to restore. I tested a restore from an hour ago and it ran
successfully.
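A sketch of one way people try to localize this kind of corruption (not from the thread, and note that here the failing toast table belongs to pg_statistic, so the damaged datum is a planner-statistics entry rather than a user row): force detoasting row by row until the error fires.

```sql
-- Hypothetical probe: detoast every row of the suspect table one at a
-- time; the ctid of the row that raises the error is the damaged one.
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT ctid FROM pg_statistic LOOP
        BEGIN
            PERFORM length(t::text)
            FROM pg_statistic t WHERE t.ctid = r.ctid;
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'damaged row at ctid %: %', r.ctid, SQLERRM;
        END;
    END LOOP;
END $$;
```

For pg_statistic specifically, a damaged entry can usually be cleared by deleting that row and re-running ANALYZE on the affected table, since the statistics are regenerated from scratch.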

A


Re: [GENERAL] missing chunk 0 for toast value ...

2011-10-25 Thread Andrew Hammond
On Tue, Oct 25, 2011 at 2:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Andrew Hammond andrew.george.hamm...@gmail.com writes:
  Oct 24 04:05:57 db-app02 postgres[24640]: [2-1]
  user=nutricate,db=admin_production ERROR:  missing chunk number 0 for
  toast value
  2411466 in pg_toast_2619

 Hmm ... pg_toast_2619 is pg_statistic's toast table.  Is this 9.0.x,
 and are you by any chance in the habit of running CLUSTER or VACUUM FULL
 on your system catalogs?  Could one have been running in the background
 when this happened?


This is 9.0.4.

I am not aware of any VACUUM FULL or CLUSTER operations that are scheduled.
To the best of my knowledge this DB has never been either clustered or
vacuum full'd. The following occur in the log file immediately before the
error message above.

Oct 24 04:05:57 db-app02 postgres[24639]: [2-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_catalog.pg_statistic: index scans: 0
Oct 24 04:05:57 db-app02 postgres[24639]: [2-2] #011pages: 0 removed, 150
remain
Oct 24 04:05:57 db-app02 postgres[24639]: [2-3] #011tuples: 254 removed, 925
remain
Oct 24 04:05:57 db-app02 postgres[24639]: [2-4] #011system usage: CPU
0.01s/0.00u sec elapsed 0.02 sec
Oct 24 04:05:57 db-app02 postgres[24639]: [3-1] user=,db= LOG:  automatic
vacuum of table admin_production.public.devices: index scans: 0
Oct 24 04:05:57 db-app02 postgres[24639]: [3-2] #011pages: 0 removed, 353
remain
Oct 24 04:05:57 db-app02 postgres[24639]: [3-3] #011tuples: 824 removed,
2261 remain
Oct 24 04:05:57 db-app02 postgres[24639]: [3-4] #011system usage: CPU
0.00s/0.00u sec elapsed 0.08 sec
Oct 24 04:05:57 db-app02 postgres[24639]: [4-1] user=,db= LOG:  automatic
analyze of table admin_production.public.devices system usage: CPU
0.00s/0.08u sec elapsed 0.17 sec


Re: [GENERAL] missing chunk 0 for toast value ...

2011-10-25 Thread Andrew Hammond
On Tue, Oct 25, 2011 at 4:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Andrew Hammond andrew.george.hamm...@gmail.com writes:
  The following occur in the log file immediately before the
  error message above.

 Hmm, that's pretty interesting ... are there any nearby autovacuums of
 pg_toast_2619?


Several, both before and after the error message:

Oct 24 03:49:57 db-app02 postgres[23554]: [4-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_toast.pg_toast_2619: index scans: 1
Oct 24 03:49:57 db-app02 postgres[23554]: [4-2] #011pages: 0 removed, 206
remain
Oct 24 03:49:57 db-app02 postgres[23554]: [4-3] #011tuples: 220 removed, 608
remain
Oct 24 03:49:57 db-app02 postgres[23554]: [4-4] #011system usage: CPU
0.00s/0.00u sec elapsed 0.10 sec

Oct 24 03:53:57 db-app02 postgres[23800]: [5-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_toast.pg_toast_2619: index scans: 1
Oct 24 03:53:57 db-app02 postgres[23800]: [5-2] #011pages: 0 removed, 206
remain
Oct 24 03:53:57 db-app02 postgres[23800]: [5-3] #011tuples: 220 removed, 608
remain
Oct 24 03:53:57 db-app02 postgres[23800]: [5-4] #011system usage: CPU
0.01s/0.00u sec elapsed 0.12 sec

Oct 24 03:57:57 db-app02 postgres[24059]: [4-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_toast.pg_toast_2619: index scans: 1
Oct 24 03:57:57 db-app02 postgres[24059]: [4-2] #011pages: 0 removed, 206
remain
Oct 24 03:57:57 db-app02 postgres[24059]: [4-3] #011tuples: 220 removed, 608
remain
Oct 24 03:57:57 db-app02 postgres[24059]: [4-4] #011system usage: CPU
0.00s/0.00u sec elapsed 0.04 sec

Oct 24 04:01:57 db-app02 postgres[24394]: [4-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_toast.pg_toast_2619: index scans: 1
Oct 24 04:01:57 db-app02 postgres[24394]: [4-2] #011pages: 0 removed, 206
remain
Oct 24 04:01:57 db-app02 postgres[24394]: [4-3] #011tuples: 220 removed, 608
remain
Oct 24 04:01:57 db-app02 postgres[24394]: [4-4] #011system usage: CPU
0.00s/0.01u sec elapsed 0.04 sec

Oct 24 04:05:57 db-app02 postgres[24640]: [2-1]
user=nutricate,db=admin_production ERROR:  missing chunk number 0 for toast
value 2411466 in pg_toast_2619

Oct 24 04:05:57 db-app02 postgres[24639]: [6-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_toast.pg_toast_2619: index scans: 1
Oct 24 04:05:57 db-app02 postgres[24639]: [6-2] #011pages: 0 removed, 206
remain
Oct 24 04:05:57 db-app02 postgres[24639]: [6-3] #011tuples: 216 removed, 608
remain
Oct 24 04:05:57 db-app02 postgres[24639]: [6-4] #011system usage: CPU
0.00s/0.00u sec elapsed 0.08 sec

Oct 24 04:09:57 db-app02 postgres[24877]: [5-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_toast.pg_toast_2619: index scans: 1
Oct 24 04:09:57 db-app02 postgres[24877]: [5-2] #011pages: 0 removed, 206
remain
Oct 24 04:09:57 db-app02 postgres[24877]: [5-3] #011tuples: 220 removed, 608
remain
Oct 24 04:09:57 db-app02 postgres[24877]: [5-4] #011system usage: CPU
0.00s/0.00u sec elapsed 0.10 sec

Oct 24 04:13:57 db-app02 postgres[25116]: [6-1] user=,db= LOG:  automatic
vacuum of table admin_production.pg_toast.pg_toast_2619: index scans: 1
Oct 24 04:13:57 db-app02 postgres[25116]: [6-2] #011pages: 0 removed, 206
remain
Oct 24 04:13:57 db-app02 postgres[25116]: [6-3] #011tuples: 220 removed, 608
remain
Oct 24 04:13:57 db-app02 postgres[25116]: [6-4] #011system usage: CPU
0.00s/0.00u sec elapsed 0.12 sec


Re: [GENERAL] missing chunk 0 for toast value ...

2011-10-25 Thread Andrew Hammond
On Tue, Oct 25, 2011 at 4:53 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Andrew Hammond andrew.george.hamm...@gmail.com writes:
  On Tue, Oct 25, 2011 at 4:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  Hmm, that's pretty interesting ... are there any nearby autovacuums of
  pg_toast_2619?

  Several, both before and after the error message:

 Well, it seems clear that somehow the vacuum deleted a toast tuple that
 the other statement was about to fetch, but it's not clear how this
 could be.  The pg_statistic fetch must have come from the planner, which
 should always be called with a transaction snapshot established, and
 that ought to protect it against vacuum deleting anything that could be
 visible to SnapshotNow.  Weird.

 [ pokes around for a bit ... ]  Hmm, can you say how the failing query
 was submitted, exactly?  I'm wondering if it came in via simple Query
 (PQexec) or extended-query protocol (anything with parameters).


The command was sent from a Python application via Django 1.2.1, using psycopg2 2.2.1.

Andrew


Re: [HACKERS] [GENERAL] AutoVacuum Behaviour Question

2007-09-13 Thread Andrew Hammond
On 9/13/07, Bruce Momjian [EMAIL PROTECTED] wrote:

 Alvaro Herrera wrote:
  Bruce Momjian wrote:
  
   Is this item closed?
 
  No, it isn't.  Please add a TODO item about it:
   * Prevent long-lived temp tables from causing frozen-Xid advancement
 starvation

 Sorry, I don't understand this.  Can you give me more text?  Thanks.


s/long-lived/orphaned/ ? And possibly this means better orphan detection and
removal.

Andrew


[GENERAL] importing pgsql schema into visio (for diagramming)

2007-09-12 Thread Andrew Hammond
Does anyone know where I could find a tool which allows importing schema
information from a postgres database into visio? The boss guys want some
pretty pictures...

Andrew


[GENERAL] programmatic way to fetch latest release for a given major.minor version

2007-04-09 Thread Andrew Hammond
I'm writing a script that wants to know the latest release for a given
major.minor version. Is there some better way than parsing
http://www.postgresql.org/ftp/source/ or connecting to the ftp site
(which is invariably timing out on me today; is that box getting
hammered or something?) and parsing that? Both approaches
feel quite awkward to me.

Andrew


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] programmatic way to fetch latest release for a given major.minor version

2007-04-09 Thread Andrew Hammond

On 4/9/07, CAJ CAJ [EMAIL PROTECTED] wrote:

On 9 Apr 2007 14:47:20 -0700, Andrew Hammond
[EMAIL PROTECTED] wrote:
 I'm writing a script that wants to know the latest release for a given
 major.minor version. Is there some better way than parsing
 http://www.postgresql.org/ftp/source/ or trying to
connect to ftp
 (which is invariably timing out on me today. Is that box getting
 hammered or something?) and parsing that? Both approaches
 feel quite awkward to me.

Use wget to download via HTTP (added recently). Probably wise to add a
couple of mirrors in your script.


I'm not asking how to download stuff. I'm asking how to figure out the
current release number for a given major.minor. I thought that was
clear in my original post, but I guess not. For example, how do I
determine (programmatically) the latest version of 8.1?

I'm also interested in a clever way to select one of the nearby
mirrors at random for downloading via http. However, I had planned to
hold that question until I'd solved the first issue.
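For the version-selection half, once you have a list of release tags from whatever source (the tag list below is made up for illustration), picking the latest release for a given major.minor is a short exercise in numeric, rather than lexical, comparison:

```python
def latest_release(versions, major_minor):
    """Return the newest full version matching a major.minor prefix.

    versions: iterable of "X.Y.Z" strings; major_minor: "X.Y" string.
    Returns None if no version matches.
    """
    prefix = tuple(int(p) for p in major_minor.split("."))
    candidates = [
        tuple(int(p) for p in v.split("."))
        for v in versions
        if tuple(int(p) for p in v.split(".")[:2]) == prefix
    ]
    if not candidates:
        return None
    return ".".join(str(p) for p in max(candidates))

# Hypothetical tag list; note 8.1.10 must beat 8.1.9, which a plain
# string comparison would get wrong.
tags = ["8.1.8", "8.1.9", "8.2.3", "8.0.13", "8.1.10"]
print(latest_release(tags, "8.1"))  # prints 8.1.10
```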

Andrew



Re: [GENERAL] RFC tool to support development / operations work with slony replicated databases

2007-03-06 Thread Andrew Hammond
On Mar 6, 5:44 am, [EMAIL PROTECTED] (Kenneth Downs) wrote:
 Andrew Hammond wrote:
  Each release will include a directory that has the same name as the
  full release tag. This directory must contain all the scripts to be
  applied.

 Have you considered using a data dictionary instead,

Data dictionary is a pretty generic term; could you please show me
what you mean by that?

 so that you can
 simply diff the structures and generate DDL to bring a database to its
 current state?

To fully express the semantics of a change, DDL is often interspersed
with DML. A trivial example is refactoring a table to reflect a 1 to n
relationship. In order to do this successfully, you must first execute
DDL to create a new table, then execute DML to pull data from the old
table into the new table, and finally execute DDL to remove the
columns in the old table as necessary.
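Concretely, such a refactor might look like this (the customer/phone names are invented for illustration):

```sql
-- Refactor: a phone column on customer becomes a 1-to-n child table.
-- Step 1: DDL -- create the new table.
CREATE TABLE customer_phone (
    id          serial PRIMARY KEY,
    customer_id integer NOT NULL REFERENCES customer(id),
    phone       text NOT NULL
);

-- Step 2: DML -- migrate the existing data into it.
INSERT INTO customer_phone (customer_id, phone)
SELECT id, phone FROM customer WHERE phone IS NOT NULL;

-- Step 3: DDL -- drop the now-redundant column.
ALTER TABLE customer DROP COLUMN phone;
```

A structural diff of before and after would see only steps 1 and 3; the DML in step 2 is exactly the part that cannot be generated from the schemas alone.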

 If your scripts contain data modification code, they can be preserved,
 but using a dictionary smooths over a lot of the quirkiness of the
 scripts-based approach.

Again, I'd be very interested in seeing what you mean by a dictionary.

Andrew




[GENERAL] RFC tool to support development / operations work with slony replicated databases

2007-03-05 Thread Andrew Hammond

Hello All,

I've been working on designing a tool to facilitate both developers
and operations staff working with slony replicated databases. I think
that the problem described below is a general problem for people
working with systems that are both in production and under on-going
development / maintenance. As a result I would like to both solicit
the input of the community and share the results. Documentation (which
is still somewhat drafty) follows.

Thank you for your time,
Andrew Hammond


Current Approach

A common problem in the database world is handling revisions to the
database that go with revisions in the software running against this
database. Currently our method is to include upgrade.sql and
downgrade.sql scripts with each software release.

Problem Statement

This will fail when we start using slony since we need to handle DML
differently from DDL and DCL. We also need a way to apply slonik
scripts. Ordering matters in the application of these scripts.

After talking about it for a while, we agreed that developers want a
way to apply their updates without stepping on each other's toes while
in the process of developing and testing their work.

Design

Interface

updatemydatabase -f target [-y] [--force-upgrade | --force-downgrade]
[-U pguser] [-h pghost] [-p pgport] [-d pgdatabase] [--cluster
clustername]

-f

   Optional. Defaults to the current working directory. Specifies the
target intended to be upgraded to. This may be either a full or
relative path, and either a directory or a file.
-y

   Optional. If set, assume yes to all questions. This is intended for
use when running the program in tool mode.
-U -h -p -d

   Optional. As for psql and other PostgreSQL command line utilities.
--cluster

   Optional. Defaults to the database name. Specifies the name of the
slony cluster to work with. This should have a one-letter short form
that conforms with other similar tools. Gotta figure out what those
are though...

   Since we will be using a Python connector which is based on libpq,
we will auto-magically respect the standard postgres environment
variables including the .pgpass file for handling passwords.

Limitations

   * We are not trying to deal with databases with more than one
slony replication cluster in them.
   * We are not going to deal with the case where various sets have
different origins.
   * We assume that this is run off the same machine that is
currently running the slons. We can connect to every database in the
cluster.
   * Aside from generating the slonik preamble, we are not going to
try and auto-generate slonik scripts that do anything more complicated
than EXECUTE SCRIPT. At least not initially. Maybe we can get more
clever later?
   * We will not try to be clever about detecting changes to files.
Alfred floated the idea of using the SVN id tag to detect if a file
had been changed since it was last applied and then forcing a
downgrade/upgrade cycle. That seems like a lot of code for a corner
case. Alfred and Long agreed that it's probably a good idea to create
a convention instead. Do not edit files after they're committed, unless
leaving them unedited would cause inefficiencies when the scripts are
applied to the production database. Instead, create a new file. If you
are forced to edit a committed file, then email the dev list.
   * Along the lines of not being clever, we assume there is only one
set, and that its number is 1.
   * We will not assume the existence of a node 1. The whole point of
increasing availability by replicating is that we don't have to rely
on the existence of a given database.

Data Structure

Each release will include a directory that has the same name as the
full release tag. This directory must contain all the scripts to be
applied. The release may include directories of scripts from prior
releases in the same parent directory. The scripts may have an
arbitrary name, but must end with a suffix of either dml.sql, ddl.sql,
dcl.sql or slonik. Script names should incorporate the bug number
they're addressing.

   * /my/base/directory
 o 3.10.0
   + create_foo_23451.ddl.sql
   + populate_foo_23451.dml.sql
   + alter_bar_add_column_reference_foo_23451.ddl.sql
   + update_bar_reference_foo_23451.dml.sql
   + alter_bar_column_not_null_23451.ddl.sql
   + subscribe_foo_23451.slonik
   + cleanup_some_data_migration_stuff_23451.ddl.sql
   + fix_bug_24341.ddl.sql -- these are poorly chosen
names, but hey, it's an example...
   + fix_bug_24341.dml.sql
   + fix_bug_24341.slonik
   + drop_broken_node_30031.slonik
 o 3.10.1
   + another_table_29341.ddl.sql

Inside the script, we add some semantics to what are usually comments.
An example is probably the best way to show this.

-- alter_bar_column_not_null_23451.ddl.sql
-- Witty comment about why this column needs to be not null.
--dep update_bar_reference_foo_23451
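A sketch (hypothetical code, not part of the tool) of how those --dep comment lines might be consumed: extract each script's declared dependencies, then order the scripts so that dependencies are applied first.

```python
import re

# Matches lines of the form "--dep script_name" inside a script.
DEP_RE = re.compile(r"^--dep\s+(\S+)", re.MULTILINE)

def parse_deps(script_text):
    """Extract the script names declared in '--dep name' comment lines."""
    return DEP_RE.findall(script_text)

def order_scripts(scripts):
    """Topologically sort {name: text} so each script's deps come first.

    Dependencies on scripts outside this release are ignored, on the
    assumption that they were applied in an earlier release.
    """
    deps = {name: set(parse_deps(text)) & set(scripts)
            for name, text in scripts.items()}
    ordered, done = [], set()
    while len(ordered) < len(scripts):
        ready = [n for n in scripts if n not in done and deps[n] <= done]
        if not ready:
            raise ValueError("dependency cycle among: %s"
                             % sorted(set(scripts) - done))
        for n in sorted(ready):  # sorted for deterministic output
            ordered.append(n)
            done.add(n)
    return ordered

# Example mirroring the snippet above:
scripts = {
    "update_bar_reference_foo_23451": "-- dml\n",
    "alter_bar_column_not_null_23451":
        "-- Witty comment.\n--dep update_bar_reference_foo_23451\n",
}
print(order_scripts(scripts))
```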

Re: [GENERAL] help required regarding queryin postgis database from google maps

2007-02-26 Thread Andrew Hammond
On Feb 25, 9:34 am, [EMAIL PROTECTED] (Andrew Dunstan) wrote:
 Phani Kishore wrote:

  hi !

  I think you people could probably help me with how to query
  pgsql/postgis from the google maps api to display the markers on the
  google maps which are stored in the postgis database.
  Phani Kishore
  Tata Consultancy Services
  Mailto: [EMAIL PROTECTED]
  Website:http://www.tcs.com

 This list is not about how to use postgres. Please ask in the correct forum.

pgsql-general would be a good place to start.

Andrew

