Re: [GENERAL] Schema/table replication

2017-09-06 Thread Nico Williams
On Wed, Sep 06, 2017 at 08:22:14AM -0700, Steve Atkins wrote:
> > On Sep 6, 2017, at 6:00 AM, Marcin Giedz  wrote:
> > 
> > Hi, is there any way (3rd party software) to replicate particular 
> > schema/table not the whole database with streaming replication built-in 
> > mechanism ?
> 
> I don't believe so. You can do that with logical replication in v10 -
> https://www.postgresql.org/docs/10/static/logical-replication.html.

Well, session_replication_role exists so that you can implement your own
logical replication, and with that you can have full control over what
to replicate and what not to replicate.  There is no built-in functionality
that does this.  Third-party software?  Try this search, and similar
searches in other engines:

https://github.com/search?utf8=%E2%9C%93&q=session_replication_role+extension%3Asql&type=Code&ref=advsearch&l=&l=
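For illustration, a minimal sketch of the capture side (the names t, t_changes and capture_t are made up for the example): an ordinary trigger queues changes, and because triggers with the default ORIGIN firing mode do not fire when a session sets session_replication_role = 'replica', the apply side can replay changes without re-capturing them.

```sql
-- Illustrative sketch: capture changes on one table into a queue table.
CREATE TABLE t_changes (
    id       bigserial PRIMARY KEY,
    op       text      NOT NULL,
    row_data jsonb     NOT NULL
);

CREATE FUNCTION capture_t() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO t_changes (op, row_data) VALUES (TG_OP, to_jsonb(OLD));
    ELSE
        INSERT INTO t_changes (op, row_data) VALUES (TG_OP, to_jsonb(NEW));
    END IF;
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

-- Fires for normal sessions only; a session that has run
--   SET session_replication_role = 'replica';
-- (i.e. the apply side) skips it, preventing replication loops.
CREATE TRIGGER t_capture
    AFTER INSERT OR UPDATE OR DELETE ON t
    FOR EACH ROW EXECUTE PROCEDURE capture_t();
```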

> pglogical will give you much the same functionality on current
> releases. https://www.2ndquadrant.com/en/resources/pglogical/ -
> installation isn't too painful (though the docs are a little sparse
> when it comes to which node you should run which command on. Make the
> postgres.conf changes on master and slave nodes, as slave nodes need
> replication slots too(?)).
> 
> There are a bunch of trigger-based replication frameworks that'll work
> too, though less efficiently - Slony is widely used, and I used
> Bucardo successfully for years before moving to pglogical.

Yeah.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schema/table replication

2017-09-06 Thread Steve Atkins

> On Sep 6, 2017, at 8:48 AM, Marcin Giedz  wrote:
> 
> Does pglogical support view replication? I can't find it mentioned in any 
> restrictions.

There's no need to replicate the contents of a view, as it doesn't contain any 
data.

pglogical can replicate the initial schema, including any views, but won't 
replicate DDL changes automatically after that. It does provide a clean way to 
replicate DDL from the master to slaves with pglogical.replicate_ddl_command().
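For example, assuming a hypothetical table public.my_table:

```sql
-- Run on the provider; pglogical replays the DDL on subscribers
-- before replicating subsequent row changes.
SELECT pglogical.replicate_ddl_command(
    'ALTER TABLE public.my_table ADD COLUMN note text'
);
```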

Cheers,
  Steve

> > On Sep 6, 2017, at 6:00 AM, Marcin Giedz  wrote:
> > 
> > Hi, is there any way (3rd party software) to replicate particular 
> > schema/table not the whole database with streaming replication built-in 
> > mechanism ?
> 
> I don't believe so. You can do that with logical replication in v10 - 
> https://www.postgresql.org/docs/10/static/logical-replication.html.
> 
> pglogical will give you much the same functionality on current releases. 
> https://www.2ndquadrant.com/en/resources/pglogical/ - installation isn't too 
> painful (though the docs are a little sparse when it comes to which node you 
> should run which command on. Make the postgres.conf changes on master and 
> slave nodes, as slave nodes need replication slots too(?)).
> 
> There are a bunch of trigger-based replication frameworks that'll work too, 
> though less efficiently - Slony is widely used, and I used Bucardo 
> successfully for years before moving to pglogical.
> 
> Cheers,
>   Steve
> 
> 





Re: [GENERAL] Schema/table replication

2017-09-06 Thread Marcin Giedz
Does pglogical support view replication? I can't find it mentioned in any 
restrictions. 

Thx 
Marcin 


From: "Steve Atkins"  
To: pgsql-general@postgresql.org 
Sent: Wednesday, September 6, 2017 17:22:14 
Subject: Re: [GENERAL] Schema/table replication 

> On Sep 6, 2017, at 6:00 AM, Marcin Giedz  wrote: 
> 
> Hi, is there any way (3rd party software) to replicate particular 
> schema/table not the whole database with streaming replication built-in 
> mechanism ? 

I don't believe so. You can do that with logical replication in v10 - 
https://www.postgresql.org/docs/10/static/logical-replication.html. 

pglogical will give you much the same functionality on current releases. 
https://www.2ndquadrant.com/en/resources/pglogical/ - installation isn't too 
painful (though the docs are a little sparse when it comes to which node you 
should run which command on. Make the postgres.conf changes on master and slave 
nodes, as slave nodes need replication slots too(?)). 

There are a bunch of trigger-based replication frameworks that'll work too, 
though less efficiently - Slony is widely used, and I used Bucardo successfully 
for years before moving to pglogical. 

Cheers, 
Steve 




Re: [GENERAL] Schema/table replication

2017-09-06 Thread Steve Atkins

> On Sep 6, 2017, at 6:00 AM, Marcin Giedz  wrote:
> 
> Hi, is there any way (3rd party software) to replicate particular 
> schema/table not the whole database with streaming replication built-in 
> mechanism ?

I don't believe so. You can do that with logical replication in v10 - 
https://www.postgresql.org/docs/10/static/logical-replication.html.
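For reference, a rough sketch of what that looks like in v10 (host, database, and table names are placeholders):

```sql
-- On the publisher (v10+): publish just the tables you want.
CREATE PUBLICATION my_pub FOR TABLE public.orders, public.customers;

-- On the subscriber: create matching empty tables first, then:
CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=publisher.example dbname=mydb user=repuser'
    PUBLICATION my_pub;
```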

pglogical will give you much the same functionality on current releases. 
https://www.2ndquadrant.com/en/resources/pglogical/ - installation isn't too 
painful (though the docs are a little sparse when it comes to which node you 
should run which command on. Make the postgres.conf changes on master and slave 
nodes, as slave nodes need replication slots too(?)).

There are a bunch of trigger-based replication frameworks that'll work too, 
though less efficiently - Slony is widely used, and I used Bucardo successfully 
for years before moving to pglogical.

Cheers,
  Steve





Re: [GENERAL] schema advice for event stream with tagging and filtering

2016-12-12 Thread Chris Withers

On 16/08/2016 15:10, Ilya Kazakevich wrote:

An event is a row with a primary key along the lines of (colo, host,
category) and an associated set of tags, where each tag has a type
and a value
(eg: {"color": "red", "owner": "fred", "status": "open"...}).


What about a simple table with several columns and an hstore field for the tags?


BTW, the "pure SQL" approach here is to use separate tables: Tags(TagId, TagName) 
and TagValues(EventId, TagId, Value).


Well, maybe, but none of us wants to do that ;-)


But in this case it will be painful to filter events by tag values directly, so 
in "pure SQL" you would have to maintain a separate denormalized OLAP table :)


I don't understand the second half of this I'm afraid...


PostgreSQL, however, supports key-value based hstore.


Right, but hstore only allows single values for each key, if I 
understand correctly?


Okay, so that leaves me with a jsonb "tags" column with a gin index, but 
I still have a couple of choices..


So, in order to best answer these types of queries:


- show me a list of tag types and the count of the number of events of that
type

- show me all events that have tag1=x, tag2=y and does not have tag3


...which of the following is going to be most performant:

# SELECT '{"tag1":["v1", "v2", "v3"]}'::jsonb @> '{"tag1": ["v1"]}'::jsonb;
 ?column?
----------
 t
(1 row)

# SELECT '[{"tag1":"v1"}, {"tag1": "v2"}, {"tag1": "v3"}]'::jsonb @>
'[{"tag1": "v1"}]'::jsonb;
 ?column?
----------
 t
(1 row)

So, should I go for a tag name that maps to a list of values for that 
tag, or should I go for a sequence of one-entry mappings of tag name to 
tag value?
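For what it's worth, with the first layout (tag name mapping to a list of values), the second query pattern can be combined from @> containment plus a top-level key-existence test (table and column names events/tags are assumed here):

```sql
-- Events having tag1=x and tag2=y but no tag3, with the
-- {"tag": [values...]} layout; a GIN index on tags supports @> and ?.
SELECT *
  FROM events
 WHERE tags @> '{"tag1": ["x"], "tag2": ["y"]}'::jsonb
   AND NOT tags ? 'tag3';
```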


cheers,

Chris




Re: [GENERAL] schema advice for event stream with tagging and filtering

2016-08-16 Thread Venkata B Nagothi
On Tue, Aug 16, 2016 at 6:38 PM, Chris Withers 
wrote:

> Hi All,
>
> What would be the best schema to use when looking to implement an event
> stream with tagging and filtering?
>
> An event is a row with a primary key along the lines of (colo, host,
> category) and an associated set of tags, where each tag has a type and a
> value (eg: {"color": "red", "owner": "fred", "status": "open"...}).
>
> Events come in as a streams of creates/updates as a cluster of http posts
> to a web app.
>

Not sure which version of PostgreSQL you are using. Did you consider the
JSON/JSONB data types, which are strongly supported by PostgreSQL? You can
consider having a table with a JSON or JSONB column; if you can use JSONB,
it supports indexing as well.

That should make it easy for the web app to push JSON-format data to
PostgreSQL.
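A sketch of such a table, with illustrative names, might be:

```sql
CREATE TABLE events (
    colo     text NOT NULL,
    host     text NOT NULL,
    category text NOT NULL,
    tags     jsonb,
    PRIMARY KEY (colo, host, category)
);

-- GIN index to speed up @> containment searches on the tags.
CREATE INDEX events_tags_gin ON events USING gin (tags);
```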

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] schema advice for event stream with tagging and filtering

2016-08-16 Thread Ilya Kazakevich
>>> An event is a row with a primary key along the lines of (colo, host,
>>> category) and an associated set of tags, where each tag has a type
>>> and a value
>>> (eg: {"color": "red", "owner": "fred", "status": "open"...}).
>>
>> What about a simple table with several columns and an hstore field for the tags?

BTW, the "pure SQL" approach here is to use separate tables: Tags(TagId, TagName) 
and TagValues(EventId, TagId, Value). 
But in this case it will be painful to filter events by tag values directly, so 
in "pure SQL" you would have to maintain a separate denormalized OLAP table :)
PostgreSQL, however, supports the key-value based hstore type.
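Spelled out (following the names above), the normalized variant and the kind of per-tag self-join it forces:

```sql
CREATE TABLE Tags (
    TagId   serial PRIMARY KEY,
    TagName text   NOT NULL UNIQUE
);

CREATE TABLE TagValues (
    EventId bigint NOT NULL,
    TagId   int    NOT NULL REFERENCES Tags,
    Value   text   NOT NULL,
    PRIMARY KEY (EventId, TagId)
);

-- "tag1 = 'x' AND tag2 = 'y'" needs one join pair per tag tested:
SELECT tv1.EventId
  FROM TagValues tv1
  JOIN Tags t1 ON t1.TagId = tv1.TagId AND t1.TagName = 'tag1'
  JOIN TagValues tv2 ON tv2.EventId = tv1.EventId
  JOIN Tags t2 ON t2.TagId = tv2.TagId AND t2.TagName = 'tag2'
 WHERE tv1.Value = 'x' AND tv2.Value = 'y';
```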

>> You may also normalize it (move hosts and categories to separate table).
>
>Why? These form part of the primary key for the event...

Host and category could be part of the PK, but it may be better to have a "HostId" 
field and a "Hosts(HostId, Host)" table than a text field repeating many 
"www.foo.bar" values.

>What kind of index is recommended here? The kind of queries would be:
>
>- show me a list of tag types and the count of the number of events of that
>type
>
>- show me all events that have tag1=x, tag2=y and does not have tag3

Hstore supports GIN and GIST 
(https://www.postgresql.org/docs/9.1/static/textsearch-indexes.html). I'd start 
with  GIN.
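For example, assuming an events table with an hstore column named tags:

```sql
CREATE EXTENSION IF NOT EXISTS hstore;

-- GIN index on the hstore column; @> containment queries can use it.
CREATE INDEX events_tags_idx ON events USING gin (tags);

SELECT * FROM events
 WHERE tags @> 'color => red, owner => fred'::hstore;
```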

Do you need real-time data, or is, say, day-old data OK? In the latter case it 
is better to load the data into a denormalized table to speed up queries, and 
leave the normalized (OLTP) table unindexed, because index updates are not free.


Ilya Kazakevich

JetBrains
http://www.jetbrains.com
The Drive to Develop





Re: [GENERAL] schema advice for event stream with tagging and filtering

2016-08-16 Thread Chris Withers

On 16/08/2016 14:29, Ilya Kazakevich wrote:

Hi,


An event is a row with a primary key along the lines of (colo, host,
category) and an associated set of tags, where each tag has a type and a value
(eg: {"color": "red", "owner": "fred", "status": "open"...}).


What about a simple table with several columns and an hstore field for the tags?


Interesting!


You may also normalize it (move hosts and categories to separate table).


Why? These form part of the primary key for the event...


Indexes should help you with fast filtering, or you may load data from this 
table into a denormalized OLAP table once a day and build the index there to 
speed up queries.


What kind of index is recommended here? The kind of queries would be:

- show me a list of tag types and the count of the number of events of 
that type


- show me all events that have tag1=x, tag2=y and does not have tag3

cheers,

Chris




Re: [GENERAL] schema advice for event stream with tagging and filtering

2016-08-16 Thread Ilya Kazakevich
Hi,

>An event is a row with a primary key along the lines of (colo, host,
>category) and an associated set of tags, where each tag has a type and a value
>(eg: {"color": "red", "owner": "fred", "status": "open"...}).

What about a simple table with several columns and an hstore field for the tags?
You may also normalize it (move hosts and categories to a separate table).

Indexes should help you with fast filtering, or you may load data from this 
table into a denormalized OLAP table once a day and build the index there to 
speed up queries.

Ilya Kazakevich

JetBrains
http://www.jetbrains.com
The Drive to Develop






Re: [GENERAL] Schema Size - PostgreSQL 9.2

2016-03-19 Thread Melvin Davidson
On Thu, Mar 17, 2016 at 4:45 PM, Karsten Hilbert 
wrote:

> On Fri, Mar 18, 2016 at 09:38:30AM +1300, drum.lu...@gmail.com wrote:
>
> > Can you please provide me a Query that tells me how much space is a
> Schema
> > in my DB?
>
> There's been a discussion on that recently (like last month)
> which can be found in the archive.
>
> Karsten
> --
> GPG key ID E4071346 @ eu.pool.sks-keyservers.net
> E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
>
>
>


This should give you what you need:

SELECT n.nspname AS schema,
   pg_size_pretty(sum(pg_total_relation_size(quote_ident(n.nspname) ||
'.' || quote_ident(c.relname)))) AS size,
   sum(pg_total_relation_size(quote_ident(n.nspname) || '.' ||
quote_ident(c.relname))) AS size_bytes
  FROM pg_class c
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
 WHERE n.nspname NOT LIKE 'pg_%'
   AND n.nspname <> 'information_schema'
   AND c.relkind = 'r'
 GROUP BY 1
 ORDER BY 3 DESC, 1;


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Schema Size - PostgreSQL 9.2

2016-03-19 Thread Karsten Hilbert
On Fri, Mar 18, 2016 at 09:38:30AM +1300, drum.lu...@gmail.com wrote:

> Can you please provide me a Query that tells me how much space is a Schema
> in my DB?

There's been a discussion on that recently (like last month)
which can be found in the archive.

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] Schema Size

2016-03-01 Thread drum.lu...@gmail.com
On 2 March 2016 at 12:23, Scott Mead  wrote:

>
> On Tue, Mar 1, 2016 at 6:07 PM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> You should read the definitions for the functions you are using to
>> retrieve the sizes.
>>
>> ​http://www.postgresql.org/docs/current/static/functions-admin.html​
>>
>> +1, you've gotta be careful with each of these, they all tend to hide
> different, yet critical components of size that you may be having trouble
> resolving.
>
>  The other thing to consider is that this isn't including any on-disk
> space required for your change traffic in the WAL.  Your $PGDATA will
> always be larger than the sum of all your databases sizes...
>
>
>> On Tue, Mar 1, 2016 at 3:48 PM, drum.lu...@gmail.com <
>> drum.lu...@gmail.com> wrote:
>>
>>> Hi there
>>>
>>> Wanna see how big a schema is in my PostgreSQL 9.2
>>>
>>> Got two queries - they return different values... can u please check?
>>>
>>> cheers;
>>>
>>> Query 1:
>>> SELECT schema_name,
>>> pg_size_pretty(sum(table_size)::bigint) as "disk space",
>>> (sum(table_size) / pg_database_size(current_database())) * 100
>>> as "percent"
>>> FROM (
>>>  SELECT pg_catalog.pg_namespace.nspname as schema_name,
>>>  pg_relation_size(pg_catalog.pg_class.oid) as table_size
>>>  FROM   pg_catalog.pg_class
>>>  JOIN pg_catalog.pg_namespace
>>>  ON relnamespace = pg_catalog.pg_namespace.oid
>>> ) t
>>> GROUP BY schema_name
>>> ORDER BY schema_name
>>>
>>>
>> ​​pg_relation_size: "Disk space used by the specified fork ('main',
>> 'fsm', 'vm', or 'init') of the specified table or index"
>>
>> The 'init' fork is (I think) non-zero but extremely small.
>> TOAST for a given relation is considered its own table
>>
>>
>>> Query 2:
>>> select schemaname,
>>> pg_size_pretty(sum(pg_table_size(schemaname||'.'||relname))::bigint) as s
>>> from pg_stat_user_tables
>>> group by schemaname
>>>
>>
>> pg_table_size: "Disk space used by the specified table, excluding indexes
>> (but including TOAST, free space map, and visibility map)"
>>
>
> Personally, I'm a huge fan of 'pg_total_relation_size' which is all of
> pg_table_size + indexes.  It really depends on specifically what you're
> trying to count.  If you're looking for the total disk space required by
> your tables in a schema, I always [personally] want to include indexes in
> this count to make sure I understand the total impact on disk of accessing
> my relations.
>
>
>>
>> David J.​
>>
>>
>

So.. I'm doing this way:

CREATE OR REPLACE FUNCTION pg_schema_size(text) RETURNS bigint AS $$
SELECT sum(pg_total_relation_size(quote_ident(schemaname) || '.' ||
quote_ident(tablename)))::bigint FROM pg_tables WHERE schemaname = $1
$$ LANGUAGE sql;

Link: https://wiki.postgresql.org/wiki/Schema_Size


That's working - but I'd like to test it and compare the results against
another trustworthy query. Do you have one?
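One independent cross-check is to sum pg_total_relation_size() over pg_class directly for the same schema; for ordinary tables the result should match the function above:

```sql
-- Should match pg_schema_size('public') for ordinary tables.
SELECT sum(pg_total_relation_size(c.oid))
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE n.nspname = 'public'
   AND c.relkind = 'r';
```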


Re: [GENERAL] Schema Size

2016-03-01 Thread Scott Mead
On Tue, Mar 1, 2016 at 6:07 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> You should read the definitions for the functions you are using to
> retrieve the sizes.
>
> ​http://www.postgresql.org/docs/current/static/functions-admin.html​
>
> +1, you've gotta be careful with each of these, they all tend to hide
different, yet critical components of size that you may be having trouble
resolving.

 The other thing to consider is that this isn't including any on-disk space
required for your change traffic in the WAL.  Your $PGDATA will always be
larger than the sum of all your databases sizes...


> On Tue, Mar 1, 2016 at 3:48 PM, drum.lu...@gmail.com  > wrote:
>
>> Hi there
>>
>> Wanna see how big a schema is in my PostgreSQL 9.2
>>
>> Got two queries - they return different values... can u please check?
>>
>> cheers;
>>
>> Query 1:
>> SELECT schema_name,
>> pg_size_pretty(sum(table_size)::bigint) as "disk space",
>> (sum(table_size) / pg_database_size(current_database())) * 100
>> as "percent"
>> FROM (
>>  SELECT pg_catalog.pg_namespace.nspname as schema_name,
>>  pg_relation_size(pg_catalog.pg_class.oid) as table_size
>>  FROM   pg_catalog.pg_class
>>  JOIN pg_catalog.pg_namespace
>>  ON relnamespace = pg_catalog.pg_namespace.oid
>> ) t
>> GROUP BY schema_name
>> ORDER BY schema_name
>>
>>
> ​​pg_relation_size: "Disk space used by the specified fork ('main', 'fsm',
> 'vm', or 'init') of the specified table or index"
>
> The 'init' fork is (I think) non-zero but extremely small.
> TOAST for a given relation is considered its own table
>
>
>> Query 2:
>> select schemaname,
>> pg_size_pretty(sum(pg_table_size(schemaname||'.'||relname))::bigint) as s
>> from pg_stat_user_tables
>> group by schemaname
>>
>
> pg_table_size: "Disk space used by the specified table, excluding indexes
> (but including TOAST, free space map, and visibility map)"
>

Personally, I'm a huge fan of 'pg_total_relation_size' which is all of
pg_table_size + indexes.  It really depends on specifically what you're
trying to count.  If you're looking for the total disk space required by
your tables in a schema, I always [personally] want to include indexes in
this count to make sure I understand the total impact on disk of accessing
my relations.


>
> David J.​
>
>


Re: [GENERAL] Schema Size

2016-03-01 Thread David G. Johnston
You should read the definitions for the functions you are using to retrieve
the sizes.

​http://www.postgresql.org/docs/current/static/functions-admin.html​

On Tue, Mar 1, 2016 at 3:48 PM, drum.lu...@gmail.com 
wrote:

> Hi there
>
> Wanna see how big a schema is in my PostgreSQL 9.2
>
> Got two queries - they return different values... can u please check?
>
> cheers;
>
> Query 1:
> SELECT schema_name,
> pg_size_pretty(sum(table_size)::bigint) as "disk space",
> (sum(table_size) / pg_database_size(current_database())) * 100
> as "percent"
> FROM (
>  SELECT pg_catalog.pg_namespace.nspname as schema_name,
>  pg_relation_size(pg_catalog.pg_class.oid) as table_size
>  FROM   pg_catalog.pg_class
>  JOIN pg_catalog.pg_namespace
>  ON relnamespace = pg_catalog.pg_namespace.oid
> ) t
> GROUP BY schema_name
> ORDER BY schema_name
>
>
​​pg_relation_size: "Disk space used by the specified fork ('main', 'fsm',
'vm', or 'init') of the specified table or index"

The 'init' fork is (I think) non-zero but extremely small.
TOAST for a given relation is considered its own table


> Query 2:
> select schemaname,
> pg_size_pretty(sum(pg_table_size(schemaname||'.'||relname))::bigint) as s
> from pg_stat_user_tables
> group by schemaname
>

pg_table_size: "Disk space used by the specified table, excluding indexes
(but including TOAST, free space map, and visibility map)"
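Side by side, for a single (placeholder) table name, the three functions nest like this:

```sql
-- main fork < main + TOAST/FSM/VM < everything including indexes
SELECT pg_relation_size('some_table')       AS main_fork_only,
       pg_table_size('some_table')          AS with_toast_fsm_vm,
       pg_total_relation_size('some_table') AS with_indexes_too;
```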

David J.​


Re: [GENERAL] Schema Help Needed To Get Unstuck

2015-07-23 Thread Rich Shepard

On Thu, 23 Jul 2015, Melvin Davidson wrote:


Does this help?


Melvin,

  Yep.

  After pondering David's response I recognized my error: I had the permit
table as the center of the relationships when it should be the conditions
table. As you wrote, permits, parameters, and locations all feed the
conditions table (where additional attributes are added), then a row from
that table is added to the monitoring table where quantities and other
attributes are entered.

  Sometimes we get so deep in a rut we can't see over the edge and find the
correct path.

Thanks very much,

Rich




Re: [GENERAL] Schema Help Needed To Get Unstuck

2015-07-23 Thread Melvin Davidson
Does this help?

CREATE TABLE permits
(
permit_number bigint,
...,
other_columns ,
...,
CONSTRAINT permits_pk PRIMARY KEY (permit_number)
);

CREATE TABLE parameters
(
permit_number bigint,
parameter varchar(50),
...,
other_columns ,
...,
CONSTRAINT parameters_pk PRIMARY KEY (permit_number, parameter),
CONSTRAINT parameters_permit_fk FOREIGN KEY (permit_number)
  REFERENCES permits (permit_number)
);

CREATE TABLE locations
(
permit_number bigint,
location varchar(50),
...,
other_columns ,
...,
CONSTRAINT locations_pk PRIMARY KEY (permit_number, location),
CONSTRAINT locations_permit_fk FOREIGN KEY (permit_number)
  REFERENCES permits (permit_number)
);

CREATE TABLE conditions
(
permit_number bigint,
condition varchar(50),
location varchar(50),
frequency varchar(10),
start_time timestamp,
end_time timestamp,
...,
other_columns ,
...,
CONSTRAINT conditions_pk PRIMARY KEY (permit_number, condition, location),
CONSTRAINT conditions_permit_fk FOREIGN KEY (permit_number)
  REFERENCES permits (permit_number),
CONSTRAINT conditions_location_fk FOREIGN KEY (permit_number, location)
  REFERENCES locations (permit_number, location)
);

CREATE TABLE monitoring
(
permit_number bigint,
monitor_date timestamp,
location varchar(50),
parameter varchar(50),
frequency varchar(10),
start_time timestamp,
end_time timestamp,
...,
other_columns ,
...,
CONSTRAINT monitoring_pk PRIMARY KEY (permit_number, monitor_date, location, parameter),
CONSTRAINT monitoring_permit_fk FOREIGN KEY (permit_number)
  REFERENCES permits (permit_number),
CONSTRAINT monitoring_location_fk FOREIGN KEY (permit_number, location)
  REFERENCES locations (permit_number, location),
CONSTRAINT monitoring_parameter_fk FOREIGN KEY (permit_number, parameter)
  REFERENCES parameters (permit_number, parameter)
);


Re: [GENERAL] Schema Help Needed To Get Unstuck

2015-07-23 Thread Rich Shepard

On Thu, 23 Jul 2015, David G. Johnston wrote:


Conditions <- (Permit + Location + Parameter + Frequency + Temporal (from,
until))
Monitoring <- (Condition + Event)

While you may require additional modelling tables to support your user
interface (i.e., picking valid combinations of PeLoPa when creating a new
condition) the fundamental concept is that each permit+location+parameter
combination needs to be monitored during a given period (temporal) at a
given frequency in that period.  Every time that combination generates a
number you combine the "Condition" with the event (date + time) and add the
result to the monitoring table.

Active combinations of permits, locations, and parameters can be directly
derived from the conditions table.  The temporal fields facilitate history
generation.


David,

  This is what I was working to achieve, but did not have the details
straight. I'll ponder this and make sure I set the proper FKs on the
appropriate tables.

Thanks,

Rich




Re: [GENERAL] Schema Help Needed To Get Unstuck

2015-07-23 Thread David G. Johnston
On Thu, Jul 23, 2015 at 3:06 PM, Rich Shepard 
wrote:

>   While designing the schema for a new application have become
> high-centered
> and stuck relating some many-to-many tables. Fresh eyes and suggestions are
> needed on how to create many-to-many association tables among these five.
>
>   Table 'Permits': Contains information about each permit, PK is permit
> number.
>
>   Table 'Parameters': Contains all parameters (physical, chemical,
> biological, operational) by name and other attributes. This is a record of
> all parameters over the life of the permit. (Many parameters, many
> permits.)
>
>   Table 'Locations': Contains details about each monitoring location. This
> is a record of all locations over the life of the permit. (Many locations,
> many permits.)
>
>   Table 'Conditions': Contains permit numbers and which paramenters are to
> be monitored at which locations and at what frequencies and over what time
> period. I'm thinking one row per permit that reference the permit number,
> parameters, locations, frequencies, and active dates. Each location has
> multiple parameters, and each parameter is monitored at multiple locations.
> FK references permit number. (Many parameters, many locations, many
> frequencies for each of many permits.)
>
>   Table 'Monitoring': Contains columns for date, location, parameter,
> quantity, and other attributes. FK references permit number. (Many rows for
> each permit.)
>
>
​Conditions <- (Permit + Location + Parameter + Frequency + Temporal (from,
until))
Monitoring <- (Condition + Event)

While you may require additional modelling tables to support your user
interface (i.e., picking valid combinations of PeLoPa when creating a new
condition) the fundamental concept is that each permit+location+parameter
combination needs to be monitored during a given period (temporal) at a
given frequency in that period.  Every time that combination generates a
number you combine the "Condition" with the event (date + time) and add the
result to the monitoring table.

Active combinations of permits, locations, and parameters can be directly
derived from the conditions table.  The temporal fields facilitate history
generation.

David J.


Re: [GENERAL] schema or database

2015-04-13 Thread Jim Nasby

On 4/13/15 6:21 AM, Anil Menon wrote:

In addition to all these comments

- If you use multiple databases, if you want to keep some "common"
tables (example counties_Table, My_company_details), its going to be a pain
- if you want to access tables across databases - you might need to
start using FDWs (which is going to be a administrative pain - syncing
passwords and stuff)
- you could set up security easier with multiple schemas - example userA
can only use schema A and no access to other schemas


Please don't top-post.


On Mon, Apr 13, 2015 at 4:48 PM, Pavel Stehule wrote:



2015-04-13 10:43 GMT+02:00 Albe Laurenz :

Michael Cheung wrote:
> I have many similar database to store data for every customer.
> Structure of database is almost the same.
> As I use same application to control all these data, so I can only use
> one database user to connect to these database.
> And I have no needs to query table for different customer together.
>
> I wonder which I should use, different schema or different database to 
store data?
>
> I 'd like to know the advantage and disadvantage for using schema or 
database.

In addition to what others have said:

If you use multiple schemas within one database, the danger is
greater that
data are written to or read from the wrong schema if your
application has a bug
and does not make sure to always set search_path or qualify
every access with a
schema name.

With multiple databases you are guaranteed not to access data
from a different
database.

The main downside that I see to multiple databases is the
overhead: each of
the databases will have its own pg_catalog tables.


It can be advantage - if your schema is pretty complex - thousands
procedures, tables, then separate pg_catalog can be better - there
are issues with pg_dump, pg_restore.

So it depends on catalog size and complexity.


Two things no one has mentioned. First, you could also use row-level 
security. If you plan on each customer having a fairly small amount of 
data, this is by far your most efficient option. Anything else will 
result in either huge catalogs or a lot of wasted catalog space.
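A minimal sketch of that option (PostgreSQL 9.5+; table, column, and policy names are illustrative):

```sql
-- One shared table; each customer connects as its own role.
CREATE TABLE customer_data (
    customer text NOT NULL,
    payload  jsonb
);

ALTER TABLE customer_data ENABLE ROW LEVEL SECURITY;

-- Each customer role sees and modifies only its own rows.
CREATE POLICY per_customer ON customer_data
    USING (customer = current_user);
```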


Second, if you do per-database, that makes it trivial to scale across 
multiple servers.


Regarding backups: you can easily do partial backups either way with pg_dump; 
there's really no difference. You can't do partial restores with PITR, but 
that's true for both schemas and databases.
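For instance (schema and database names are placeholders):

```shell
# One schema out of a database:
pg_dump -n customer_a mydb > customer_a.sql

# One database out of a cluster:
pg_dump customer_a_db > customer_a_db.sql
```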

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] schema or database

2015-04-13 Thread Anil Menon
In addition to all these comments

- If you use multiple databases, if you want to keep some "common" tables
(example counties_Table, My_company_details), its going to be a pain
- if you want to access tables across databases - you might need to start
using FDWs (which is going to be a administrative pain - syncing passwords
and stuff)
- you could set up security easier with multiple schemas - example userA
can only use schema A and no access to other schemas

Regards
AK



On Mon, Apr 13, 2015 at 4:48 PM, Pavel Stehule 
wrote:

>
>
> 2015-04-13 10:43 GMT+02:00 Albe Laurenz :
>
>> Michael Cheung wrote:
>> > I have many similar database to store data for every customer.
>> > Structure of database is almost the same.
>> > As I use same application to control all these data, so I can only use
>> > one database user to connect to these database.
>> > And I have no needs to query table for different customer together.
>> >
>> > I wonder which I should use, different schema or different database to
>> store data?
>> >
>> > I 'd like to know the advantage and disadvantage for using schema or
>> database.
>>
>> In addition to what others have said:
>>
>> If you use multiple schemas within one database, the danger is greater
>> that
>> data are written to or read from the wrong schema if your application has
>> a bug
>> and does not make sure to always set search_path or qualify every access
>> with a
>> schema name.
>>
>> With multiple databases you are guaranteed not to access data from a
>> different
>> database.
>>
>> The main downside that I see to multiple databases is the overhead: each
>> of
>> the databases will have its own pg_catalog tables.
>>
>
> It can be an advantage - if your schema is pretty complex - thousands of
> procedures and tables - then a separate pg_catalog can be better, as there
> are issues with pg_dump and pg_restore on very large catalogs.
>
> So it depends on catalog size and complexity.
>
> Regards
>
> Pavel
>
>
>
>
>>
>> Yours,
>> Laurenz Albe
>>
>>
>
>


Re: [GENERAL] schema or database

2015-04-13 Thread Pavel Stehule
2015-04-13 10:43 GMT+02:00 Albe Laurenz :

> Michael Cheung wrote:
> > I have many similar database to store data for every customer.
> > Structure of database is almost the same.
> > As I use same application to control all these data, so I can only use
> > one database user to connect to these database.
> > And I have no needs to query table for different customer together.
> >
> > I wonder which I should use, different schema or different database to
> store data?
> >
> > I 'd like to know the advantage and disadvantage for using schema or
> database.
>
> In addition to what others have said:
>
> If you use multiple schemas within one database, the danger is greater that
> data are written to or read from the wrong schema if your application has
> a bug
> and does not make sure to always set search_path or qualify every access
> with a
> schema name.
>
> With multiple databases you are guaranteed not to access data from a
> different
> database.
>
> The main downside that I see to multiple databases is the overhead: each of
> the databases will have its own pg_catalog tables.
>

It can be an advantage - if your schema is pretty complex - thousands of
procedures and tables - then a separate pg_catalog can be better, as there
are issues with pg_dump and pg_restore on very large catalogs.

So it depends on catalog size and complexity.

Regards

Pavel




>
> Yours,
> Laurenz Albe
>
>


Re: [GENERAL] schema or database

2015-04-13 Thread Albe Laurenz
Michael Cheung wrote:
> I have many similar database to store data for every customer.
> Structure of database is almost the same.
> As I use same application to control all these data, so I can only use
> one database user to connect to these database.
> And I have no needs to query table for different customer together.
> 
> I wonder which I should use, different schema or different database to store 
> data?
> 
> I 'd like to know the advantage and disadvantage for using schema or database.

In addition to what others have said:

If you use multiple schemas within one database, the danger is greater that
data are written to or read from the wrong schema if your application has a bug
and does not make sure to always set search_path or qualify every access with a
schema name.

With multiple databases you are guaranteed not to access data from a different
database.

The main downside that I see to multiple databases is the overhead: each of
the databases will have its own pg_catalog tables.
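The search_path risk described here can be sketched as follows (a minimal
illustration; the schema and table names are invented):

```sql
-- Per-customer schemas inside one database:
CREATE SCHEMA customer_a;
CREATE SCHEMA customer_b;
CREATE TABLE customer_a.orders (id serial PRIMARY KEY, total numeric);
CREATE TABLE customer_b.orders (id serial PRIMARY KEY, total numeric);

-- The application must select the right schema for every session...
SET search_path = customer_a;
SELECT count(*) FROM orders;             -- reads customer_a.orders

-- ...because a forgotten or stale search_path silently reads another
-- customer's data. Schema-qualifying every access avoids the ambiguity:
SELECT count(*) FROM customer_a.orders;
```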

Yours,
Laurenz Albe



Re: [GENERAL] schema or database

2015-04-13 Thread John R Pierce

On 4/13/2015 12:07 AM, Alban Hertroys wrote:

That's easier to backup, sure, but you can't restore a single customer's schema 
easily that way. So if one customer messes up their data big time, you'll need 
to restore a backup for all customers in the DB.


if you use pg_dump -Fc, then you can specify the schema at pg_restore time.
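For example, a rough sketch of that approach (the database and schema names
are placeholders):

```shell
# Custom-format dump of the whole database:
pg_dump -Fc -f alldb.dump mydb

# Later, restore only one customer's schema from that same dump:
pg_restore -d mydb --schema=customer_a alldb.dump
```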



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] schema or database

2015-04-13 Thread Alban Hertroys

> On 13 Apr 2015, at 4:20, Ian Barwick  wrote:
> 
> On 13/04/15 11:08, Michael Cheung wrote:
>> hi, all;
>> 
>> I am new here. And I need some suggestion.
>> 
>> I have many similar database to store data for every customer.
>> Structure of database is almost the same.
>> As I use same application to control all these data, so I can only use
>> one database user to connect to these database.
>> And I have no needs to query table for different customer together.
>> 
>> I wonder which I should use, different schema or different database to store 
>> data?
>> 
>> I 'd like to know the advantage and disadvantage for using schema or 
>> database.
> 
> If as you say access to the database is via a single application database
> user, it will probably make more sense to use multiple schemas rather than
> multiple databases. Keeping everything in one database will simplify
> administration (e.g. making backups - you'll just need to dump the one 
> database
> rather than looping through a variable number) and will make life easier if 
> you
> ever need to do some kind of query involving multiple customers.

That's easier to backup, sure, but you can't restore a single customer's schema 
easily that way. So if one customer messes up their data big time, you'll need 
to restore a backup for all customers in the DB.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





Re: [GENERAL] schema or database

2015-04-12 Thread Michael Cheung
Thanks for your additional comment.
That makes it clearer; I'd better use schemas rather than databases.

yours, michael



On Sun, 12 Apr 2015 19:24:30 -0700
John R Pierce  wrote:

> On 4/12/2015 7:20 PM, Ian Barwick wrote:
> > If as you say access to the database is via a single application database
> > user, it will probably make more sense to use multiple schemas rather than
> > multiple databases. Keeping everything in one database will simplify
> > administration (e.g. making backups - you'll just need to dump the one 
> > database
> > rather than looping through a variable number) and will make life easier if 
> > you
> > ever need to do some kind of query involving multiple customers.
> > There will also be less overhead when adding a new schema vs adding
> > a new database.
> 
> and less overhead in connections, as one client connection can serve multiple 
> customers
> 
> -- john r pierce, recycling bits in santa cruz
> 
> 
> 





Re: [GENERAL] schema or database

2015-04-12 Thread Michael Cheung
Thanks for your suggestion.
I'd like to use schema as you suggest.

yours, michael

On Mon, 13 Apr 2015 11:20:59 +0900
Ian Barwick  wrote:

> On 13/04/15 11:08, Michael Cheung wrote:
> > hi, all;
> > 
> > I am new here. And I need some suggestion.
> > 
> > I have many similar database to store data for every customer.
> > Structure of database is almost the same.
> > As I use same application to control all these data, so I can only use
> > one database user to connect to these database.
> > And I have no needs to query table for different customer together.
> > 
> > I wonder which I should use, different schema or different database to store 
> > data?
> > 
> > I 'd like to know the advantage and disadvantage for using schema or 
> > database.
> 
> If as you say access to the database is via a single application database
> user, it will probably make more sense to use multiple schemas rather than
> multiple databases. Keeping everything in one database will simplify
> administration (e.g. making backups - you'll just need to dump the one 
> database
> rather than looping through a variable number) and will make life easier if 
> you
> ever need to do some kind of query involving multiple customers.
> There will also be less overhead when adding a new schema vs adding
> a new database.
> 
> 
> Regards
> 
> Ian Barwick
> 
> -- 
>  Ian Barwick   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
> 
> 





Re: [GENERAL] schema or database

2015-04-12 Thread John R Pierce

On 4/12/2015 7:20 PM, Ian Barwick wrote:

If as you say access to the database is via a single application database
user, it will probably make more sense to use multiple schemas rather than
multiple databases. Keeping everything in one database will simplify
administration (e.g. making backups - you'll just need to dump the one database
rather than looping through a variable number) and will make life easier if you
ever need to do some kind of query involving multiple customers.
There will also be less overhead when adding a new schema vs adding
a new database.


and less overhead in connections, as one client connection can serve 
multiple customers


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] schema or database

2015-04-12 Thread Ian Barwick
On 13/04/15 11:08, Michael Cheung wrote:
> hi, all;
> 
> I am new here. And I need some suggestion.
> 
> I have many similar database to store data for every customer.
> Structure of database is almost the same.
> As I use same application to control all these data, so I can only use
> one database user to connect to these database.
> And I have no needs to query table for different customer together.
> 
> I wonder which I should use, different schema or different database to store 
> data?
> 
> I 'd like to know the advantage and disadvantage for using schema or database.

If as you say access to the database is via a single application database
user, it will probably make more sense to use multiple schemas rather than
multiple databases. Keeping everything in one database will simplify
administration (e.g. making backups - you'll just need to dump the one database
rather than looping through a variable number) and will make life easier if you
ever need to do some kind of query involving multiple customers.
There will also be less overhead when adding a new schema vs adding
a new database.


Regards

Ian Barwick

-- 
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Schema Search Path Problem

2013-10-10 Thread Pavel Stehule
Hello


2013/10/10 Kalai R 

> Hi,
>
> I am developing .Net application using PSQL. I am using npgsql to connect
> PSQL database. In single database I have more than one schemas. After I
> connect database I set search path using "Set Searchpath command" for the
> required schema. It works perfectly.
>
> But after sometime, I need to change schema. so I set again search path
>   using "Set Searchpath command" . Now if I execute a single query the
> changed search path works correctly. But if I execute a pl/sql function
> written in background the changed search path does not work.
>


This is the expected behaviour for older releases, and it is fixed in 9.3.
PL/pgSQL functions hold their execution plans in a session plan cache, and
changing search_path didn't invalidate those caches. There are two workarounds:
a) use dedicated instances of the functions for each schema, or b) change
search_path only once, right after connecting - if you need a new search_path,
disconnect and create a new connection.
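The first workaround might look roughly like this - one copy of the function
per schema, each qualifying its table references explicitly, so the cached
plan never depends on search_path (schema, table and function names here are
made up):

```sql
CREATE FUNCTION schema_a.get_count() RETURNS bigint AS $$
BEGIN
    -- explicit qualification: the cached plan is bound to schema_a.items
    RETURN (SELECT count(*) FROM schema_a.items);
END;
$$ LANGUAGE plpgsql;

CREATE FUNCTION schema_b.get_count() RETURNS bigint AS $$
BEGIN
    RETURN (SELECT count(*) FROM schema_b.items);
END;
$$ LANGUAGE plpgsql;

-- The application then calls the fully qualified function:
-- SELECT schema_a.get_count();
```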

Regards

Pavel Stehule


>
> Help me to solve this issue.
>
> Thanks.
>


Re: [GENERAL] Schema (Search path issue) on PostgreSQL9.2

2013-05-16 Thread Tom Lane
chiru r  writes:
> Are there any schema (set search_path) behaviour changes from PostgreSQL 9.1
> to PostgreSQL 9.2?

Please read the release notes when updating to a new major version.
The first item under "Server Settings" in the 9.2 release notes is:

Silently ignore nonexistent schemas specified in search_path (Tom Lane)

This makes it more convenient to use generic path settings, which might 
include some schemas that don't exist in all databases.
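For illustration, a hypothetical session (the schema name is made up):

```sql
-- "missing_schema" does not exist in this database:
SET search_path = missing_schema, public;
-- 9.1 and earlier: the SET fails because the schema does not exist.
-- 9.2: the setting is accepted and the nonexistent schema is simply
--      skipped during lookup, so one generic setting works everywhere.
SHOW search_path;
```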


regards, tom lane




Re: [GENERAL] Schema (Search path issue) on PostgreSQL9.2

2013-05-16 Thread Raghavendra
> postgres=# select version();
> version
>
>
> ---
>  PostgreSQL 9.2.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2
> 20080704 (Red Hat 4.1.2-52), 64-bit
> (1 row)
>
> postgres=# \dn
> List of schemas
> Name|  Owner
> +--
>  information_schema | postgres
>  pg_catalog | postgres
>  pg_toast   | postgres
>  pg_toast_temp_1| postgres
>  public | postgres
> (5 rows)
>
>
Apart from your actual question, I am just curious about this output - how
come all the schemas are displayed, whereas in the latest releases only the
public schema is displayed if you use the meta-command \dn?

-bash-4.1$ ./psql -p 
psql (9.3beta1)
Type "help" for help.

postgres=# \dn
  List of schemas
  Name  |  Owner
+--
 public | postgres
(1 row)

--Raghav


Re: [GENERAL] Schema-only dump dumps no constraints, no triggers

2012-07-29 Thread Marek Kielar
Hi,


Dnia 28 lipca 2012 1:10 Adrian Klaver  napisał(a):

> What were the deleted files?
>WAL, Logs, other?
> What type of WAL replication are you doing?
>Streaming, log shipping, etc?
> What are your settings for the WAL replication?
>In particular wal_keep_segments ?
> Is the WAL replication actually working?


at this time - a couple days after restart, the clog hasn't re-formed yet. 
Thus, I am unable to tell you what files they were, we didn't pay that much 
attention to it then - there were some WAL files but I can't tell what the 
actual structure was. I'll provide this information whenever possible.

The WAL replication is a streaming replication with a hot standby server. The 
servers have a direct connection with one another. Configuration appended.

The replication is working fine.

The primary server also has a single mostly-"idle" transaction from any client 
node - a trait of an "always online" client application. Immediately after 
primary server restart this does not add much overhead, though. I can't tell 
whether this, in the long run, might cause the problem.

Configuration follows:
 - primary server postgresql.conf - WAL section
 - standby server postgresql.conf - WAL section
 - standby server recovery.conf

The primary server WAL configuration is:
#--
# WRITE AHEAD LOG
#--

# - Settings -

wal_level = hot_standby # minimal, archive, or hot_standby
# (change requires restart)
#fsync = on # turns forced synchronization on or off
#synchronous_commit = on# immediate fsync at commit
#wal_sync_method = fsync# the default is the first option 
# supported by the operating system:
#   open_datasync
#   fdatasync (default on Linux)
#   fsync
#   fsync_writethrough
#   open_sync
#full_page_writes = on  # recover from partial page writes
wal_buffers = 1MB   # min 32kB
# (change requires restart)
#wal_writer_delay = 200ms   # 1-10000 milliseconds

#commit_delay = 0   # range 0-100000, in microseconds
#commit_siblings = 5# range 1-1000

# - Checkpoints -

#checkpoint_segments = 30   # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min  # range 30s-1h
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s   # 0 disables

# - Archiving -

archive_mode = on   # allows archiving to be done
# (change requires restart)
archive_command = 'rsync %p /%f'   # command to use to archive a 
logfile segment
archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables

# - Streaming Replication -

max_wal_senders = 5 # max number of walsender processes
# (change requires restart)
#wal_sender_delay = 200ms   # walsender cycle time, 1-10000 milliseconds
wal_keep_segments = 32  # in logfile segments, 16MB each; 0 disables
#vacuum_defer_cleanup_age = 0   # number of xacts by which cleanup is delayed

# - Standby Servers -

#hot_standby = off  # "on" allows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s# max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s  # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay




The standby server WAL configuration is:
#--
# WRITE AHEAD LOG
#--

# - Settings -

#wal_level = minimal# minimal, archive, or hot_standby
# (change requires restart)
#fsync = on # turns forced synchronization on or off
#synchronous_commit = on# immediate fsync at commit
#wal_sync_method = fsync# the default is the first option 
# supported by the operating system:

Re: [GENERAL] Schema-only dump dumps no constraints, no triggers

2012-07-27 Thread Adrian Klaver

On 07/27/2012 09:32 AM, Marek Kielar wrote:

Hi, again,

I'm sorry about the lack of version information - I concentrated so much on 
describing the problem correctly, that I forgot to provide basic information. 
The version at locations is 9.1.4 (though this is irrelevant now), the server 
is 9.0.4.

We found what the problem was. Another problem stems from it, however. Please 
read on.

To add to the information already provided - we have a two-way backup of the template database. One 
is a WAL replication and the other is londiste (skytools) replication with periodic complete copy. 
As it turned out, the "stable" script uses not, as we remembered, the actual template 
database but the londiste-replicated database which was to make next complete copy a few days ago. 
The copy did not complete, however - the schema-table-column structure transfer completed, but the 
constraints and triggers did not get through somehow, as there was a lack of hard drive space. 
Digging on it, we found out that the drive's space was not used up by files in the filesystem, it 
was filled with deleted files that postgresql server was still clinging on to, probably for a good 
while. After restarting the server many, many gigabytes were suddenly made available on disk. And 
this is the new problem - the server has quite a throughput and this is probably what causes the 
"leakage". How can we force the server to let go of the files? Or maybe it is 
an actual leak that needs to be studied upon?


On a side note, obviously, the Windows dump came out alright because it was 
from the proper database, not the replicated copy.


What were the deleted files?
  WAL, Logs, other?
What type of WAL replication are you doing?
  Streaming, log shipping, etc?
What are your settings for the WAL replication?
  In particular wal_keep_segments ?
Is the WAL replication actually working?
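As a rough way to answer that last question yourself (the pg_stat_replication
view exists from 9.1; the xlog-location functions from 9.0):

```sql
-- On the primary (9.1+): one row per connected standby
SELECT client_addr, state, sent_location, replay_location
FROM pg_stat_replication;

-- On the standby: how far WAL has been received and replayed
SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();
```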



Best regards,
Marek Kielar






--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Schema-only dump dumps no constraints, no triggers

2012-07-27 Thread Marek Kielar
Hi, again,

I'm sorry about the lack of version information - I concentrated so much on 
describing the problem correctly, that I forgot to provide basic information. 
The version at locations is 9.1.4 (though this is irrelevant now), the server 
is 9.0.4.

We found what the problem was. Another problem stems from it, however. Please 
read on.

To add to the information already provided - we have a two-way backup of the 
template database. One is a WAL replication and the other is londiste 
(skytools) replication with periodic complete copy. As it turned out, the 
"stable" script uses not, as we remembered, the actual template database but 
the londiste-replicated database which was to make next complete copy a few 
days ago. The copy did not complete, however - the schema-table-column 
structure transfer completed, but the constraints and triggers did not get 
through somehow, as there was a lack of hard drive space. Digging on it, we 
found out that the drive's space was not used up by files in the filesystem, it 
was filled with deleted files that postgresql server was still clinging on to, 
probably for a good while. After restarting the server many, many gigabytes 
were suddenly made available on disk. And this is the new problem - the server 
has quite a throughput and this is probably what causes the "leakage". How can 
we force the server to let go of the files? Or maybe it is an actual leak that 
needs to be studied upon?

On a side note, obviously, the Windows dump came out alright because it was 
from the proper database, not the replicated copy.

Best regards,
Marek Kielar


Dnia 27 lipca 2012 4:46 Adrian Klaver  napisał(a):

> On 07/26/2012 04:09 PM, Marek Kielar wrote:
> > Hi,
> >
> > we are using "pg_dump -s" (schema-only) to copy the structure of a 
> > template/prototype database as a set-up for several dozen Fedora boxes. The 
> > dump used to work alright until very recently. Now, across new machines 
> > that are to be introduced into the network it consistently refuses to dump 
> > constraints and triggers - seems pg_dump just skips over them. Otherwise 
> > the dump seems to be complete, the schema-table-column layout is complete. 
> > We thought it was the template server problem, but the oddity is that a 
> > dump made with Windows version of pgAdmin3 comes out complete.
> >
> > The command we use is:
> > /usr/bin/pg_dump -h  -p  -U  -F p -N 
> > '*somename*' -N 'somename2' -N 'somename3' -N 'somename4' -N 'somename5' -T 
> > '*somename6*' -s -v -f /some/dir/schemacopy.sql 
> >
> > The dump is made using a script we consider to be stable and therefore it 
> > hasn't changed since a long while ago. We also weren't able to pin down any 
> > other change between the systems where it previously worked and the ones 
> > where it now refuses to - the operating system (Fedora 16) is the same, the 
> > hardware is the same, the template database server is the same. It doesn't 
> > matter whether we are running the script on an up-to-date system or an 
> > outdated-off-liveCD-installation version, so it most probably is not 
> > update-related. The server (as a system) is sometimes under pretty much 
> > load so it might be resource-related - be it currently or previously.
> >
> > Searching through the archives, I have only stumbled upon a post from 2003 
> > about a similar issue 
> > (http://archives.postgresql.org/pgsql-admin/2003-08/msg00239.php) which 
> > might be connected, however, since the reporter gave up quickly, the issue 
> > remained unsolved.
> >
> > How can we dig into this further? What might be happening?
> 
> Postgres version?
> Is there more than one version of PG on machine?
> The dump made with PgAdmin uses the same parameters?
> Any errors in the logs on either the dump or restore side?
> 
> >
> > Best regards,
> > Marek Kielar
> >
> >
> 
> 
> 




Re: [GENERAL] Schema-only dump dumps no constraints, no triggers

2012-07-26 Thread Adrian Klaver

On 07/26/2012 04:09 PM, Marek Kielar wrote:

Hi,

we are using "pg_dump -s" (schema-only) to copy the structure of a 
template/prototype database as a set-up for several dozen Fedora boxes. The dump used to 
work alright until very recently. Now, across new machines that are to be introduced into 
the network it consistently refuses to dump constraints and triggers - seems pg_dump just 
skips over them. Otherwise the dump seems to be complete, the schema-table-column layout 
is complete. We thought it was the template server problem, but the oddity is that a dump 
made with Windows version of pgAdmin3 comes out complete.

The command we use is:
/usr/bin/pg_dump -h  -p  -U  -F p -N 
'*somename*' -N 'somename2' -N 'somename3' -N 'somename4' -N 'somename5' -T '*somename6*' -s -v -f 
/some/dir/schemacopy.sql 

The dump is made using a script we consider to be stable and therefore it 
hasn't changed since a long while ago. We also weren't able to pin down any 
other change between the systems where it previously worked and the ones where 
it now refuses to - the operating system (Fedora 16) is the same, the hardware 
is the same, the template database server is the same. It doesn't matter 
whether we are running the script on an up-to-date system or an 
outdated-off-liveCD-installation version, so it most probably is not 
update-related. The server (as a system) is sometimes under pretty much load so 
it might be resource-related - be it currently or previously.

Searching through the archives, I have only stumbled upon a post from 2003 
about a similar issue 
(http://archives.postgresql.org/pgsql-admin/2003-08/msg00239.php) which might 
be connected, however, since the reporter gave up quickly, the issue remained 
unsolved.

How can we dig into this further? What might be happening?


Postgres version?
Is there more than one version of PG on machine?
The dump made with PgAdmin uses the same parameters?
Any errors in the logs on either the dump or restore side?



Best regards,
Marek Kielar





--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Schema version control

2012-01-23 Thread Bill Moran
In response to Roger Leigh :

> On Thu, Feb 10, 2011 at 02:58:15PM -0700, Rob Sargent wrote:
> > On 02/10/2011 02:38 PM, Royce Ausburn wrote:
> > > My company is having trouble managing how we upgrade schema changes across
> > > many versions of our software.  I imagine this is a common problem and
> > > there're probably some neat solutions that we don't know about.  

http://dbsteward.org

We've been using this for several years and it's made our schema versioning
almost a non-issue.  Finally got the go-ahead to release it just this
month.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] Schema version control

2012-01-23 Thread Roger Leigh
On Thu, Feb 10, 2011 at 02:58:15PM -0700, Rob Sargent wrote:
> On 02/10/2011 02:38 PM, Royce Ausburn wrote:
> > My company is having trouble managing how we upgrade schema changes across
> > many versions of our software.  I imagine this is a common problem and
> > there're probably some neat solutions that we don't know about.  
> > 
> > For the last 10 years we have been writing bash shell scripts essentially
> > numbered in order db0001, db0002, db0003 The number represents the
> > schema version which is recorded in the database and updated by the shell
> > scripts.  We have a template that provides all the functionality we need,
> > we just copy the script and fill in the blanks.  The schema upgrade
> > scripts are committed to svn along with the software changes, and we have
> > a process when installing the software at a site that runs the scripts on
> > the DB in order before starting up the new version of the software.
> 
> Don't the bash scripts get checked in to .../perforce/cvs/svn/git/...?
> Aren't they part of the resources of the project(s)?

I was thinking about this a little more.  With the new CREATE
EXTENSION functionality in Postgres, we have the infrastructure to
run various SQL scripts to migrate between versioned states.
Obviously the extension code relates to extensions such as
datatypes.  I was wondering if this is sufficiently generic
that it could be used to migrate between different versions of
a schema?

This wouldn't be using the EXTENSION functionality, just the
ability to run the scripts.  This would enable easy upgrades
(and downgrades, branching etc.) between different schema
versions, providing that the appropriate scripts were installed.
If this were optionally also accessible via an SQL syntax such
as an analogue of CREATE and/or ALTER EXTENSION, it would
provide a reliable and standardised method for installing and
upgrading a schema, which would potentially prevent a great
deal of wheel-reinvention between software packages.
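A sketch of how that could look, reusing only the extension script-running
conventions (all file, extension and table names here are hypothetical - this
is not an existing feature):

```sql
-- Hypothetical on-disk layout, following the extension conventions:
--   myapp.control           default_version = '1.2'
--   myapp--1.1.sql          creates all schema objects at version 1.1
--   myapp--1.1--1.2.sql     upgrade script from 1.1 to 1.2, e.g.:
ALTER TABLE customer ADD COLUMN email text;

-- Installing and upgrading would then be standardised as:
-- CREATE EXTENSION myapp VERSION '1.1';
-- ALTER EXTENSION myapp UPDATE TO '1.2';
```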


Regards,
Roger

-- 
  .''`.  Roger Leigh
 : :' :  Debian GNU/Linux http://people.debian.org/~rleigh/
 `. `'   Printing on GNU/Linux?   http://gutenprint.sourceforge.net/
   `-GPG Public Key: 0x25BFB848   Please GPG sign your mail.



Re: [GENERAL] schema question

2012-01-19 Thread Raymond O'Donnell
On 19/01/2012 20:14, Heine Ferreira wrote:
> Hi
> 
> I saw with PGAdmin that there is a public schema in the default postgres
> database.
> Does every database have a public schema?

Yes.

> What is a schema and can you create your own?

A schema is a means of making logical divisions within your database.
Read all about it here:

http://www.postgresql.org/docs/9.1/static/sql-createschema.html

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] Schema for Website Comments

2011-07-12 Thread Vibhor Kumar

On Jul 12, 2011, at 7:54 PM, Prabhat Kumar wrote:

> Today I need to create a schema for my application website that allows user 
> comments too.
> I think we have to maintain hierarchical data and it is very common as all 
> sites are supporting this feature.
> Can somebody suggest me some guidelines to follow and some links too.

I think you would like to go through following thread:
http://archives.postgresql.org/pgsql-sql/2010-04/msg6.php

Presentation:
http://wiki.postgresql.org/images/9/91/Pguswest2008hnd.pdf

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
vibhor.ku...@enterprisedb.com
Blogs: http://vibhork.blogspot.com
http://vibhorkumar.wordpress.com




Re: [GENERAL] Schema for Website Comments

2011-07-12 Thread Prabhat Kumar
http://www.ferdychristant.com/blog//archive/DOMM-7QJPM7

On Sun, Jul 10, 2011 at 10:14 PM, Adarsh Sharma wrote:

> Dear all,
>
> Today I need to create a schema for my application website that allows user
> comments too.
>
> I think we have to maintain hierarchical data and it is very common as all
> sites are supporting this feature.
>
> Can somebody suggest me some guidelines to follow and some links too.
>
>
> Thanks
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql?unsub=aim.prab...@gmail.com
>
>


-- 
Best Regards,

Prabhat Kumar
MySQL DBA

My Blog: http://adminlinux.blogspot.com
My LinkedIn: http://www.linkedin.com/in/profileprabhat


Re: [GENERAL] Schema for Website Comments

2011-07-11 Thread Chris Travers
On Sun, Jul 10, 2011 at 10:14 PM, Adarsh Sharma
 wrote:
> Dear all,
>
> Today I need to create a schema for my application website that allows user
> comments too.
>
> I think we have to maintain hierarchical data; it is very common, as most
> sites support this feature.
>
> Can somebody suggest some guidelines to follow, and some links too?
>

PostgreSQL supports WITH RECURSIVE as of 8.4 and higher.

http://www.postgresql.org/docs/8.4/static/queries-with.html
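A minimal sketch of the recursive-CTE approach for threaded comments. The table layout is invented, and sqlite3 is used here only so the snippet is self-contained; the same WITH RECURSIVE syntax works in PostgreSQL 8.4+:

```python
import sqlite3

# Hypothetical minimal comments table: each comment points at its parent.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE comments (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES comments(id),
        body      TEXT NOT NULL
    );
    INSERT INTO comments VALUES
        (1, NULL, 'root comment'),
        (2, 1,    'first reply'),
        (3, 2,    'reply to the reply'),
        (4, NULL, 'another root comment');
""")

# Fetch one thread (rooted at id = 1) with its depth, children after parents.
rows = conn.execute("""
    WITH RECURSIVE thread(id, parent_id, body, depth) AS (
        SELECT id, parent_id, body, 0 FROM comments WHERE id = 1
        UNION ALL
        SELECT c.id, c.parent_id, c.body, t.depth + 1
        FROM comments c JOIN thread t ON c.parent_id = t.id
    )
    SELECT depth, body FROM thread ORDER BY depth
""").fetchall()

for depth, body in rows:
    print("  " * depth + body)
```

Ordering siblings properly (e.g. by a path array) takes a slightly richer CTE, but the shape is the same.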

Best Wishes,
Chris Travers



Re: [GENERAL] schema access privs

2011-04-05 Thread Ray Stell
On Tue, Apr 05, 2011 at 06:33:46PM +0530, Vibhor Kumar wrote:
> 
> On Apr 5, 2011, at 6:07 PM, Ray Stell wrote:
> 
> > On Tue, Apr 05, 2011 at 03:58:46PM +0530, Vibhor Kumar wrote:
> >> 
> >> Following are details:
> >> postgres=UC/postgres+
> >> [user]=[privs]/[role who granted privs]
> > 
> > What's the logic for reporting the U priv twice?  
> 
> [public]=[privs]/[role who granted privs]
> 
> For public there is no username; it's always = (an equals sign) followed by 
> [privs]/[role who granted privs].


template1=# \pset expanded
Expanded display is on.
template1=# \dn+ information_schema
List of schemas
-[ RECORD 1 ]-+-
Name  | information_schema
Owner | postgres
Access privileges | postgres=UC/postgres
  | =U/postgres
Description   | 


From this:

template1=# \pset expanded
Expanded display is off.
template1=# \dn+ information_schema
                     List of schemas
        Name        |  Owner   |  Access privileges   | Description 
--------------------+----------+----------------------+-------------
 information_schema | postgres | postgres=UC/postgres+| 
                    |          | =U/postgres          | 
(1 row)

I was reading 3 fields:

1. postgres=UC
2. postgres=U
3. postgres

which made no sense at all.



Re: [GENERAL] schema access privs

2011-04-05 Thread Vibhor Kumar

On Apr 5, 2011, at 6:07 PM, Ray Stell wrote:

> On Tue, Apr 05, 2011 at 03:58:46PM +0530, Vibhor Kumar wrote:
>> 
>> Following are details:
>> postgres=UC/postgres+
>> [user]=[privs]/[role who granted privs]
> 
> What's the logic for reporting the U priv twice?  


If you are talking about the following:
 =U/postgres  
[public]=[privs]/[role who granted privs]

For public there is no username; it's always = (an equals sign) followed by 
[privs]/[role who granted privs].


Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
vibhor.ku...@enterprisedb.com
Blog:http://vibhork.blogspot.com




Re: [GENERAL] schema access privs

2011-04-05 Thread Ray Stell
On Tue, Apr 05, 2011 at 03:58:46PM +0530, Vibhor Kumar wrote:
> 
> Following are details:
>  postgres=UC/postgres+
>  [user]=[privs]/[role who granted privs]

What's the logic for reporting the U priv twice?  



Re: [GENERAL] schema access privs

2011-04-05 Thread Vibhor Kumar
[ Please don't overpost the list. Adding PG General List]

On Apr 5, 2011, at 3:30 AM, Ray Stell wrote:

> On Tue, Apr 05, 2011 at 02:42:30AM +0530, Vibhor Kumar wrote:
>> 
>> On Apr 5, 2011, at 2:31 AM, Ray Stell wrote:
>> 
>>> What does the results in col 'List of schemas Access privileges'
>>> indicate?  Are those three results split by the '/' char?  What
>>> are the three sections?  What is 'postgres+' 
>> 
>> Following link contains detail about Access privileges:
>> http://www.postgresql.org/docs/8.4/static/sql-grant.html
> 
> yeah, I saw that, but that's not what I asked about.  there seems
> to be 3 sections of the result.  What are the sections?  and what is
> postgres+ ?

Following are the details:
 postgres=UC/postgres+
 [user]=[privs]/[role who granted privs]

Here the user postgres has been granted USAGE and CREATE privileges by the 
role postgres.

The trailing + just means the entry wraps onto the next line.

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
vibhor.ku...@enterprisedb.com
Blog:http://vibhork.blogspot.com




Re: [GENERAL] schema access privs

2011-04-04 Thread Vibhor Kumar

On Apr 5, 2011, at 2:31 AM, Ray Stell wrote:

> What does the results in col 'List of schemas Access privileges'
> indicate?  Are those three results split by the '/' char?  What
> are the three sections?  What is 'postgres+' 
> 
> Can't find this explained in the docs.
> 
> template1-# \dn+ pg_catalog
>         List of schemas
>     Name    |  Owner   |  Access privileges   |      Description      
> ------------+----------+----------------------+-----------------------
>  pg_catalog | postgres | postgres=UC/postgres+| system catalog schema
>             |          | =U/postgres          | 
> (1 row)

Following link contains detail about Access privileges:
http://www.postgresql.org/docs/8.4/static/sql-grant.html
  r -- SELECT ("read")
  w -- UPDATE ("write")
  a -- INSERT ("append")
  d -- DELETE
  D -- TRUNCATE
  x -- REFERENCES
  t -- TRIGGER
  X -- EXECUTE
  U -- USAGE
  C -- CREATE
  c -- CONNECT
  T -- TEMPORARY
  arwdDxt -- ALL PRIVILEGES (for tables, varies for other objects)
  * -- grant option for preceding privilege

  / -- role that granted this privilege
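Putting the three parts together, an entry like "postgres=UC/postgres" splits into grantee, privileges, and grantor. A tiny illustrative parser (not part of PostgreSQL, and simplified: it ignores quoted role names and the * grant-option markers a real parser would need to handle):

```python
def parse_aclitem(item):
    """Split a psql privilege entry like 'postgres=UC/postgres' into
    (grantee, privileges, grantor).  An empty grantee means PUBLIC."""
    grantee, rest = item.split("=", 1)
    privs, grantor = rest.split("/", 1)
    return (grantee or "PUBLIC", privs, grantor)

# The two entries from the pg_catalog example:
print(parse_aclitem("postgres=UC/postgres"))
print(parse_aclitem("=U/postgres"))
```

So the first entry reads "postgres has USAGE and CREATE, granted by postgres", and the second "PUBLIC has USAGE, granted by postgres".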

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
vibhor.ku...@enterprisedb.com
Blog:http://vibhork.blogspot.com




Re: [GENERAL] schema Cleaning

2011-02-24 Thread salah jubeh
Hello, 

Exactly, I want to drop unused tables, views, etc.  I am now writing a shell 
script to handle this by analyzing the log files. If you have suggestions 
or comments I will be glad to hear them.

Regards  

 





From: Michael Black 
To: s_ju...@yahoo.com
Sent: Wed, February 23, 2011 11:57:01 PM
Subject: RE: [GENERAL] schema Cleaning

 I do not know what your criteria are for dropping things, but I would be very 
sure that something is no longer used.  For example, some tables are used 
only once a quarter or once a year, and the same is true of views and 
procedures.  That also depends on the type of data: accounting or 
inventory versus customer.  Just some thoughts.

Michael


Date: Wed, 23 Feb 2011 00:10:03 -0800
From: s_ju...@yahoo.com
Subject: Re: [GENERAL] schema Cleaning
To: s_ju...@yahoo.com; pgsql-general@postgresql.org


Hello Guys,

This is a simple way to check if there is no activity on the table, 

SELECT relname,seq_tup_read,idx_tup_fetch  FROM pg_stat_user_tables 
WHERE (idx_tup_fetch + seq_tup_read)= 0;


Is there any way to make similar checks on views?

Regards 


 





From: salah jubeh 
To: pgsql-general@postgresql.org
Sent: Mon, February  21, 2011 4:53:41 PM
Subject: [GENERAL] schema Cleaning


Hello,

I am trying to drop deprecated tables, views, and procedures from an existing 
database. I am checking the pg_stat_user_tables view to see whether the tables 
have been accessed recently, and also to check live tuples. For deprecated views 
and procedures, I am wondering how I can do that: for example, suppose a table 
is used, and there is a view defined on this table, but no one is using the 
view. Is there a way to check view statistics? 


Regards  


  

Re: [GENERAL] schema Cleaning

2011-02-23 Thread Guillaume Lelarge
On 23/02/2011 09:10, salah jubeh wrote:
> [...]
> This is a simple way to check if there is no activity on the table, 
> 
> SELECT relname,seq_tup_read,idx_tup_fetch  FROM pg_stat_user_tables 
> WHERE (idx_tup_fetch + seq_tup_read)= 0;
> 
> 
> Is there any way to make similar checks on views?
> 

No, there's no stats on views.
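Since there are no view statistics, one rough workaround: reads that go through a view are counted against the view's underlying tables in pg_stat_user_tables, so a view whose underlying tables all show zero reads cannot have been queried in that statistics window. A sketch of that filtering step (the view-to-table mapping would come from pg_depend/pg_rewrite or from parsing pg_views definitions; that part is left out here, and all names below are made up):

```python
def candidate_views(view_deps, unused_tables):
    """Return views that are drop candidates: every table the view
    depends on is itself unused.  view_deps maps view name -> set of
    underlying table names; unused_tables is a set of table names,
    e.g. built from the pg_stat_user_tables query in this thread."""
    return sorted(
        v for v, tables in view_deps.items()
        if tables and tables <= unused_tables
    )

deps = {
    "v_orders_2009": {"orders_2009"},
    "v_active":      {"orders_2009", "customers"},
}
unused = {"orders_2009"}   # tables with idx_tup_fetch + seq_tup_read = 0
print(candidate_views(deps, unused))
```

This is a necessary condition only: a view on actively-used tables may still be unused, and only log analysis (as the original poster planned) can catch that case.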


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [GENERAL] schema Cleaning

2011-02-23 Thread salah jubeh
Hello Guys,

This is a simple way to check if there is no activity on the table, 

SELECT relname,seq_tup_read,idx_tup_fetch  FROM pg_stat_user_tables 
WHERE (idx_tup_fetch + seq_tup_read)= 0;


Is there any way to make similar checks on views?

Regards


 





From: salah jubeh 
To: pgsql-general@postgresql.org
Sent: Mon, February 21, 2011 4:53:41 PM
Subject: [GENERAL] schema Cleaning


Hello,

I am trying to drop deprecated tables, views, and procedures from an existing 
database. I am checking the pg_stat_user_tables view to see whether the tables 
have been accessed recently, and also to check live tuples. For deprecated views 
and procedures, I am wondering how I can do that: for example, suppose a table 
is used, and there is a view defined on this table, but no one is using the 
view. Is there a way to check view statistics? 


Regards  


  

Re: [GENERAL] Schema version control

2011-02-21 Thread ChronicDB Community Team
On Fri, 2011-02-11 at 08:35 -0500, Daniel Popowich wrote:

> think no software process can make anyone happy.  It's a human
>  process: declare someone the owner of the database schema, let them
>  own the long term development of the schema, and if anyone needs a
>  change, they have to communicate and vet those changes with the db
>  designer.

This cannot be overstated. It is very much a human process, and the
database schema owner should be governing the data model. 




Re: [GENERAL] Schema version control

2011-02-21 Thread ChronicDB Community Team

> What about upgrades that can't be derived directly from an inspection
> of the schema?  Some examples:
> 
> - Adding a NOT NULL constraint (without adding a DEFAULT).  You often
> want to precede this with filling in any existing NULL values, so the
> new constraint doesn't fail.

This is an important point. The upgrade to the new schema should be
defined with a matching transformer that will initialize such a value.

> - Updating triggers, functions and their effects.  For example, when I
> have an FTS index with a trigger to update an index column, and I
> change the underlying trigger, I often do something like "UPDATE table
> SET column = column", to cause all of the update triggers to fire and
> recalculate the index columns.

This is the kind of work one should not need to write for every upgrade.
The upgrade should be specified as succinctly as possible, and
everything else around it should be automatic.

> - Creating a new column based on an old one, and removing the old one;
> eg. add a column "n", run "UPDATE ... SET n = i*j * 2", and then drop
> the old columns "i" and "j".

Again, a transformer for computing such new columns is clearly needed.
With ChronicDB not only can such a transformer be provided, but the
old version of the application client (like a web app) can continue to
use the old schema while maintaining backwards compatibility: its query
requests are also transformed.

> - For efficiency, dropping a specific index while making a large
> update, and then recreating the index.
> 
> In my experience, while generating schema updates automatically is
> handy, it tends to make nontrivial database updates more complicated.
> These sorts of things happen often and are an integral part of a
> database update, so I'm just curious how/if you deal with them.

Glenn, I think you have hit the nail on the head here. A more systematic
model for dealing with nontrivial database updates is needed. In the
case of making a large update that recreates an index, one approach
might be to instantiate a new schema that has the new index on it, and
ensure service is switched automatically and with data consistency.

> I've used Ruby's migrations, and for my Django databases I use my own
> migration system which is based in principle off of it: create scripts
> to migrate the database from version X to X+1 and X-1, and upgrade or
> downgrade by running the appropriate scripts in sequence.
> 
> It's not ideal, since it can't generate a database at a specific
> version directly; it always has to run through the entire sequence of
> migrations to the version you want, and the migrations accumulate.
> However, it can handle whatever arbitrary steps are needed to update a
> database, and I don't need to test updates from every version to every
> other version.

This is something we are planning on adding to ChronicDB. Given a list
of database upgrade definitions, the "end result" should be computed so
that one could upgrade from any arbitrary version to any other arbitrary
version.




Re: [GENERAL] Schema Archive can't find table

2011-02-21 Thread Vibhor Kumar

On Feb 18, 2011, at 9:10 PM, trex005 wrote:

> I am trying to archive a schema, however there is a certain table that
> gets skipped with pg_dump
> 
> The table's name is 'servers'.  I checked to see if it is a reserved
> word, and it does not appear to be...
> 
> Thinking I was missing something I tried using -t servers, and I get :
> pg_dump: No matching tables were found
> 
It should not be skipped. Try schemaname.tablename with the -t switch.



Thanks & Regards,
Vibhor Kumar



Re: [GENERAL] Schema Archive can't find table

2011-02-19 Thread Adrian Klaver
On Friday, February 18, 2011 7:40:59 am trex005 wrote:
> I am trying to archive a schema, however there is a certain table that
> gets skipped with pg_dump

What does the log file show when you do the dump?
Any output to the screen when the dump is being run?
If not, maybe run with the -v switch to get verbose output?


> 
> The table's name is 'servers'.  I checked to see if it is a reserved
> word, and it does not appear to be...
> 
> Thinking I was missing something I tried using -t servers, and I get :
> pg_dump: No matching tables were found
> 
> Any help?
> 
> Second question

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Schema Archive can't find table

2011-02-19 Thread Andy Colson

On 02/19/2011 08:03 AM, Raymond O'Donnell wrote:

On 18/02/2011 15:40, trex005 wrote:

I am trying to archive a schema, however there is a certain table that
gets skipped with pg_dump

The table's name is 'servers'. I checked to see if it is a reserved
word, and it does not appear to be...

Thinking I was missing something I tried using -t servers, and I get :
pg_dump: No matching tables were found


Do you mean schema as in the table definitions etc, or schema as in a 
particular logical division in the DB?

If the latter, you probably have to specify the schema name:

pg_dump -t my_schema.servers 

Ray.



...and the backup user probably needs rights.  Can you select from it?

-Andy



Re: [GENERAL] Schema Archive can't find table

2011-02-19 Thread Raymond O'Donnell

On 18/02/2011 15:40, trex005 wrote:

I am trying to archive a schema, however there is a certain table that
gets skipped with pg_dump

The table's name is 'servers'.  I checked to see if it is a reserved
word, and it does not appear to be...

Thinking I was missing something I tried using -t servers, and I get :
pg_dump: No matching tables were found


Do you mean schema as in the table definitions etc, or schema as in a 
particular logical division in the DB?


If the latter, you probably have to specify the schema name:

  pg_dump -t my_schema.servers 

Ray.

--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] Schema version control

2011-02-11 Thread Glenn Maynard
On Fri, Feb 11, 2011 at 8:35 AM, Daniel Popowich
wrote:

> FWIW, this is what I do:
>
>  1.  I have a table in my database, meta, that contains exactly one
>  row, and holds configuration information.  A minimal version of this
>  table:
>
>CREATE TABLE meta (
>id integer DEFAULT 1 NOT NULL CHECK (id = 1),
>major integer NOT NULL,
>minor integer NOT NULL,
>patch integer NOT NULL
>);
>

What both Rails and my own system do is, rather than having a version, give
each migration a name; for example,
"20110211-193000-create_initial_db".  The table then holds the set of
migration names that have been applied to the database.  The migrations are
applied in sorted order; hence the prefixed timestamp.

The benefit of this is when you're merging changes from separate branches.
Disconnected branches can each add their own migration rules for the part of
the database they affect.  When the two branches are merged, they fit
together naturally.  This doesn't work at all when you have ordered "version
numbers".

For example, if "20110211-add_column" is added to trunk, and then a branch
is merged which has "20110210-add_table", the merged migration fits in
implicitly and there are no collisions as with version numbers.  (It's
applied out of order if you already ran add_column, of course, so you still
need to be sure that branches don't collide with each other.)  Databases
migrate straightforwardly after the merge, regardless of whether they were
migrated to trunk or to the branch.
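The name-based scheme described above can be sketched as a small runner: migrations are (name, sql) pairs applied in sorted order, with an applied_migrations table recording which names have already run, so merged branches' migrations slot in without renumbering. The names and SQL here are invented, and sqlite3 stands in for the real database:

```python
import sqlite3

# Hypothetical migrations from two branches; the timestamp prefix gives
# the sort order, and the set-of-applied-names model means a migration
# merged in "out of order" is still picked up.
MIGRATIONS = [
    ("20110210-120000-add_table",
     "CREATE TABLE widgets (id INTEGER PRIMARY KEY)"),
    ("20110211-193000-add_column",
     "ALTER TABLE widgets ADD COLUMN name TEXT"),
]

def migrate(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS applied_migrations (name TEXT PRIMARY KEY)")
    done = {row[0] for row in conn.execute("SELECT name FROM applied_migrations")}
    for name, sql in sorted(MIGRATIONS):
        if name not in done:
            conn.execute(sql)
            conn.execute("INSERT INTO applied_migrations VALUES (?)", (name,))

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)   # idempotent: already-applied names are skipped
applied = [r[0] for r in
           conn.execute("SELECT name FROM applied_migrations ORDER BY name")]
print(applied)
```

A real runner would also wrap each migration in a transaction and support a reverse (downgrade) script per name.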

-- 
Glenn Maynard


Re: [GENERAL] Schema version control

2011-02-11 Thread Bill Moran
In response to Alban Hertroys :

> On 10 Feb 2011, at 23:59, Bill Moran wrote:
> 
> > The overview:
> > You store your schema and data as XML (this is easy to migrate to, because
> > it includes a tool that makes the XML from a live database)
> > Keep your XML schema files in some RCS.
> 
> That reminds me of something I've been wondering about - How well do modern 
> RCSs deal with structured data formats (like XML)? It would appear that most 
> of them still use diff, which is line-based with limited context tracking, to 
> determine change-sets.
> 
> Is that combination guaranteed to result in valid XML if you merge revisions 
> that are far enough apart? Or are there RCSs around that know about XML 
> format (and possibly other structured formats) and handle it differently?
> 
> I've heavily used RCSs (mostly Subversion) with, for example HTML, and merge 
> conflicts haven't been unusual. It doesn't help that Subversion's diff is a 
> bit simplistic about white-space; I'm sure some of those conflicts were quite 
> unnecessary.

We keep a lot of XML in RCS (Subversion) aside from just our DB schemas,
and our experience has been somewhat similar to your experience with
HTML.  Since SVN is a line-based RCS, it can occasionally be painful
to resolve conflicts.

Probably our best defense against the problem is that we have a commit
hook that won't let you commit unless the xml file validates against
the DTD first.  This forces developers to make sure they haven't
mangled the xml.

The second best defense (in my opinion) is careful formatting.  By keeping
each tag on its own line, and indenting consistently, the number of mangled
xml files was never much worse than the number of mangled source code
files. (even before we had the commit hooks)

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] Schema version control

2011-02-11 Thread Daniel Popowich

Glenn Maynard writes:
> - Adding a NOT NULL constraint (without adding a DEFAULT).  You often want
> to precede this with filling in any existing NULL values, so the new
> constraint doesn't fail.
> - Updating triggers, functions and their effects.  For example, when I have
> an FTS index with a trigger to update an index column, and I change the
> underlying trigger, I often do something like "UPDATE table SET column =
> column", to cause all of the update triggers to fire and recalculate the
> index columns.
> - Creating a new column based on an old one, and removing the old one; eg.
> add a column "n", run "UPDATE ... SET n = i*j * 2", and then drop the old
> columns "i" and "j".
> - Updating data from an external source, such as ORM model code; for
> example, if you have a table representing external files, an update may want
> to calculate and update the SHA-1 of each file.
> - For efficiency, dropping a specific index while making a large update, and
> then recreating the index.
> 
> In my experience, while generating schema updates automatically is handy, it
> tends to make nontrivial database updates more complicated.  These sorts of
> things happen often and are an integral part of a database update, so I'm
> just curious how/if you deal with them.
> 
> I've used Ruby's migrations, and for my Django databases I use my own
> migration system which is based in principle off of it: create scripts to
> migrate the database from version X to X+1 and X-1, and upgrade or downgrade
> by running the appropriate scripts in sequence.
> 
> It's not ideal, since it can't generate a database at a specific version
> directly; it always has to run through the entire sequence of migrations to
> the version you want, and the migrations accumulate.  However, it can handle
> whatever arbitrary steps are needed to update a database, and I don't need
> to test updates from every version to every other version.

I'm with Glenn on this point.  I have found updating a db version to
be far more complex (for reasons he illustrates and which I find more
common than not) than any automation tool can handle.  And I wonder if
the time spent developing such a tool (or writing changesets, xml,
etc. for a given tool) actually saves development time.

FWIW, this is what I do:

 1.  I have a table in my database, meta, that contains exactly one
 row, and holds configuration information.  A minimal version of this
 table:

CREATE TABLE meta (
id integer DEFAULT 1 NOT NULL CHECK (id = 1),
major integer NOT NULL,
minor integer NOT NULL,
patch integer NOT NULL
);

  The first column is to guarantee I only have one row.  The next
  three provide a tuple for my version, e.g., 1.1.3.  I add other
  columns for software configuration as needed.

 2.  My application configuration has a DB_VERSION variable which
 defines the version of the database the software depends on.  If
 there's a mismatch, my programs can't connect to the database, thus
 guaranteeing db<=>software happiness.

 3.  Like Glenn, I have a script that can go forwards and backwards,
 one revision at a time.  The effort in maintaining this script is
 minimal, actually: in my RCS, I have the current schema, which is
 generated with: `pg_dump -O -s`.  Based on diffs between current
 system and last revision I can manually generate the necessary DDL
 statements for the script.  If I have data as part of my "schema,"
 like the row in my meta table, those are simple enough to examine and
 add to the script with DML statements.

 4.  Whenever a revision changes the schema, I tag my repository,
 so I can easily check out that version.

 5.  As for branches.  If production is at 1.1.3 and three branches
 create versions 1.1.4, another 1.1.4 and 1.2.0, and all three want to
 merge their various changes back into a 1.1.3 production database?  I
 say, without being flip, don't let this happen.  Here is where I
 think no software process can make anyone happy.  It's a human
 process: declare someone the owner of the database schema, let them
 own the long term development of the schema, and if anyone needs a
 change, they have to communicate and vet those changes with the db
 designer.
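Points 1 and 2 above might look like this in practice. The meta table layout and the idea of a DB_VERSION check follow the email; sqlite3 is used only so the sketch is self-contained, and the error handling is illustrative:

```python
import sqlite3

# The version this build of the software expects (point 2).
DB_VERSION = (1, 1, 3)

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE meta (
        id    INTEGER DEFAULT 1 NOT NULL CHECK (id = 1),
        major INTEGER NOT NULL,
        minor INTEGER NOT NULL,
        patch INTEGER NOT NULL
    );
    INSERT INTO meta (major, minor, patch) VALUES (1, 1, 3);
""")

def check_version(conn):
    """Refuse to start if the database version doesn't match (point 2)."""
    row = conn.execute("SELECT major, minor, patch FROM meta").fetchone()
    if tuple(row) != DB_VERSION:
        raise RuntimeError(
            f"database is at {tuple(row)}, software expects {DB_VERSION}")
    return tuple(row)

print(check_version(conn))
```

The CHECK (id = 1) plus the DEFAULT guarantees the table can never hold more than that one configuration row.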


Dan



Re: [GENERAL] Schema version control

2011-02-11 Thread Bill Moran
In response to Andre Lopes :

> Hi,
> 
> Where can we download dbsteward?

You can't yet.  We're still in the process of getting everything lined up
to release it.

We've registered dbsteward.org, so when we have things ready, that'll be
where you can go.  Expect it to come live in the next few months.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] Schema version control

2011-02-11 Thread Andre Lopes
Hi,

Where can we download dbsteward?

Best Regards,



On Fri, Feb 11, 2011 at 5:16 AM, Bill Moran  wrote:
> In response to Glenn Maynard :
>
>> On Thu, Feb 10, 2011 at 6:44 PM, Bill Moran wrote:
>>
>> > dbsteward can do downgrades ... you just feed it the old schema and
>> > the new schema in reverse of how you'd do an upgrade ;)
>> >
>> > Oh, also, it allows us to do installation-specific overrides.  We use
>> > this ONLY for DML for lookup lists where some clients have slightly
>> > different names for things than others.  In theory, it could do DDL
>> > overrides as well, but we decided on a policy of not utilizing that
>> > because we wanted the schemas to be consistent on all our installs.
>> >
>>
>> What about upgrades that can't be derived directly from an inspection of the
>> schema?  Some examples:
>>
>> - Adding a NOT NULL constraint (without adding a DEFAULT).  You often want
>> to precede this with filling in any existing NULL values, so the new
>> constraint doesn't fail.
>> - Updating triggers, functions and their effects.  For example, when I have
>> an FTS index with a trigger to update an index column, and I change the
>> underlying trigger, I often do something like "UPDATE table SET column =
>> column", to cause all of the update triggers to fire and recalculate the
>> index columns.
>> - Creating a new column based on an old one, and removing the old one; eg.
>> add a column "n", run "UPDATE ... SET n = i*j * 2", and then drop the old
>> columns "i" and "j".
>> - Updating data from an external source, such as ORM model code; for
>> example, if you have a table representing external files, an update may want
>> to calculate and update the SHA-1 of each file.
>> - For efficiency, dropping a specific index while making a large update, and
>> then recreating the index.
>>
>> In my experience, while generating schema updates automatically is handy, it
>> tends to make nontrivial database updates more complicated.  These sorts of
>> things happen often and are an integral part of a database update, so I'm
>> just curious how/if you deal with them.
>>
>> I've used Ruby's migrations, and for my Django databases I use my own
>> migration system which is based in principle off of it: create scripts to
>> migrate the database from version X to X+1 and X-1, and upgrade or downgrade
>> by running the appropriate scripts in sequence.
>>
>> It's not ideal, since it can't generate a database at a specific version
>> directly; it always has to run through the entire sequence of migrations to
>> the version you want, and the migrations accumulate.  However, it can handle
>> whatever arbitrary steps are needed to update a database, and I don't need
>> to test updates from every version to every other version.
>
> You're correct (based on our experience over the past few years).
>
> The big caveat is that 99.9% of the database changes don't fall into those
> "nontrivial" categories, and dbsteward makes those 99.9% of the changes
> easy to do, reliable to reproduce, and easy to track.
>
> We've added some stuff to handle the other .1% as well, like 
> and  where you can put arbitrary SQL strings to be run
> before or after the remainder of the automatic stuff is done.  We probably
> haven't seen every circumstance that needs special handling, but we've
> already struggled through a bunch.
>
> All this is part of the reason we're pushing to get this stuff open-
> sourced.  We feel like we've got something that's pretty far along, and
> we feel that community involvement will help enhance things.
>
> --
> Bill Moran
> http://www.potentialtech.com
> http://people.collaborativefusion.com/~wmoran/
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



Re: [GENERAL] Schema version control

2011-02-10 Thread Alban Hertroys
On 10 Feb 2011, at 23:59, Bill Moran wrote:

> The overview:
> You store your schema and data as XML (this is easy to migrate to, because
> it includes a tool that makes the XML from a live database)
> Keep your XML schema files in some RCS.

That reminds me of something I've been wondering about - How well do modern 
RCSs deal with structured data formats (like XML)? It would appear that most of 
them still use diff, which is line-based with limited context tracking, to 
determine change-sets.

Is that combination guaranteed to result in valid XML if you merge revisions 
that are far enough apart? Or are there RCSs around that know about XML format 
(and possibly other structured formats) and handle it differently?

I've heavily used RCSs (mostly Subversion) with, for example HTML, and merge 
conflicts haven't been unusual. It doesn't help that Subversion's diff is a bit 
simplistic about white-space; I'm sure some of those conflicts were quite 
unnecessary.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.







Re: [GENERAL] Schema version control

2011-02-10 Thread Glenn Maynard
On Fri, Feb 11, 2011 at 12:16 AM, Bill Moran wrote:

> The big caveat is that 99.9% of the database changes don't fall into those
> "nontrivial" categories, and dbsteward makes those 99.9% of the changes
> easy to do, reliable to reproduce, and easy to track.
>

My experience is maybe more like 95% than 99.9%, for what it's worth;
they're the exception, but not rare.

> We've added some stuff to handle the other .1% as well, like 
> and  where you can put arbitrary SQL strings to be run
> before or after the remainder of the automatic stuff is done.  We probably
> haven't seen every circumstance that needs special handling, but we've
> already struggled through a bunch.
>

Here's a fairly common example, in the abstract:

version 1 has two columns, i and j;
version 2 has one column, k, where k = i + j; and
version 3 has one column, x, where x = k * 2

Not only is updating from 1 to 2 tricky ("k = i + j" lies between the adding
of "k" and the removal of i and j; it's neither a "before" nor an
"after"), but updating directly from 1 to 3 without first migrating to 2 is
extremely hard.  I suspect you'd need to snapshot the schema at each version
where these are needed to update incrementally, rather than always trying to
convert directly to the current version--maybe you already do that.
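For what it's worth, the two-step upgrade can be sketched against SQLite as a
stand-in for Postgres (the table and column names follow the example above and
are otherwise hypothetical; the table rebuild stands in for DROP COLUMN so the
sketch runs on older SQLite versions too):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (i INTEGER, j INTEGER)")  # version 1
con.execute("INSERT INTO t VALUES (2, 3)")

def v1_to_v2(con):
    # k = i + j has to run while i, j and k all exist, so it is neither
    # a "before" nor an "after" hook relative to the DDL.
    con.execute("ALTER TABLE t ADD COLUMN k INTEGER")
    con.execute("UPDATE t SET k = i + j")
    # portable column drop: rebuild the table without i and j
    con.execute("CREATE TABLE t_new (k INTEGER)")
    con.execute("INSERT INTO t_new SELECT k FROM t")
    con.execute("DROP TABLE t")
    con.execute("ALTER TABLE t_new RENAME TO t")

def v2_to_v3(con):
    con.execute("ALTER TABLE t ADD COLUMN x INTEGER")
    con.execute("UPDATE t SET x = k * 2")
    con.execute("CREATE TABLE t_new (x INTEGER)")
    con.execute("INSERT INTO t_new SELECT x FROM t")
    con.execute("DROP TABLE t")
    con.execute("ALTER TABLE t_new RENAME TO t")

# Upgrading 1 -> 3 must pass through the intermediate version-2 schema:
for step in (v1_to_v2, v2_to_v3):
    step(con)
print(con.execute("SELECT x FROM t").fetchall())  # -> [(10,)]
```

Skipping v1_to_v2 when coming from version 1 would leave nothing to compute x
from, which is exactly why the per-version snapshots matter.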

Anyhow, just some thoughts based on my own experience with database
updates--good luck.

-- 
Glenn Maynard


Re: [GENERAL] Schema version control

2011-02-10 Thread Bill Moran
In response to Glenn Maynard :

> On Thu, Feb 10, 2011 at 6:44 PM, Bill Moran wrote:
> 
> > dbsteward can do downgrades ... you just feed it the old schema and
> > the new schema in reverse of how you'd do an upgrade ;)
> >
> > Oh, also, it allows us to do installation-specific overrides.  We use
> > this ONLY for DML for lookup lists where some clients have slightly
> > different names for things than others.  In theory, it could do DDL
> > overrides as well, but we decided on a policy of not utilizing that
> > because we wanted the schemas to be consistent on all our installs.
> >
> 
> What about upgrades that can't be derived directly from an inspection of the
> schema?  Some examples:
> 
> - Adding a NOT NULL constraint (without adding a DEFAULT).  You often want
> to precede this with filling in any existing NULL values, so the new
> constraint doesn't fail.
> - Updating triggers, functions and their effects.  For example, when I have
> an FTS index with a trigger to update an index column, and I change the
> underlying trigger, I often do something like "UPDATE table SET column =
> column", to cause all of the update triggers to fire and recalculate the
> index columns.
> - Creating a new column based on an old one, and removing the old one; eg.
> add a column "n", run "UPDATE ... SET n = i*j * 2", and then drop the old
> columns "i" and "j".
> - Updating data from an external source, such as ORM model code; for
> example, if you have a table representing external files, an update may want
> to calculate and update the SHA-1 of each file.
> - For efficiency, dropping a specific index while making a large update, and
> then recreating the index.
> 
> In my experience, while generating schema updates automatically is handy, it
> tends to make nontrivial database updates more complicated.  These sorts of
> things happen often and are an integral part of a database update, so I'm
> just curious how/if you deal with them.
> 
> I've used Ruby's migrations, and for my Django databases I use my own
> migration system which is based in principle off of it: create scripts to
> migrate the database from version X to X+1 and X-1, and upgrade or downgrade
> by running the appropriate scripts in sequence.
> 
> It's not ideal, since it can't generate a database at a specific version
> directly; it always has to run through the entire sequence of migrations to
> the version you want, and the migrations accumulate.  However, it can handle
> whatever arbitrary steps are needed to update a database, and I don't need
> to test updates from every version to every other version.

You're correct (based on our experience over the past few years).

The big caveat is that 99.9% of the database changes don't fall into those
"nontrivial" categories, and dbsteward makes those 99.9% of the changes
easy to do, reliable to reproduce, and easy to track.

We've added some stuff to handle the other .1% as well, like hooks
where you can put arbitrary SQL strings to be run before or after the
remainder of the automatic stuff is done.  We probably haven't seen
every circumstance that needs special handling, but we've already
struggled through a bunch.

All this is part of the reason we're pushing to get this stuff open-
sourced.  We feel like we've got something that's pretty far along, and
we feel that community involvement will help enhance things.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] Schema version control

2011-02-10 Thread Glenn Maynard
On Thu, Feb 10, 2011 at 6:44 PM, Bill Moran wrote:

> dbsteward can do downgrades ... you just feed it the old schema and
> the new schema in reverse of how you'd do an upgrade ;)
>
> Oh, also, it allows us to do installation-specific overrides.  We use
> this ONLY for DML for lookup lists where some clients have slightly
> different names for things than others.  In theory, it could do DDL
> overrides as well, but we decided on a policy of not utilizing that
> because we wanted the schemas to be consistent on all our installs.
>

What about upgrades that can't be derived directly from an inspection of the
schema?  Some examples:

- Adding a NOT NULL constraint (without adding a DEFAULT).  You often want
to precede this with filling in any existing NULL values, so the new
constraint doesn't fail.
- Updating triggers, functions and their effects.  For example, when I have
an FTS index with a trigger to update an index column, and I change the
underlying trigger, I often do something like "UPDATE table SET column =
column", to cause all of the update triggers to fire and recalculate the
index columns.
- Creating a new column based on an old one, and removing the old one; eg.
add a column "n", run "UPDATE ... SET n = i*j * 2", and then drop the old
columns "i" and "j".
- Updating data from an external source, such as ORM model code; for
example, if you have a table representing external files, an update may want
to calculate and update the SHA-1 of each file.
- For efficiency, dropping a specific index while making a large update, and
then recreating the index.

In my experience, while generating schema updates automatically is handy, it
tends to make nontrivial database updates more complicated.  These sorts of
things happen often and are an integral part of a database update, so I'm
just curious how/if you deal with them.

I've used Ruby's migrations, and for my Django databases I use my own
migration system which is based in principle off of it: create scripts to
migrate the database from version X to X+1 and X-1, and upgrade or downgrade
by running the appropriate scripts in sequence.

It's not ideal, since it can't generate a database at a specific version
directly; it always has to run through the entire sequence of migrations to
the version you want, and the migrations accumulate.  However, it can handle
whatever arbitrary steps are needed to update a database, and I don't need
to test updates from every version to every other version.
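A minimal sketch of such a sequential runner, using SQLite and invented
migration bodies (a real system would load the numbered scripts from files
rather than a dict):

```python
import sqlite3

# Hypothetical migrations, keyed by target schema version.
MIGRATIONS = {
    1: "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE customer ADD COLUMN email TEXT",
    3: "CREATE INDEX customer_email ON customer (email)",
}

def current_version(con):
    con.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    row = con.execute("SELECT max(v) FROM schema_version").fetchone()
    return row[0] or 0

def upgrade_to(con, target):
    v = current_version(con)
    while v < target:  # always walk the whole sequence, one step at a time
        v += 1
        con.execute(MIGRATIONS[v])
        con.execute("INSERT INTO schema_version VALUES (?)", (v,))
    con.commit()

con = sqlite3.connect(":memory:")
upgrade_to(con, 3)
print(current_version(con))  # -> 3
```

Running upgrade_to again is a no-op, but there is no way to jump straight to
version 3 without executing versions 1 and 2 first, which is the accumulation
problem described above.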

-- 
Glenn Maynard


Re: [GENERAL] Schema version control

2011-02-10 Thread Royce Ausburn

> 
> Don't the bash scripts get checked in to .../perforce/cvs/svn/git/...?
> Aren't they part of the resources of the project(s)?

Yep - they absolutely are.  The issue is that there are multiple branches 
*potentially* having new scripts committed.  Fortunately it's rare, as the 
release branches seldom require schema changes, but it's an important enough 
problem to need a better solution.

--Royce


Re: [GENERAL] Schema version control

2011-02-10 Thread Royce Ausburn
> 
> So, 10.0 at 10057.
> 11.0 at 11023.
> 
> then 10.1 needs some fixes so db is bumped to 10058.
> 
> Then, later, you can upgrade 10057 to 11023, but you cant get 10058 to 11023.
> 
> Humm... maybe you need smarter upgrade scripts?  Would having logic in the 
> script help?  Something like:
> 
> if not fieldExists('xyz') then alter table ...  add xyz ...
> 
> 
> 
> Or, maybe your schema numbering system is too broad?  Maybe each table could 
> have a version number?
> 
> 
> Or some kinda flags like:
> create table dbver(key text);
> 
> then an update would be named: "add xyz to bob".
> 
> then the update code:
> 
> q = select key from dbver where key = 'add xyz to bob';
> if q.eof then
>   alter table bob add xyz
> 


This is effectively the approach we've been working with so far, but it isn't 
great.  The issue is that you need to be really aware of what changes might or 
might not have been in the previous databases... This can be hard.

There's also the problem of updating data, etc.
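For reference, the flag-table idea quoted above can be made idempotent fairly
directly. This is a rough sketch against SQLite with the hypothetical names
from the quote ("dbver", "bob"); it does not solve the harder data-update
problem, only the "was this change already applied here?" bookkeeping:

```python
import sqlite3

def apply_once(con, key, statements):
    """Run the statements only if this named change hasn't been applied."""
    con.execute("CREATE TABLE IF NOT EXISTS dbver (key TEXT PRIMARY KEY)")
    done = con.execute("SELECT 1 FROM dbver WHERE key = ?", (key,)).fetchone()
    if done:
        return False  # already applied on this install
    for sql in statements:
        con.execute(sql)
    con.execute("INSERT INTO dbver VALUES (?)", (key,))
    con.commit()
    return True

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bob (id INTEGER)")
first = apply_once(con, "add xyz to bob",
                   ["ALTER TABLE bob ADD COLUMN xyz TEXT"])
again = apply_once(con, "add xyz to bob",
                   ["ALTER TABLE bob ADD COLUMN xyz TEXT"])
print(first, again)  # -> True False
```

The awkward part, as noted, is that each change still has to be written to be
safe regardless of which other named changes a given branch has or hasn't seen.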





Re: [GENERAL] Schema version control

2011-02-10 Thread Royce Ausburn

On 11/02/2011, at 9:59 AM, Thomas Kellerer wrote:

> Royce Ausburn wrote on 10.02.2011 22:38:
>> I'm really interested to hear how you guys manage schema upgrades in
>> the face of branches and upgrading from many different versions of
>> the database.
> 
> We are quite happy with Liquibase. You can simply run it against a database 
> and tell it to migrate it to "Version x.y"
> 
> As it keeps track of all changes applied it automatically knows what to do.
> 
> It can handle static data as well as stored procedures and any custom SQL.

Thanks!  Liquibase looks pretty neat.  We'll have to check it out.

--Royce


Re: [GENERAL] Schema version control

2011-02-10 Thread Rob Sargent


On 02/10/2011 04:44 PM, Bill Moran wrote:
> In response to Rob Sargent :
>>
>> On 02/10/2011 03:59 PM, Bill Moran wrote:
>>> In response to Rob Sargent :
>>>> I for one will be waiting to see your dbsteward.  How does it compare
>>>> functionally or stylistically with Ruby's migration tools (which I found
>>>> to be pretty cool and frustrating all in one go).
>>>
>>> I'm not familiar with Ruby's migration tools, so I can't say much.
>>>
>>> The overview:
>>> You store your schema and data as XML (this is easy to migrate to, because
>>> it includes a tool that makes the XML from a live database)
>>> Keep your XML schema files in some RCS.
>>> When it's time for a new deployment, you run the dbsteward tool against
>>> the schema XML and it turns it into DDL and DML.
>>> When it's time for an upgrade, you run the dbsteward tool against two
>>> schema XML files, and it calculates what has changed and generates the
>>> appropriate DDL and DML to upgrade.
>>>
>>> So ... you know, however that compares with the Ruby stuff is how it
>>> does.
>>>
>> Now at the bottom :)
>>
>> It's been a couple years since I played with Ruby ActiveRecord but it's
>> (of course) radically different from what you describe.  The ddl is in the
>> ruby code and naturally the code is in RCS.  So a revision is a new instance
>> of ActiveRecord (iirc) which does the change(s) (create table ttt, alter
>> table vvv etc).  Maybe skip a rev.  Rollback to a rev is definitely
>> there because one writes the undo for each new revision.  This includes
>> manipulating the data of course, so there are limitations.
> 
> dbsteward can do downgrades ... you just feed it the old schema and
> the new schema in reverse of how you'd do an upgrade ;)
> 
> Oh, also, it allows us to do installation-specific overrides.  We use
> this ONLY for DML for lookup lists where some clients have slightly
> different names for things than others.  In theory, it could do DDL
> overrides as well, but we decided on a policy of not utilizing that
> because we wanted the schemas to be consistent on all our installs.
> 
>> I personally am leery of the 'make the prod match the dev db' approach.
>> Who knows what extras lurk in the depths. I think one should be able to
>> make the dev db from scratch and write the necessary scripts to change
>> to (and from if possible) each revision. Apply to prod when tested.
> 
> dbsteward allows us to do all this.  A developer can make a change,
> rebuild a test database from their change to make sure it works, then
> test the upgrade process as well, all before even checking the code in.
> 

Good work.  Will look forward to it.



Re: [GENERAL] Schema version control

2011-02-10 Thread Rob Sargent


On 02/10/2011 02:38 PM, Royce Ausburn wrote:
> Hi all,
> 
> My company is having trouble managing how we upgrade schema changes across 
> many versions of our software.  I imagine this is a common problem and 
> there're probably some neat solutions that we don't know about.  
> 
> For the last 10 years we have been writing bash shell scripts essentially 
> numbered in order db0001, db0002, db0003...  The number represents the schema 
> version which is recorded in the database and updated by the shell scripts.  
> We have a template that provides all the functionality we need, we just copy 
> the script and fill in the blanks.  The schema upgrade scripts are committed 
> to svn along with the software changes, and we have a process when installing 
> the software at a site that runs the scripts on the DB in order before 
> starting up the new version of the software.
> 
> This has worked really well so far.  But  we've effectively only had one 
> version of the software in development at any time.  We're now in the habit 
> of branching the software to form releases to promote stability when making 
> large changes.  The idea is that only really important changes are merged in 
> to the releases.  This introduces a bit of a problem when some change needs 
> to be merged from one release to another.  The typical problem is that we 
> might have two versions of the software 10.0 at schema version 10057 and 11.0 
> at 11023 and we need to merge an important bug fix from schema 11023 in to 
> 10.0.  The issue is that 11023 might depend upon changes introduced in the 
> schema versions before it.  Or 11023 might introduce changes that cause later 
> scripts to break (11000 - 11023) when upgrading from 10.0 to 11.0.
> 
> One potential solution is to require that schema changes are never merged in 
> to a release, but of course sometimes business requires we do =(
> 
> I'm really interested to hear how you guys manage schema upgrades in the face 
> of branches and upgrading from many different versions of the database.
> 
> I've been reading 
> http://pyrseas.wordpress.com/2011/02/07/version-control-part-2-sql-databases/ 
> but I have a feeling that this blog post won't address branches.
> 
> Cheers!
> 
> --Royce
> 
> 

Don't the bash scripts get checked in to .../perforce/cvs/svn/git/...?
Aren't they part of the resources of the project(s)?




Re: [GENERAL] Schema version control

2011-02-10 Thread Thomas Kellerer

Bill Moran wrote on 11.02.2011 00:37:

> Anyway ... based on nothing more than a quick scan of their quickstart
> page, here are the differences I see:
> * Liquibase is dependent on you creating "changesets".  I'm sure this
>   works, but we took a different approach with dbsteward.  dbsteward
>   expects you to maintain XML files that represent the entire database,
>   then dbsteward does the work of figuring out what changed.  Our
>   opinion was that svn already does the work of tracking changes, why
>   reinvent the wheel.

That sounds like a very nice feature.

> * Looks like liquibase requires you to talk to the database to push
>   the changes?  dbsteward outputs a DDL/DML file that you can push
>   in whatever way is best.  This is important to us because we use
>   Slony, and DDL changes have to be submitted through EXECUTE SCRIPT()

No, Liquibase can also emit the SQL that it would execute.

> * dbsteward has built-in Slony support (i.e. it will make slony configs
>   as well as slony upgrade scripts in addition to DDL/DML)

> * liquibase has a lot more supported platforms at this time.  dbsteward
>   only supports PostgreSQL and MSSQL (because that's all that we needed)
>   but I expect that other support will come quickly once we release it.

> * Does liquibase support things like multi-column indexes and multi-
>   column primary keys?  dbsteward does.

Yes, without problems (including, of course, the necessary foreign keys).

> Anyway ... sorry for the teaser on this, but we're trying to get through
> all the hoops the company is requiring us to do to release it, and we
> think we're on track to be ready by PGCon, so there'll be a website up
> as soon as we can get it.

Thanks for the feedback, I would really like to see it.

The approach of not recording the changes but simply letting the software
find them seems like a very nifty feature.
I wonder how you detect renaming a table or a column?

Which programming language is dbsteward based on?

Regards
Thomas






Re: [GENERAL] Schema version control

2011-02-10 Thread Bill Moran
In response to Rob Sargent :
> 
> On 02/10/2011 03:59 PM, Bill Moran wrote:
> > In response to Rob Sargent :
> >> I for one will be waiting to see your dbsteward.  How does it compare
> >> functionally or stylistically with Ruby's migration tools (which I found
> >> to be pretty cool and frustrating all in one go).
> > 
> > I'm not familiar with Ruby's migration tools, so I can't say much.
> > 
> > The overview:
> > You store your schema and data as XML (this is easy to migrate to, because
> > it includes a tool that makes the XML from a live database)
> > Keep your XML schema files in some RCS.
> > When it's time for a new deployment, you run the dbsteward tool against
> > the schema XML and it turns it into DDL and DML.
> > When it's time for an upgrade, you run the dbsteward tool against two
> > schema XML files, and it calculates what has changed and generates the
> > appropriate DDL and DML to upgrade.
> > 
> > So ... you know, however that compares with the Ruby stuff is how it
> > does.
> > 
> Now at the bottom :)
> 
> It's been a couple years since I played with Ruby ActiveRecord but it's
> (of course) radically than what you describe.  The ddl is in the ruby
> code and naturally the code is in RCS.  So a revision is a new instance
> of ActiveRecord (iirc) which does the change(s) (create table ttt, alter
> table vvv etc).  Maybe skip a rev.  Rollback to a rev is definitely
> there because one writes the undo for each new revision.  This include
> manipulating the data of course, so there are limitations.

dbsteward can do downgrades ... you just feed it the old schema and
the new schema in reverse of how you'd do an upgrade ;)

Oh, also, it allows us to do installation-specific overrides.  We use
this ONLY for DML for lookup lists where some clients have slightly
different names for things than others.  In theory, it could do DDL
overrides as well, but we decided on a policy of not utilizing that
because we wanted the schemas to be consistent on all our installs.

> I personally am leary of the 'make the prod match the dev db' approach.
> Who knows what extras lurk in the depths. I think one should be able to
> make the dev db from scratch and write the necessary scripts to change
> to (and from if possible) each revision. Apply to prod when tested.

dbsteward allows us to do all this.  A developer can make a change,
rebuild a test database from their change to make sure it works, then
test the upgrade process as well, all before even checking the code in.
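The diff-two-snapshots idea can be illustrated with a toy model. The snapshot
format below is invented for the sketch (it is not dbsteward's actual XML),
and it ignores type changes, renames, and object dependencies:

```python
# Two hypothetical full-schema snapshots: {table: {column: type}}.
OLD = {"customer": {"id": "integer", "name": "text"}}
NEW = {"customer": {"id": "integer", "name": "text", "email": "text"},
       "invoice":  {"id": "integer", "total": "numeric"}}

def diff(old, new):
    """Emit DDL statements that take a database from `old` to `new`."""
    ddl = []
    for table, cols in new.items():
        if table not in old:
            body = ", ".join(f"{c} {t}" for c, t in cols.items())
            ddl.append(f"CREATE TABLE {table} ({body})")
            continue
        for col, typ in cols.items():
            if col not in old[table]:
                ddl.append(f"ALTER TABLE {table} ADD COLUMN {col} {typ}")
    for table in old:
        if table not in new:
            ddl.append(f"DROP TABLE {table}")
    return ddl

for stmt in diff(OLD, NEW):
    print(stmt)
```

The appeal is that the RCS only ever stores full snapshots, and the
upgrade script falls out of comparing any two of them.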

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] Schema version control

2011-02-10 Thread Bill Moran
In response to Thomas Kellerer :

> Bill Moran wrote on 10.02.2011 23:59:
> > The overview:
> > You store your schema and data as XML (this is easy to migrate to, because
> > it includes a tool that makes the XML from a live database)
> > Keep your XML schema files in some RCS.
> > When it's time for a new deployment, you run the dbsteward tool against
> > the schema XML and it turns it into DDL and DML.
> > When it's time for an upgrade, you run the dbsteward tool against two
> > schema XML files, and it calculates what has changed and generates the
> > appropriate DDL and DML to upgrade.
> 
> This very much sounds like Liquibase. Do you happen to know any differences?

Hrm ... before we started writing dbsteward, we looked around to see
if something already existed and didn't find Liquibase.  This is the
first I've heard of it.  I'm thinking it was some time in 2008, and
according to their changelog, Liquibase was around at that time.  I
wonder how we missed it ...

Anyway ... based on nothing more than a quick scan of their quickstart
page, here are the differences I see:
* Liquibase is dependent on you creating "changesets".  I'm sure this
  works, but we took a different approach with dbsteward.  dbsteward
  expects you to maintain XML files that represent the entire database,
  then dbsteward does the work of figuring out what changed.  Our
  opinion was that svn already does the work of tracking changes, why
  reinvent the wheel.
* Looks like liquibase requires you to talk to the database to push
  the changes?  dbsteward outputs a DDL/DML file that you can push
  in whatever way is best.  This is important to us because we use
  Slony, and DDL changes have to be submitted through EXECUTE SCRIPT()
* dbsteward has built-in Slony support (i.e. it will make slony configs
  as well as slony upgrade scripts in addition to DDL/DML)
* Does liquibase support UDFs?  dbsteward does.
* liquibase has a lot more supported platforms at this time.  dbsteward
  only supports PostgreSQL and MSSQL (because that's all that we needed)
  but I expect that other support will come quickly once we release it.
* Does liquibase support things like multi-column indexes and multi-
  column primary keys?  dbsteward does.

I don't think I should go on and on, as I could ask a lot of questions
about what liquibase does, and I simply don't have the time right now
to research it, or ask all those question ;)

Anyway ... sorry for the teaser on this, but we're trying to get through
all the hoops the company is requiring us to do to release it, and we
think we're on track to be ready by PGCon, so there'll be a website up
as soon as we can get it.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] Schema version control

2011-02-10 Thread Andy Chambers
On Thu, 10 Feb 2011 17:59:30 -0500, Bill Moran wrote:

> The overview:
> You store your schema and data as XML (this is easy to migrate to, because
> it includes a tool that makes the XML from a live database)

We're doing a similar thing here except we're a Lisp shop so our
schema is defined as a set of "defentities" and we can migrate from
one version to another using a corresponding set of "defmaps".

> Keep your XML schema files in some RCS.
> When it's time for a new deployment, you run the dbsteward tool against
> the schema XML and it turns it into DDL and DML.
> When it's time for an upgrade, you run the dbsteward tool against two
> schema XML files, and it calculates what has changed and generates the
> appropriate DDL and DML to upgrade.

This sounds pretty cool.  Ours doesn't do that yet but that's next
on my TODO list.

--
Andy Chambers



Re: [GENERAL] Schema version control

2011-02-10 Thread Rob Sargent


On 02/10/2011 03:59 PM, Bill Moran wrote:
> In response to Rob Sargent :
> 
>> Top-posting is frowned upon by some (not me), but since Bill started it...
> 
> Oops ... the weird thing is that I'm usually really anal about not top-
> posting ...
> 
>> I for one will be waiting to see your dbsteward.  How does it compare
>> functionally or stylistically with Ruby's migration tools (which I found
>> to be pretty cool and frustrating all in one go).
> 
> I'm not familiar with Ruby's migration tools, so I can't say much.
> 
> The overview:
> You store your schema and data as XML (this is easy to migrate to, because
> it includes a tool that makes the XML from a live database)
> Keep your XML schema files in some RCS.
> When it's time for a new deployment, you run the dbsteward tool against
> the schema XML and it turns it into DDL and DML.
> When it's time for an upgrade, you run the dbsteward tool against two
> schema XML files, and it calculates what has changed and generates the
> appropriate DDL and DML to upgrade.
> 
> So ... you know, however that compares with the Ruby stuff is how it
> does.
> 
Now at the bottom :)

It's been a couple years since I played with Ruby ActiveRecord but it's
(of course) radically different from what you describe.  The ddl is in the
ruby code and naturally the code is in RCS.  So a revision is a new instance
of ActiveRecord (iirc) which does the change(s) (create table ttt, alter
table vvv etc).  Maybe skip a rev.  Rollback to a rev is definitely
there because one writes the undo for each new revision.  This includes
manipulating the data of course, so there are limitations.

I personally am leery of the 'make the prod match the dev db' approach.
Who knows what extras lurk in the depths. I think one should be able to
make the dev db from scratch and write the necessary scripts to change
to (and from if possible) each revision. Apply to prod when tested.




Re: [GENERAL] Schema version control

2011-02-10 Thread Thomas Kellerer

Bill Moran wrote on 10.02.2011 23:59:

> The overview:
> You store your schema and data as XML (this is easy to migrate to, because
> it includes a tool that makes the XML from a live database)
> Keep your XML schema files in some RCS.
> When it's time for a new deployment, you run the dbsteward tool against
> the schema XML and it turns it into DDL and DML.
> When it's time for an upgrade, you run the dbsteward tool against two
> schema XML files, and it calculates what has changed and generates the
> appropriate DDL and DML to upgrade.

This very much sounds like Liquibase. Do you happen to know any differences?

Regards
Thomas




Re: [GENERAL] Schema version control

2011-02-10 Thread Bill Moran
In response to Rob Sargent :

> Top-posting is frowned upon by some (not me), but since Bill started it...

Oops ... the weird thing is that I'm usually really anal about not top-
posting ...

> I for one will be waiting to see your dbsteward.  How does it compare
> functionally or stylistically with Ruby's migration tools (which I found
> to be pretty cool and frustrating all in one go).

I'm not familiar with Ruby's migration tools, so I can't say much.

The overview:
You store your schema and data as XML (this is easy to migrate to, because
it includes a tool that makes the XML from a live database)
Keep your XML schema files in some RCS.
When it's time for a new deployment, you run the dbsteward tool against
the schema XML and it turns it into DDL and DML.
When it's time for an upgrade, you run the dbsteward tool against two
schema XML files, and it calculates what has changed and generates the
appropriate DDL and DML to upgrade.

So ... you know, however that compares with the Ruby stuff is how it
does.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] Schema version control

2011-02-10 Thread Thomas Kellerer

Royce Ausburn wrote on 10.02.2011 22:38:

> I'm really interested to hear how you guys manage schema upgrades in
> the face of branches and upgrading from many different versions of
> the database.

We are quite happy with Liquibase. You can simply run it against a database and 
tell it to migrate it to "Version x.y"

As it keeps track of all changes applied it automatically knows what to do.

It can handle static data as well as stored procedures and any custom SQL.

Regards
Thomas





Re: [GENERAL] Schema version control

2011-02-10 Thread Bill Moran
In response to Andy Colson :

> On 2/10/2011 4:18 PM, Bill Moran wrote:
> >
> > We have this kickass solution we built at work called dbsteward that
> > just takes care of all of this for us, automatically.  You just give
> > it the new version and the old version and it generates update statements
> > to feed into PG.
> >
> > The reason I'm bringing this up is that we're working to release
> > dbsteward as open source for PGCon.  So, if you can wait a bit, you
> > can jump on that train.
> >
> > In response to Andy Colson:
> >
> 
> But... that assumes all updates are DDL only?  What if I have a lookup 
> table, and want to add some new entries into it?

It has provisions for maintaining static data as well.  We have a bunch
of lookup tables (too many, in my opinion) and it does an excellent job
of maintaining them.  They just need to have a primary key, but that's
not usually a problem with lookup tables.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] Schema version control

2011-02-10 Thread Rob Sargent
Top-posting is frowned upon by some (not me), but since Bill started it...

I for one will be waiting to see your dbsteward.  How does it compare
functionally or stylistically with Ruby's migration tools (which I found
to be pretty cool and frustrating all in one go).

On 02/10/2011 03:18 PM, Bill Moran wrote:
> 
> We have this kickass solution we built at work called dbsteward that
> just takes care of all of this for us, automatically.  You just give
> it the new version and the old version and it generates update statements
> to feed into PG.
> 
> The reason I'm bringing this up is that we're working to release
> dbsteward as open source for PGCon.  So, if you can wait a bit, you
> can jump on that train.
> 
> In response to Andy Colson :
> 
>> On 2/10/2011 3:38 PM, Royce Ausburn wrote:
>>> Hi all,
>>>
>>> My company is having trouble managing how we upgrade schema changes across 
>>> many versions of our software.  I imagine this is a common problem and 
>>> there're probably some neat solutions that we don't know about.
>>>
>>> For the last 10 years we have been writing bash shell scripts essentially 
>>> numbered in order db0001, db0002, db0003...  The number represents the 
>>> schema version which is recorded in the database and updated by the shell 
>>> scripts.  We have a template that provides all the functionality we need, 
>>> we just copy the script and fill in the blanks.  The schema upgrade scripts 
>>> are committed to svn along with the software changes, and we have a process 
>>> when installing the software at a site that runs the scripts on the DB in 
>>> order before starting up the new version of the software.
>>>
>>> This has worked really well so far.  But  we've effectively only had one 
>>> version of the software in development at any time.  We're now in the habit 
>>> of branching the software to form releases to promote stability when making 
>>> large changes.  The idea is that only really important changes are merged 
>>> in to the releases.  This introduces a bit of a problem when some change 
>>> needs to be merged from one release to another.  The typical problem is 
>>> that we might have two versions of the software 10.0 at schema version 
>>> 10057 and 11.0 at 11023 and we need to merge an important bug fix from 
>>> schema 11023 in to 10.0.  The issue is that 11023 might depend upon changes 
>>> introduced in the schema versions before it.  Or 11023 might introduce 
>>> changes that cause later scripts to break (11000 - 11023) when upgrading 
>>> from 10.0 to 11.0.
>>>
>>> One potential solution is to require that schema changes are never merged 
>>> in to a release, but of course sometimes business requires we do =(
>>>
>>> I'm really interested to hear how you guys manage schema upgrades in the 
>>> face of branches and upgrading from many different versions of the database.
>>>
>>> I've been reading 
>>> http://pyrseas.wordpress.com/2011/02/07/version-control-part-2-sql-databases/
>>>  but I have a feeling that this blog post won't address branches.
>>>
>>> Cheers!
>>>
>>> --Royce
>>>
>>>
>>
>> So, 10.0 at 10057.
>> 11.0 at 11023.
>>
>> then 10.1 needs some fixes so db is bumped to 10058.
>>
>> Then, later, you can upgrade 10057 to 11023, but you can't get 10058 to 
>> 11023.
>>
>> Humm... maybe you need smarter upgrade scripts?  Would having logic in 
>> the script help?  Something like:
>>
>> if not fieldExists('xyz') then alter table ...  add xyz ...
>>
>>
>>
>> Or, maybe your schema numbering system is too broad?  Maybe each table 
>> could have a version number?
>>
>>
>> Or some kinda flags like:
>> create table dbver(key text);
>>
>> then an update would be named: "add xyz to bob".
>>
>> then the update code:
>>
>> q = select key from dbver where key = 'add xyz to bob';
>> if q.eof then
>>  alter table bob add xyz
>>
>>
>> -Andy
>>
> 
> 



Re: [GENERAL] Schema version control

2011-02-10 Thread Andy Colson

On 2/10/2011 4:18 PM, Bill Moran wrote:


We have this kickass solution we built at work called dbsteward that
just takes care of all of this for us, automatically.  You just give
it the new version and the old version and it generates update statements
to feed into PG.

The reason I'm bringing this up is that we're working to release
dbsteward as open source for PGCon.  So, if you can wait a bit, you
can jump on that train.

In response to Andy Colson:



But... that assumes all updates are DDL only?  What if I have a lookup 
table, and want to add some new entries into it?


-Andy



Re: [GENERAL] Schema version control

2011-02-10 Thread Bill Moran

We have this kickass solution we built at work called dbsteward that
just takes care of all of this for us, automatically.  You just give
it the new version and the old version and it generates update statements
to feed into PG.

The reason I'm bringing this up is that we're working to release
dbsteward as open source for PGCon.  So, if you can wait a bit, you
can jump on that train.

In response to Andy Colson :

> On 2/10/2011 3:38 PM, Royce Ausburn wrote:
> > Hi all,
> >
> > My company is having trouble managing how we upgrade schema changes across 
> > many versions of our software.  I imagine this is a common problem and 
> > there're probably some neat solutions that we don't know about.
> >
> > For the last 10 years we have been writing bash shell scripts essentially 
> > numbered in order db0001, db0002, db0003 The number represents the 
> > schema version which is recorded in the database and updated by the shell 
> > scripts.  We have a template that provides all the functionality we need, 
> > we just copy the script and fill in the blanks.  The schema upgrade scripts 
> > are committed to svn along with the software changes, and we have a process 
> > when installing the software at a site that runs the scripts on the DB in 
> > order before starting up the new version of the software.
> >
> > This has worked really well so far.  But  we've effectively only had one 
> > version of the software in development at any time.  We're now in the habit 
> > of branching the software to form releases to promote stability when making 
> > large changes.  The idea is that only really important changes are merged 
> > in to the releases.  This introduces a bit of a problem when some change 
> > needs to be merged from one release to another.  The typical problem is 
> > that we might have two versions of the software 10.0 at schema version 
> > 10057 and 11.0 at 11023 and we need to merge an important bug fix from 
> > schema 11023 in to 10.0.  The issue is that 11023 might depend upon changes 
> > introduced in the schema versions before it.  Or 11023 might introduce 
> > changes that cause later scripts to break (11000 - 11023) when upgrading 
> > from 10.0 to 11.0.
> >
> > One potential solution is to require that schema changes are never merged 
> > in to a release, but of course sometimes business requires we do =(
> >
> > I'm really interested to hear how you guys manage schema upgrades in the 
> > face of branches and upgrading from many different versions of the database.
> >
> > I've been reading 
> > http://pyrseas.wordpress.com/2011/02/07/version-control-part-2-sql-databases/
> >  but I have a feeling that this blog post won't address branches.
> >
> > Cheers!
> >
> > --Royce
> >
> >
> 
> So, 10.0 at 10057.
> 11.0 at 11023.
> 
> then 10.1 needs some fixes so db is bumped to 10058.
> 
> Then, later, you can upgrade 10057 to 11023, but you can't get 10058 to 
> 11023.
> 
> Humm... maybe you need smarter upgrade scripts?  Would having logic in 
> the script help?  Something like:
> 
> if not fieldExists('xyz') then alter table ...  add xyz ...
> 
> 
> 
> Or, maybe your schema numbering system is too broad?  Maybe each table 
> could have a version number?
> 
> 
> Or some kinda flags like:
> create table dbver(key text);
> 
> then an update would be named: "add xyz to bob".
> 
> then the update code:
> 
> q = select key from dbver where key = 'add xyz to bob';
> if q.eof then
>   alter table bob add xyz
> 
> 
> -Andy
> 


-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] Schema version control

2011-02-10 Thread Andy Colson

On 2/10/2011 4:14 PM, Andy Colson wrote:

On 2/10/2011 3:38 PM, Royce Ausburn wrote:

Hi all,

My company is having trouble managing how we upgrade schema changes
across many versions of our software. I imagine this is a common
problem and there're probably some neat solutions that we don't know
about.

For the last 10 years we have been writing bash shell scripts
essentially numbered in order db0001, db0002, db0003 The number
represents the schema version which is recorded in the database and
updated by the shell scripts. We have a template that provides all the
functionality we need, we just copy the script and fill in the blanks.
The schema upgrade scripts are committed to svn along with the
software changes, and we have a process when installing the software
at a site that runs the scripts on the DB in order before starting up
the new version of the software.

This has worked really well so far. But we've effectively only had one
version of the software in development at any time. We're now in the
habit of branching the software to form releases to promote stability
when making large changes. The idea is that only really important
changes are merged in to the releases. This introduces a bit of a
problem when some change needs to be merged from one release to
another. The typical problem is that we might have two versions of the
software 10.0 at schema version 10057 and 11.0 at 11023 and we need to
merge an important bug fix from schema 11023 in to 10.0. The issue is
that 11023 might depend upon changes introduced in the schema versions
before it. Or 11023 might introduce changes that cause later scripts
to break (11000 - 11023) when upgrading from 10.0 to 11.0.

One potential solution is to require that schema changes are never
merged in to a release, but of course sometimes business requires we
do =(

I'm really interested to hear how you guys manage schema upgrades in
the face of branches and upgrading from many different versions of the
database.

I've been reading
http://pyrseas.wordpress.com/2011/02/07/version-control-part-2-sql-databases/
but I have a feeling that this blog post won't address branches.

Cheers!

--Royce




So, 10.0 at 10057.
11.0 at 11023.

then 10.1 needs some fixes so db is bumped to 10058.

Then, later, you can upgrade 10057 to 11023, but you can't get 10058 to
11023.

Humm... maybe you need smarter upgrade scripts? Would having logic in
the script help? Something like:

if not fieldExists('xyz') then alter table ... add xyz ...



Or, maybe your schema numbering system is too broad? Maybe each table
could have a version number?


Or some kinda flags like:
create table dbver(key text);

then an update would be named: "add xyz to bob".

then the update code:

q = select key from dbver where key = 'add xyz to bob';
if q.eof then
alter table bob add xyz


-Andy



D'oh!  a bug in my update script:


q = select key from dbver where key = 'add xyz to bob';
if q.eof then
alter table bob add xyz
insert into dbver(key) values ('add xyz to bob');

How embarrassing :-)

-Andy
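
Andy's corrected pattern can be wrapped in a single PL/pgSQL block. This is a sketch added by the editor, not part of the original post; the table and key names come from the example above, and it assumes PostgreSQL 9.0+ for `DO`:

```sql
-- Key-based update pattern: apply the change only if its marker row
-- is absent, and record the marker in the same transaction.
DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM dbver WHERE key = 'add xyz to bob') THEN
        ALTER TABLE bob ADD COLUMN xyz integer;
        INSERT INTO dbver(key) VALUES ('add xyz to bob');
    END IF;
END
$$;
```

Because DDL is transactional in PostgreSQL, running this block twice is harmless: the second run sees the dbver row and skips the ALTER.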





Re: [GENERAL] Schema version control

2011-02-10 Thread Andy Colson

On 2/10/2011 3:38 PM, Royce Ausburn wrote:

Hi all,

My company is having trouble managing how we upgrade schema changes across many 
versions of our software.  I imagine this is a common problem and there're 
probably some neat solutions that we don't know about.

For the last 10 years we have been writing bash shell scripts essentially 
numbered in order db0001, db0002, db0003 The number represents the schema 
version which is recorded in the database and updated by the shell scripts.  We 
have a template that provides all the functionality we need, we just copy the 
script and fill in the blanks.  The schema upgrade scripts are committed to svn 
along with the software changes, and we have a process when installing the 
software at a site that runs the scripts on the DB in order before starting up 
the new version of the software.

This has worked really well so far.  But  we've effectively only had one 
version of the software in development at any time.  We're now in the habit of 
branching the software to form releases to promote stability when making large 
changes.  The idea is that only really important changes are merged in to the 
releases.  This introduces a bit of a problem when some change needs to be 
merged from one release to another.  The typical problem is that we might have 
two versions of the software 10.0 at schema version 10057 and 11.0 at 11023 and 
we need to merge an important bug fix from schema 11023 in to 10.0.  The issue 
is that 11023 might depend upon changes introduced in the schema versions 
before it.  Or 11023 might introduce changes that cause later scripts to break 
(11000 - 11023) when upgrading from 10.0 to 11.0.

One potential solution is to require that schema changes are never merged in to 
a release, but of course sometimes business requires we do =(

I'm really interested to hear how you guys manage schema upgrades in the face 
of branches and upgrading from many different versions of the database.

I've been reading 
http://pyrseas.wordpress.com/2011/02/07/version-control-part-2-sql-databases/ 
but I have a feeling that this blog post won't address branches.

Cheers!

--Royce




So, 10.0 at 10057.
11.0 at 11023.

then 10.1 needs some fixes so db is bumped to 10058.

Then, later, you can upgrade 10057 to 11023, but you can't get 10058 to 
11023.


Humm... maybe you need smarter upgrade scripts?  Would having logic in 
the script help?  Something like:


if not fieldExists('xyz') then alter table ...  add xyz ...



Or, maybe your schema numbering system is too broad?  Maybe each table 
could have a version number?



Or some kinda flags like:
create table dbver(key text);

then an update would be named: "add xyz to bob".

then the update code:

q = select key from dbver where key = 'add xyz to bob';
if q.eof then
alter table bob add xyz


-Andy
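
The hypothetical fieldExists() test above maps directly onto information_schema. A hedged sketch (editor's addition; schema, table, column names, and the integer type are assumptions):

```sql
-- Conditional DDL: add bob.xyz only if it is not already there.
-- Equivalent to "if not fieldExists('xyz') then alter table ..." above.
DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1
        FROM information_schema.columns
        WHERE table_schema = 'public'
          AND table_name   = 'bob'
          AND column_name  = 'xyz'
    ) THEN
        ALTER TABLE bob ADD COLUMN xyz integer;
    END IF;
END
$$;
```

Later PostgreSQL releases (9.6+) also accept `ALTER TABLE bob ADD COLUMN IF NOT EXISTS xyz integer;`, which did not exist at the time of this thread.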



Re: [GENERAL] Schema manipulation from plpgsql

2010-12-10 Thread Guillaume Lelarge
Le 10/12/2010 16:01, Gevik Babakhani a écrit :
> I was wondering if there are any schema manipulation statements which
> are not allowed from within a PL/PGSQL function. (Except from
> create/drop a database)
> 

create/drop tablespace

They are the only exception AFAICT.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [GENERAL] Schema tool

2010-11-15 Thread Aram Fingal

On Nov 11, 2010, at 4:56 PM, Thomas Kellerer wrote:
> 
> Actually I think it would be worthwhile documenting your experience in the 
> PostgreSQL Wiki as well:
> 
> http://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools

Thanks, I will post something there after I have done some more investigation.  
I notice that some of the ones mentioned here are on there but with only a line 
or two of description.  I'll also see about putting something about PGnJ.  
That's a free, generic query tool which I have been using with PostgreSQL.


-Aram

Re: [GENERAL] Schema tool

2010-11-15 Thread Aram Fingal
Thanks, each of you for all the suggestions on schema generating tools.  The 
idea is to have something which will connect to the database and automatically 
make a schema from what you've got.  Here's what I have had a chance to 
evaluate so far...

DBVisualizer - It does a good job with the default placement of tables and 
links and some nice features for what to include/exclude from the diagram.   
The ability to select specific tables is very handy since I have a few tables 
which I plan to get rid of as soon as I have the data moved to a different 
database.  Also, part of the point of what I'm doing is to support certain 
other people who will want to write queries.  They each have specific interests 
and don't need all the tables.  I will probably make more than one simplified 
version of the schema, each focused on a specific set of data.  

Drawbacks:  It missed one foreign key relationship.  I don't know why but that 
link is missing from the diagram.  SQL Power Architect (see below) did see and 
diagram this relationship.  Links get a little jumbled when I move things 
around a little and then switch back and forth between different kinds of views 
(hierarchic, organic, orthogonal, etc.)  That's not too difficult to fix 
because all you have to do is drag each table slightly and the links get 
redrawn in a more optimal way.  It doesn't have the crow's foot (or whatever 
you call it) to indicate a many-to-one relationship.  I suppose that's not a 
big deal since the relationships are pretty obvious in my database, with each 
foreign key link going to the primary key of another table.  One very minor 
thing is that there are some places where I want to indicate that a foreign key 
is specified with "on update cascade." I will have to export to graphics and 
add that note to the diagram.  

The one big thing is that it displays views as disconnected objects.  Views are 
very important for this project and it would be good to indicate where their 
contents are coming from.   I'm not sure how best to diagram that since many of 
the view columns are taking several columns from various tables and performing 
a mathematical computation.  To understand what is in each column of the view, 
you need to see both the inputs and the formula.  

SQL Power Architect - also free and open source - This was not actually 
suggested on this list but some of the suggestions led me to the right key 
words to do another search and find it.  As I understand, the main point of 
this tool is to transfer data from one database installation to another.  For 
example, if you want to migrate data from MS SQL Server to PostgreSQL, this is 
a tool you should look at.  Deriving a schema is just one element of this.  You 
connect to the source database and it reverse engineers it for you and shows 
you the results.  You make changes and selections of what you want to move, 
etc.  Then you hit "Forward Engineer" and it moves the data to your target, 
according to your edited schema.  The advantage is that it shows lots of 
information and looks to be good for figuring out environments with lots of 
different kinds of databases all over the place. It has more features to change 
appearance of the diagrams than DbVisualizer does, including colors of the text 
and the table, itself.  It can export to HTML.  

Drawbacks:  It does not put the tables in a convenient layout by default.  You 
can't easily exclude objects but you can delete them from the diagram.  Links 
point to any place on the tables and not the specific columns which they refer 
to/from.  You can drag the links around so that they do point to the exact 
column but this can be frustrating drudge work since each drag moves both ends 
of the link and you can easily mess up corrections you made earlier.  For some 
reason it didn't detect the correct datatypes for many columns and just put "CLOB" 
where it should be INTEGER, DATE, etc.  SQL Power Architect also displays views 
as disconnected objects.

SchemaSpy looks to be a good option since they mention that they had to do some 
custom work to support views but I'm having trouble getting it to work at this 
point.

dbWrench by Nizana looks interesting but it's commercial and I want to see if a 
free option will work first.  I'm not totally against spending money but I have 
seen situations where free stuff ends up being as good or better than 
commercial.  

MicroOLAP Database Designer and PostgreSQL Maestro are Windows only, unless I 
missed something.  I do have both WINE and a Windows VM under VirtualBox but I 
would prefer something Mac native.  

I haven't had a chance to check out Mogwai yet.

-Aram


Re: [GENERAL] Schema tool

2010-11-11 Thread Dann Corbit
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Marc Mamin
> Sent: Thursday, November 11, 2010 11:46 AM
> To: Aram Fingal; Postgres-General General
> Subject: Re: [GENERAL] Schema tool
> 
> Hello,
> 
> may schemaspy help you ?
> http://schemaspy.sourceforge.net/sample/relationships.html

This thing is also nice:
http://sourceforge.net/projects/mogwai/




Re: [GENERAL] Schema tool

2010-11-11 Thread Tom Lane
Thomas Kellerer  writes:
> Aram Fingal wrote on 11.11.2010 22:45:
>> I was thinking of reporting back to this forum with
>> advantages/disadvantages of each tool, as I see it, but realized that
>> I was rapidly getting too far off topic for a list focused
>> specifically on PostgreSQL.

> I don't think this would be off-topic here if you post your experience using 
> those tools together with PostgreSQL.

Agreed, that seems well within the list's area of interest.  If you told
us about some tool that couldn't be used with PG, maybe we'd get
impatient.

regards, tom lane



Re: [GENERAL] Schema tool

2010-11-11 Thread Thomas Kellerer

Aram Fingal wrote on 11.11.2010 22:45:

I was thinking of reporting back to this forum with
advantages/disadvantages of each tool, as I see it, but realized that
I was rapidly getting too far off topic for a list focused
specifically on PostgreSQL.


I don't think this would be off-topic here if you post your experience using 
those tools together with PostgreSQL.

Actually I think it would be worthwhile documenting your experience in the 
PostgreSQL Wiki as well:

http://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools

Regards
Thomas




Re: [GENERAL] Schema tool

2010-11-11 Thread Aram Fingal
Thanks, each of you for all the suggestions on schema generating tools.  I 
haven't had a chance to evaluate them all yet but DBVisualizer looks pretty 
good.  In the meanwhile I found SQL Power Architect, which is also free/open 
source, and can do this kind of diagraming but is not as good as DBVisualizer 
for my purposes.  It has some other interesting features which would be of 
interest to someone working in an environment with several different kinds of 
databases.  SchemaSpy looks to be a good option since the description mentions 
that they had to implement product-specific queries to support views but I 
haven't had a chance to try it out yet.  Support for views is important to me 
and DBVisualizer and the other tools I have looked at so far, just display them 
as disconnected objects.  

I was thinking of reporting back to this forum with advantages/disadvantages of 
each tool, as I see it, but realized that I was rapidly getting too far off 
topic for a list focused specifically on PostgreSQL.  

-Aram


Re: [GENERAL] Schema tool

2010-11-11 Thread Gary Chambers
Marc,

> may schemaspy help you ?
> http://schemaspy.sourceforge.net/sample/relationships.html

Thank you *VERY* much for suggesting this tool!

-- Gary Chambers

/* Nothing fancy and nothing Microsoft! */



Re: [GENERAL] Schema tool

2010-11-11 Thread Jeff Ross

On 11/11/10 12:45, Marc Mamin wrote:

Hello,

may schemaspy help you ?
http://schemaspy.sourceforge.net/sample/relationships.html

HTH,

Marc Mamin


Thanks for this link!  I've been looking for something that can run on 
the command line for quite a while, and even better it outputs to html 
and shows me some, um, interesting things in my database that probably 
need addressing.


Warm regards from snowy Wyoming!

Jeff Ross



Re: [GENERAL] Schema tool

2010-11-11 Thread Marc Mamin
Hello,

may schemaspy help you ? 
http://schemaspy.sourceforge.net/sample/relationships.html

HTH,

Marc Mamin

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Aram Fingal
Sent: Donnerstag, 11. November 2010 17:51
To: Postgres-General General
Subject: [GENERAL] Schema tool

A while back, I thought I remembered seeing a Mac OS X client for
PostgreSQL which had a feature where it would display a graphic schema
of whatever database you connect to but I can't seem to find it again
(web searching.)  I did come across one post which said that this was
a planned feature for pgAdmin.  I need to do a formal schema of a
database which I have in development and it would be nice to have an
automated tool, whether it connects to the database or uses SQL table
definition code to generate graphics.  Is there such a thing?  if not,
what do you all use?  OpenOffice Draw? OmniGraffle?  

-Aram



Re: [GENERAL] Schema tool

2010-11-11 Thread Guillaume Lelarge
Le 11/11/2010 18:58, Rob Sargent a écrit :
> 
> 
> On 11/11/2010 09:50 AM, Aram Fingal wrote:
>> A while back, I thought I remembered seeing a Mac OS X client for PostgreSQL 
>> which had a feature where it would display a graphic schema of whatever 
>> database you connect to but I can't seem to find it again (web searching.)   
>>  I did come across one post which said that this was a planned feature for 
>> pgAdmin.  I need to do a formal schema of a database which I have in 
>> development and it would be nice to have an automated tool, whether it 
>> connects to the database or uses SQL table definition code to generate 
>> graphics.  Is there such a thing?  if not, what do you all use?  OpenOffice 
>> Draw? OmniGraffle?  
>>
>> -Aram
> 
> DbVisualizer has a free and a commercial release and can do a decent job
> of diagramming a schema.  Nothing compared to Embarcadero, but not as
> pricey either.
> 

I actually use DBVisualizer when I need to get a graphical view of an
existing database.

BTW, this is a planned feature of pgAdmin. We have an interesting patch
from a GSoC student (Luis Ochoa), but it still needs (a lot of) work. I
still hope to include it for the next release.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com



Re: [GENERAL] Schema tool

2010-11-11 Thread Rob Sargent


On 11/11/2010 09:50 AM, Aram Fingal wrote:
> A while back, I thought I remembered seeing a Mac OS X client for PostgreSQL 
> which had a feature where it would display a graphic schema of whatever 
> database you connect to but I can't seem to find it again (web searching.)
> I did come across one post which said that this was a planned feature for 
> pgAdmin.  I need to do a formal schema of a database which I have in 
> development and it would be nice to have an automated tool, whether it 
> connects to the database or uses SQL table definition code to generate 
> graphics.  Is there such a thing?  if not, what do you all use?  OpenOffice 
> Draw? OmniGraffle?  
> 
> -Aram

DbVisualizer has a free and a commercial release and can do a decent job
of diagramming a schema.  Nothing compared to Embarcadero, but not as
pricey either.



Re: [GENERAL] Schema tool

2010-11-11 Thread Dmitriy Igrishin
Hey Aram,

I recommend dbWrench by Nizana. It has nice synchronization capabilities,
forward/reverse engineering, and supports many built-in PostgreSQL types
and user-defined types as well.
NB: it is commercial application.

Another alternative is a MicroOLAP Database Designer.

Neither tool requires a running PostgreSQL server.

If you wish to work with the database directly you may look at tools like
PostgreSQL Maestro.

Good luck!

2010/11/11 Aram Fingal 

> A while back, I thought I remembered seeing a Mac OS X client for
> PostgreSQL which had a feature where it would display a graphic schema of
> whatever database you connect to but I can't seem to find it again (web
> searching.)  I did come across one post which said that this was a planned
> feature for pgAdmin.  I need to do a formal schema of a database which I
> have in development and it would be nice to have an automated tool, whether
> it connects to the database or uses SQL table definition code to generate
> graphics.  Is there such a thing?  if not, what do you all use?  OpenOffice
> Draw? OmniGraffle?
>
> -Aram
>



-- 
// Dmitriy.


Re: [GENERAL] Schema search path

2010-09-13 Thread Yaroslav Tykhiy

On 14/09/2010, at 8:56 AM, Tom Lane wrote:


Bruce Momjian  writes:

Yaroslav Tykhiy wrote:

SELECT * FROM foo.bar WHERE bar.a=1;
   ^^^ this means foo.bar



Do you think it's a feature or a bug? :-)



Feature, and SQL-standard behavior.


It might be worth pointing out that this has nothing to do with
search_path; rather, the key is that the FROM clause establishes
a table alias "bar" for the query.


Sure, that makes sense because it just extends the well-known aliasing  
for unqualified column names, as in "SELECT a FROM foo", to table  
names as well.  But a remark on this feature in the schema-related  
documentation pages would be a good idea IMHO.


Thanks!

Yar



Re: [GENERAL] Schema search path

2010-09-13 Thread Tom Lane
Bruce Momjian  writes:
> Yaroslav Tykhiy wrote:
>> SELECT * FROM foo.bar WHERE bar.a=1;
>> ^^^ this means foo.bar

>> Do you think it's a feature or a bug? :-)

> Feature, and SQL-standard behavior.

It might be worth pointing out that this has nothing to do with
search_path; rather, the key is that the FROM clause establishes
a table alias "bar" for the query.

regards, tom lane
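
Tom's point can be demonstrated directly (editor's sketch; the schema and table names come from the example, the integer column is an assumption):

```sql
CREATE SCHEMA foo;
CREATE TABLE foo.bar (a integer);
CREATE TABLE public.bar (a integer);

-- "bar" below is the table alias introduced by the FROM clause,
-- so it refers to foo.bar regardless of search_path:
SELECT * FROM foo.bar WHERE bar.a = 1;

-- The query is shorthand for the explicitly aliased form:
SELECT * FROM foo.bar AS bar WHERE bar.a = 1;

-- To reach public.bar in the same query, give each table its own alias:
SELECT p.* FROM foo.bar AS f, public.bar AS p WHERE f.a = p.a;
```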



Re: [GENERAL] Schema search path

2010-09-13 Thread Bruce Momjian
Yaroslav Tykhiy wrote:
> Hi there,
> 
> Sorry but I've got yet another issue to discuss today, this time that  
> on schema search path.  In fact it may not be a bug, but it may be  
> worth a note in the documentation.
> 
> It seems that if the table in SELECT FROM has an explicit schema  
> specifier, further references to the same table name will implicitly  
> inherit it.  E.g., this query will be valid because the second  
> reference will be to foo.bar not public.bar:
> 
> SELECT * FROM foo.bar WHERE bar.a=1;
>  ^^^ this means foo.bar

No one has ever complained about this before.

> As just shown, this can be even more confusing with nested queries.
> 
> Do you think it's a feature or a bug? :-)

Feature, and SQL-standard behavior.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [GENERAL] schema proxying virtual database

2009-09-13 Thread Rajesh Kumar Mallah
Thanks, that sounds like a really easy & neat solution.

On Sun, Sep 13, 2009 at 11:02 PM, Stefan Kaltenbrunner
 wrote:

> Rajesh Kumar Mallah wrote:
>
>> Dear List,
>>
>> There are many opensource applications that support postgresql
>> (eg , gforge , bricolage , dspam ..) but does not use schemas(namespaces)
>> as a result of which you are forced to use/create a new database and lose
>> the
>> advantage of linking the application data with your existing database.
>> as no cross-database queries can be done in PG.
>>
>>
>> my question is , is it a feasible idea to have some special kind of
>> database
>> in the postgresql cluster that mimics a schema of an existsing database.
>>
>> say rt3 is a special database that links to existing maindb databases' rt3
>> namespace then  any DDL / DML done to public schema of rt3 database
>> is redirected to rt3 namespace of the existing database maindb's rt3
>> schema.
>>
>
> well in most cases it is enough to give those apps their own user and
> set the default search_path for that user to its own schema. That way
> you usually don't have to modify the app at all and still get it (more or
> less) running in its own schema.
>
>
> Stefan
>
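
Stefan's suggestion amounts to a few statements (editor's sketch; the role name, password, and schema name are hypothetical):

```sql
-- Give the app its own role and schema, then make that schema the
-- role's default search_path so an unmodified app lands in it.
CREATE ROLE rt3_app LOGIN PASSWORD 'change-me';
CREATE SCHEMA rt3 AUTHORIZATION rt3_app;
ALTER ROLE rt3_app SET search_path = rt3;
```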

