Re: [GENERAL] Best way to create unique primary keys across schemas?

2012-01-24 Thread Chris Angelico
On Wed, Jan 25, 2012 at 9:54 AM, panam  wrote:
> What do you mean by "explicit sequence object"? A separate sequence for each
> table per schema?

This:

On Wed, Jan 25, 2012 at 10:23 AM, Merlin Moncure  wrote:
> Barring domains, you can just manually apply the default instead of
> using a serial type:
>
> create table foo (gid bigint default nextval('global_seq'));

http://www.postgresql.org/docs/9.1/static/sql-createsequence.html

When you create a 'serial' column, Postgres creates a sequence and
declares the column as 'int' with a default that pulls from the sequence.
(Similarly for 'bigserial' and 'bigint'.) If you create the sequence
yourself, you get a bit more control over it (e.g. setting
min/max/step), and can name it appropriately.

Note the OWNED BY clause (as documented in the above link). That's
what I was saying about the sequence being owned by or linked to the
creating table.
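Putting that together, a minimal sketch (sequence and table names are illustrative):

```sql
-- Create the sequence explicitly, with whatever parameters you need.
CREATE SEQUENCE global_seq
    INCREMENT BY 1
    MINVALUE 1
    NO MAXVALUE
    CACHE 1;

-- Use it as the column default instead of relying on 'serial'.
CREATE TABLE item (
    id   bigint PRIMARY KEY DEFAULT nextval('global_seq'),
    name text
);

-- Optionally tie the sequence's lifetime to the column, so it is
-- dropped when the table (or column) is dropped.
ALTER SEQUENCE global_seq OWNED BY item.id;
```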

ChrisA

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] any plans to support more rounding methods in sql?

2012-01-24 Thread Peter Geoghegan
On 25 January 2012 05:41, Pavel Stehule  wrote:
> Hello
>
> 2012/1/25 raf :
>> hi,
>>
>> i just needed to round some numbers down to 4 decimal places but a quick search
>> indicated that postgresql doesn't support all of the rounding methods so i had
>> to write this dreadful function:

Are you talking about always rounding down to the next lower
smallest-possible increment, rather than following the standard rules
for rounding? That isn't such an esoteric use case - I believe that
financial regulations in some jurisdictions require exactly that when
calculating interest, for example.

If you require exactly 4 digits after the decimal point, it's possible to declare the type with an explicit scale:

NUMERIC(precision, scale)

That isn't going to affect the rounding behaviour though.
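For illustration, the declared scale only fixes the number of stored fractional digits; values are still rounded on assignment (half away from zero, for numeric), not truncated. A sketch:

```sql
CREATE TABLE amounts (amount numeric(12,4));
INSERT INTO amounts VALUES (1.23456);
SELECT amount FROM amounts;  -- 1.2346, rounded on assignment
```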

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [GENERAL] any plans to support more rounding methods in sql?

2012-01-24 Thread Pavel Stehule
Hello

2012/1/25 raf :
> hi,
>
> i just needed to round some numbers down to 4 decimal places but a quick search
> indicated that postgresql doesn't support all of the rounding methods so i had
> to write this dreadful function:
>
> create or replace function round_down_to_4_decimal_places(amount decimal(12,6))
> returns decimal(10,4) stable language sql as $$
>
>    select
>        case
>            when $1 >= 0 then
>                case when $1 - round($1, 4) < 0 then round($1, 4) - 0.0001 else round($1, 4) end
>            else
>                case when $1 - round($1, 4) > 0 then round($1, 4) + 0.0001 else round($1, 4) end
>        end
>
> $$;
>
> this is fine for my purposes but it's not generic to different numbers of decimal
> places and it's 26 times slower than the built-in round(v numeric, s int).
> strangely, a plpgsql version is much faster but it's still 11 times slower than
> a built-in version would be.
>
> python's decimal module supports the following rounding methods:
>
>  ROUND_UP        (round away from zero)
>  ROUND_DOWN      (round towards zero)
>  ROUND_CEILING   (round up)
>  ROUND_FLOOR     (round down)
>  ROUND_HALF_UP   (round 5 away from zero, rest to nearest)
>  ROUND_05UP      (round away from zero if last significant digit is 0 or 5, rest towards zero)
>  ROUND_HALF_DOWN (round 5 towards zero, rest to nearest)
>  ROUND_HALF_EVEN (round 5 to even, rest to nearest)
>
> are there any plans to support any other rounding methods natively?

Numeric operations are not a typical use case for relational databases,
and for almost all users such a complex set of rounding functions would
be counterproductive.

In PostgreSQL you can use the PL/Pythonu functionality or, if you need
it, you can write your own fast implementation in C.
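As a hedged sketch of the PL/Python route (assumes the plpythonu language is installed; the function name and signature are illustrative):

```sql
CREATE OR REPLACE FUNCTION round_with_mode(v numeric, places int, mode text)
RETURNS numeric STABLE LANGUAGE plpythonu AS $$
    # Delegate to Python's decimal module, which supports all eight
    # rounding modes listed above; the mode argument is the constant's
    # name, e.g. 'ROUND_DOWN' or 'ROUND_HALF_EVEN'.
    from decimal import Decimal
    q = Decimal(1).scaleb(-places)
    return Decimal(str(v)).quantize(q, rounding=mode)
$$;

-- SELECT round_with_mode(1.23456, 4, 'ROUND_DOWN');  -- 1.2345
```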

Regards

Pavel Stehule

>
> cheers,
> raf
>
>



Re: [GENERAL] Best way to create unique primary keys across schemas?

2012-01-24 Thread Rob Sargent


On 01/24/2012 04:23 PM, Merlin Moncure wrote:
> On Tue, Jan 24, 2012 at 5:23 AM, panam  wrote:
>> Wow, this is pretty useful. Just to fit it more to my original use case, I
>> used this:
>>
>> CREATE schema schema1;
>> CREATE schema schema2;
>> CREATE TABLE tbl (ID serial primary key,foo varchar,bar varchar);  --in
>> public schema
>> CREATE TABLE schema1.tbl (LIKE public.tbl INCLUDING ALL);  --draws ids from
>> sequence in public schema
>> CREATE TABLE schema2.tbl (LIKE public.tbl INCLUDING ALL);  --draws ids from
>> sequence in public schema
>> INSERT INTO schema1.tbl (foo,bar) VALUES ('asdf','qwer');
>> INSERT INTO schema2.tbl (foo,bar) VALUES ('hello','world');
>> INSERT INTO schema1.tbl (foo,bar) VALUES ('asdf','qwer');
>> INSERT INTO schema2.tbl (foo,bar) VALUES ('hello','world');
>>
>> Thanks, I now consider this my best practice. This way, I don't have to
>> allocate ranges any more a priori :)
> Another quirky way to do it is with domains;
>
> create sequence global_seq;
> create domain gid bigint default nextval('global_seq');
> create table foo (gid gid, f1 text);
> create table bar (gid gid, f2 int);
> etc.
>
> This looks very appealing on the surface but domains have some quirks
> that should give pause.  In particular, you can't make arrays of them,
> although you can make arrays of rowtypes that have a domain in them.
>
> Barring domains, you can just manually apply the default instead of
> using a serial type:
>
> create table foo (gid bigint default nextval('global_seq'));
>
> merlin
>
And UUIDs don't work because?
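For comparison, a sketch of the UUID route (PostgreSQL 9.1 ships uuid-ossp as an extension; names are illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

CREATE TABLE foo (
    gid uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    f1  text
);
```

No shared sequence is needed; keys are unique across schemas (and databases) by construction, at the cost of 16-byte values with no natural ordering.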



[GENERAL] any plans to support more rounding methods in sql?

2012-01-24 Thread raf
hi,

i just needed to round some numbers down to 4 decimal places but a quick search
indicated that postgresql doesn't support all of the rounding methods so i had
to write this dreadful function:

create or replace function round_down_to_4_decimal_places(amount decimal(12,6))
returns decimal(10,4) stable language sql as $$

    select
        case
            when $1 >= 0 then
                case when $1 - round($1, 4) < 0 then round($1, 4) - 0.0001 else round($1, 4) end
            else
                case when $1 - round($1, 4) > 0 then round($1, 4) + 0.0001 else round($1, 4) end
        end

$$;

this is fine for my purposes but it's not generic to different numbers of decimal
places and it's 26 times slower than the built-in round(v numeric, s int).
strangely, a plpgsql version is much faster but it's still 11 times slower than
a built-in version would be.

python's decimal module supports the following rounding methods:

  ROUND_UP        (round away from zero)
  ROUND_DOWN      (round towards zero)
  ROUND_CEILING   (round up)
  ROUND_FLOOR     (round down)
  ROUND_HALF_UP   (round 5 away from zero, rest to nearest)
  ROUND_05UP      (round away from zero if last significant digit is 0 or 5, rest towards zero)
  ROUND_HALF_DOWN (round 5 towards zero, rest to nearest)
  ROUND_HALF_EVEN (round 5 to even, rest to nearest)

are there any plans to support any other rounding methods natively?
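For the round-towards-zero case specifically, the built-in trunc(v numeric, s int) already truncates to s decimal places, matching ROUND_DOWN without a custom function:

```sql
SELECT trunc(1.23456::numeric, 4);   -- 1.2345
SELECT trunc(-1.23456::numeric, 4);  -- -1.2345 (towards zero)
```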

cheers,
raf




Re: [GENERAL] Best way to create unique primary keys across schemas?

2012-01-24 Thread Merlin Moncure
On Tue, Jan 24, 2012 at 5:23 AM, panam  wrote:
> Wow, this is pretty useful. Just to fit it more to my original use case, I
> used this:
>
> CREATE schema schema1;
> CREATE schema schema2;
> CREATE TABLE tbl (ID serial primary key,foo varchar,bar varchar);  --in
> public schema
> CREATE TABLE schema1.tbl (LIKE public.tbl INCLUDING ALL);  --draws ids from
> sequence in public schema
> CREATE TABLE schema2.tbl (LIKE public.tbl INCLUDING ALL);  --draws ids from
> sequence in public schema
> INSERT INTO schema1.tbl (foo,bar) VALUES ('asdf','qwer');
> INSERT INTO schema2.tbl (foo,bar) VALUES ('hello','world');
> INSERT INTO schema1.tbl (foo,bar) VALUES ('asdf','qwer');
> INSERT INTO schema2.tbl (foo,bar) VALUES ('hello','world');
>
> Thanks, I now consider this my best practice. This way, I don't have to
> allocate ranges any more a priori :)

Another quirky way to do it is with domains;

create sequence global_seq;
create domain gid bigint default nextval('global_seq');
create table foo (gid gid, f1 text);
create table bar (gid gid, f2 int);
etc.

This looks very appealing on the surface but domains have some quirks
that should give pause.  In particular, you can't make arrays of them,
although you can make arrays of rowtypes that have a domain in them.

Barring domains, you can just manually apply the default instead of
using a serial type:

create table foo (gid bigint default nextval('global_seq'));

merlin



Re: [GENERAL] Best way to create unique primary keys across schemas?

2012-01-24 Thread panam

Chris Angelico wrote
> 
> I would recommend using an explicit sequence object rather than
> relying on odd behavior like this; for instance, if you now drop
> public.tbl, the sequence will be dropped too. However, what you have
> there is going to be pretty close to the same result anyway.
> 
Oops, thanks for the warning. Is there any means to prevent accidentally
dropping the sequence by deleting the corresponding "root" table?
What do you mean by "explicit sequence object"? A separate sequence for each
table per schema?


Chris Angelico wrote
> 
> I think it's possible
> to reset a sequence object to start producing lower numbers again,
> while your table still has some higher numbers in it (of course being
> careful not to get pkey collisions).
> 
Yes, this is definitely possible
(http://www.postgresql.org/docs/9.1/static/sql-altersequence.html)
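For example (the sequence name is illustrative; a serial column gets one named <table>_<column>_seq):

```sql
-- Restart the sequence at a lower value; subsequent nextval() calls
-- resume from there, so make sure no existing keys collide.
ALTER SEQUENCE tbl_id_seq RESTART WITH 1000;
```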


--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Best-way-to-create-unique-primary-keys-across-schemas-tp5165043p5428997.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.



[GENERAL] Document routing workflow database design

2012-01-24 Thread jonesd
I'm looking at a database design for tracking the movement/routing of  
documents through a workflow using PostgreSQL (version 9.1).   
Basically, I have a state diagram for the possible routings and came  
up with two different designs for how to implement the tables.  As a  
quick advance note, not all transitions have the same information  
content (for example, some require an explicit routing code, others  
reference other tables, etc.).


The basic method would be to create a routing table that captures all  
state transitions:


routing
---
action_id
document_id
new_state_id
action_timestamp
...

It would couple this to a table of permissible transitions:

transition
--
old_state_id
new_state_id
...

It would also have supplemental tables to capture information specific  
to different transition types (I prefer this to having a bunch of  
NULLs in the "master" table, as the additional data involved is  
somewhat varied and sparse).  Triggers would enforce creation of these
secondary records.  For example:


process_completion
--
action_id
resulting_activity_id
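A hedged sketch of that trigger enforcement for the first method (table and column names follow the outline above; the state id and trigger body are illustrative):

```sql
-- Reject a routing row for a process-completion transition unless the
-- matching supplemental record exists by the end of the transaction.
CREATE OR REPLACE FUNCTION check_process_completion() RETURNS trigger AS $$
BEGIN
    IF NEW.new_state_id = 42 THEN  -- hypothetical "process completed" state
        IF NOT EXISTS (SELECT 1 FROM process_completion
                       WHERE action_id = NEW.action_id) THEN
            RAISE EXCEPTION 'missing process_completion row for action %',
                NEW.action_id;
        END IF;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER routing_completion_check
    AFTER INSERT ON routing
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE check_process_completion();
```

Making the trigger a deferred constraint trigger lets the routing row and its supplemental record be inserted in either order within one transaction.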

I came up with a second method, which may be too clever for my own  
good.  In this scheme, each transition type would have its own table:


routing_action
--
action_id
document_id
recipient_id
action_timestamp

completion_action
-
action_id
resulting_activity_id
action_timestamp

Right now, I'm modeling queries on the second method using UNION  
queries, although I realize that I could use inheritance to achieve  
the same effect.


To me, the second method saves some overhead (no new_state_id required  
- it's implicit in the table scheme) and simplifies the insertion  
process (one INSERT as opposed to two) at the cost of additional  
database complexity (more tables) and perhaps breaking from the SQL  
paradigm (by placing information in the table scheme instead of in  
rows).



Thanks in advance,

Dominic Jones



Re: [GENERAL] update with from

2012-01-24 Thread Adrian Klaver
On Monday, January 23, 2012 10:11:00 pm Sim Zacks wrote:
> On 01/23/2012 07:10 PM, Adrian Klaver wrote:
> > On Monday, January 23, 2012 7:32:35 am Sim Zacks wrote:
> >> On 01/23/2012 05:13 PM, Adrian Klaver wrote:
> >> 
> >> 
> >> When I throw in code to make the select only return the correct rows
> >> The select statement takes 9 secs by itself:
> >> select a.partid,a.deliverywks
> >> from poparts a where popartid in (
> >> 
> >>select b.popartid from poparts b
> >>join pos c using(poid)
> >>join stock.lastrfqdateperpart d using(partid)
> >>where c.isrfq and c.issuedate > d.issuedate-7
> >>AND b.unitprice > 0::numeric AND b.quantity >= 100::numeric AND
> >> 
> >> c.postatusid = ANY (ARRAY[40, 41])
> >> 
> >>and b.partid=a.partid
> >>order by b.partid,b.unitprice, b.deliverywks
> >>limit 1
> >> 
> >> )
> > 
> > To clarify what I posted earlier, my suggestion was based on rewriting
> > the
> > 
> > second query as:
> > select b.partid, b.deliverywks, b.popartid from poparts b
> > join pos c using(poid)
> > join stock.lastrfqdateperpart d using(partid)
> > where c.isrfq and c.issuedate > d.issuedate-7
> > AND b.unitprice > 0::numeric AND b.quantity >= 100::numeric AND
> > 
> > c.postatusid = ANY (ARRAY[40, 41])
> > 
> > order by b.partid,b.unitprice, b.deliverywks
> > limit 1
> > 
> > I may be missing the intent of your original query, but I think the above
> > gets to the same result without the IN.
> 
> My first query returns all rows of each part ordered such so that the
> row I want to actually update the table with is last. This query returns
> 12000 rows, for the 600 parts I want to update.
> 
> My second query with the limit within the subselect gets 1 row per part.
> This returns 600 rows, 1 row for each part I want to update.
> 
> Your suggestion would only return one row.

Oops. So per a previous suggestion:

select DISTINCT ON (b.partid) b.partid, b.deliverywks from poparts b
join pos c using(poid)
join stock.lastrfqdateperpart d using(partid)
where c.isrfq and c.issuedate > d.issuedate-7
AND b.unitprice > 0::numeric AND b.quantity >= 100::numeric AND
c.postatusid = ANY (ARRAY[40, 41])
order by b.partid,b.unitprice, b.deliverywks
  

> 
> See
> http://www.pgsql.cz/index.php/PostgreSQL_SQL_Tricks#Select_first_n_rows_from_group
> for reference.

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Best way to create unique primary keys across schemas?

2012-01-24 Thread Chris Angelico
On Tue, Jan 24, 2012 at 10:23 PM, panam  wrote:
> Wow, this is pretty useful. Just to fit it more to my original use case, I
> used this:
>
> CREATE TABLE tbl (ID serial primary key,foo varchar,bar varchar);  --in
> public schema
> CREATE TABLE schema1.tbl (LIKE public.tbl INCLUDING ALL);  --draws ids from
> sequence in public schema
> CREATE TABLE schema2.tbl (LIKE public.tbl INCLUDING ALL);  --draws ids from
> sequence in public schema
>
> Thanks, I now consider this my best practice. This way, I don't have to
> allocate ranges any more a priori :)

I would recommend using an explicit sequence object rather than
relying on odd behavior like this; for instance, if you now drop
public.tbl, the sequence will be dropped too. However, what you have
there is going to be pretty close to the same result anyway.

As someone who's moved from DB2 to MySQL (hey, it's all open source!)
to Postgres (hey, it's all the features of DB2 and it's _still_ open
source!), I've been pretty pleased with Postgres sequences. Instead of
special-casing the primary key (as MySQL does with auto_increment),
Postgres allows you to have any sequences you like, going any
direction you like, and have multiple in the same table if you so
desire. Incidentally - I've yet to need it, but I think it's possible
to reset a sequence object to start producing lower numbers again,
while your table still has some higher numbers in it (of course being
careful not to get pkey collisions).
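A sketch of that flexibility (names are illustrative):

```sql
-- A second, descending sequence alongside the usual ascending serial.
CREATE SEQUENCE countdown_seq
    INCREMENT BY -1 MINVALUE 1 MAXVALUE 1000 START WITH 1000;

CREATE TABLE ticket (
    id    bigserial PRIMARY KEY,                  -- ascending ids
    slot  int DEFAULT nextval('countdown_seq'),   -- descending slots
    label text
);
```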

ChrisA



Re: [GENERAL] left join with OR optimization

2012-01-24 Thread Tom Lane
Sim Zacks  writes:
> I've seen written that a b-tree index can't be used on a join with an
> OR.

That's not the case ...

> Is there a way to optimize a join so that it can use an index for a
> query such as:

> select
> a.partid,a.duedate,coalesce(a.quantity,0)+sum(coalesce(b.quantity,0))
> from stat_allocated_components a
> left join stat_allocated_components b on a.partid=b.partid and
> b.quantity>0 and
> (a.duedate>b.duedate or (a.duedate=b.duedate and a.popartid>b.popartid))
> where a.quantity>0
> group by a.partid,a.duedate,a.quantity

... but in this example, it would be both more readable and more easily
optimizable if you expressed the duedate/popartid requirement as a row
comparison:

row(a.duedate, a.popartid) > row(b.duedate, b.popartid)
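Applied to the query above, the join condition would then read (a sketch):

```sql
left join stat_allocated_components b on a.partid = b.partid
    and b.quantity > 0
    and row(a.duedate, a.popartid) > row(b.duedate, b.popartid)
```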

regards, tom lane



Re: [GENERAL] left join with OR optimization

2012-01-24 Thread David Johnston
What version of PostgreSQL?

On Jan 24, 2012, at 9:28, Sim Zacks  wrote:

> I've seen written that a b-tree index can't be used on a join with an
> OR. Is there a way to optimize a join so that it can use an index for a
> query such as:
> 
> select
> a.partid,a.duedate,coalesce(a.quantity,0)+sum(coalesce(b.quantity,0))
> from stat_allocated_components a
> left join stat_allocated_components b on a.partid=b.partid and
> b.quantity>0 and
> (a.duedate>b.duedate or (a.duedate=b.duedate and a.popartid>b.popartid))
> where a.quantity>0
> group by a.partid,a.duedate,a.quantity
> 
> Where I am doing a self join to get a running sum, but some rows have
> the same due date so I am saying if the due date is the same then the
> first one entered should be considered earlier.
> 



Re: [GENERAL] PG 9.0 EBS Snapshot Backups on Slave

2012-01-24 Thread Robert Treat
On Mon, Jan 23, 2012 at 8:02 PM, Alan Hodgson  wrote:
> On Monday, January 23, 2012 07:54:16 PM Andrew Hannon wrote:
>> It is worth noting that, the slave (seemingly) catches up eventually,
>> recovering later log files with streaming replication current. Can I trust
>> this state?
>>
>
> Should be able to. The master will also actually retry the logs and eventually
> ship them all too, in my experience.
>

Right, as long as the failure case is temporary, the master should
retry, and things should work themselves out. It's good to have some
level of monitoring in place for such operations to make sure replay
doesn't get stalled.

That said, have you tested this backup? I'm a little concerned you'll
have ended up with something unusable, because you aren't capturing the
xlog files that are written while the snapshot is in progress. It's
possible that you won't need them in most cases (we have a script called
"zbackup"[1] which goes through similar motions using zfs, though on zfs
the snapshot really is instantaneous, and I can't remember a time when
we got stuck by that, but that might just be faulty memory). A better
approach would probably be to take the omnipitr code [2], which already
has provisions for taking backups from slaves and catching the
appropriate wal files, and rewrite the rsync bits to use snapshots
instead, which would give you some assurance against missing files.

[1] this script is old and crufty, but provides a good example:
http://labs.omniti.com/labs/pgtreats/browser/trunk/tools/zbackup.sh

[2] https://github.com/omniti-labs/omnipitr


Robert Treat
conjecture: xzilla.net
consulting: omniti.com



Re: [GENERAL] I can't create excluding constraint

2012-01-24 Thread hubert depesz lubaczewski
On Tue, Jan 24, 2012 at 12:41:28PM +0100, pasman pasmański wrote:
> Hi.
> 
> I have a table with two columns:
> 
> create table "GroupsOfOrders" (
>   "Orders" text[], -- a set of identifiers
>   "Period" cube    -- a period of time for all identifiers in field "Orders"
> );
> 
> How do I create an exclusion constraint which prevents overlapping "Period"
> values for all orders in the "Orders" field?

http://www.depesz.com/index.php/2010/01/03/waiting-for-8-5-exclusion-constraints/

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/



Re: [GENERAL] I can't create excluding constraint

2012-01-24 Thread Andreas Kretschmer
pasman pasmański  wrote:

> Hi.
> 
> I have a table with two columns:
> 
> create table "GroupsOfOrders" (
>   "Orders" text[], -- a set of identifiers
>   "Period" cube    -- a period of time for all identifiers in field "Orders"
> );
> 
> How do I create an exclusion constraint which prevents overlapping "Period"
> values for all orders in the "Orders" field?

something like this?

test=# create table orders (orders text, period daterange, exclude using gist (orders with =, period with &&));
NOTICE:  CREATE TABLE / EXCLUDE will create implicit index "orders_orders_period_excl" for table "orders"
CREATE TABLE
Time: 98,833 ms
test=*# insert into orders values ('1','[2012-01-01,2012-01-31)');
INSERT 0 1
Time: 0,618 ms
test=*# insert into orders values ('1','[2012-01-01,2012-02-10)');
ERROR:  conflicting key value violates exclusion constraint "orders_orders_period_excl"
DETAIL:  Key (orders, period)=(1, [2012-01-01,2012-02-10)) conflicts with existing key (orders, period)=(1, [2012-01-01,2012-01-31)).
test=!#
test=!#

Yes? It's cool, isn't it?


Sorry, but you have to wait for 9.2, or you should use the temporal-patch from 
Jeff Davis.
http://thoughts.j-davis.com/2010/03/09/temporal-postgresql-roadmap/


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



[GENERAL] left join with OR optimization

2012-01-24 Thread Sim Zacks
I've seen written that a b-tree index can't be used on a join with an
OR. Is there a way to optimize a join so that it can use an index for a
query such as:

select
a.partid,a.duedate,coalesce(a.quantity,0)+sum(coalesce(b.quantity,0))
from stat_allocated_components a
left join stat_allocated_components b on a.partid=b.partid and
b.quantity>0 and
(a.duedate>b.duedate or (a.duedate=b.duedate and a.popartid>b.popartid))
where a.quantity>0
group by a.partid,a.duedate,a.quantity

Where I am doing a self join to get a running sum, but some rows have
the same due date so I am saying if the due date is the same then the
first one entered should be considered earlier.



[GENERAL] I can't create excluding constraint

2012-01-24 Thread pasman pasmański
Hi.

I have a table with two columns:

create table "GroupsOfOrders" (
  "Orders" text[], -- a set of identifiers
  "Period" cube    -- a period of time for all identifiers in field "Orders"
);

How do I create an exclusion constraint which prevents overlapping "Period"
values for all orders in the "Orders" field?



pasman



Re: [GENERAL] Best way to create unique primary keys across schemas?

2012-01-24 Thread panam

Chris Angelico wrote
> 
> 
> You can "share" a sequence object between several tables. This can
> happen somewhat unexpectedly, as I found out to my surprise a while
> ago:
> 
> CREATE TABLE tbl1 (ID serial primary key,foo varchar,bar varchar);
> INSERT INTO tbl1 (foo,bar) VALUES ('asdf','qwer');
> CREATE TABLE tbl2 (LIKE tbl1 INCLUDING ALL);
> INSERT INTO tbl2 (foo,bar) VALUES ('hello','world');
> 
> Both tables will be drawing IDs from the same sequence object, because
> "create table like" copies the default value, not the "serial"
> shorthand. (It makes perfect sense, it just surprised me that the IDs
> were looking a little odd.)
> 
Wow, this is pretty useful. Just to fit it more to my original use case, I
used this:

CREATE schema schema1;
CREATE schema schema2;
CREATE TABLE tbl (ID serial primary key,foo varchar,bar varchar);  --in
public schema
CREATE TABLE schema1.tbl (LIKE public.tbl INCLUDING ALL);  --draws ids from
sequence in public schema
CREATE TABLE schema2.tbl (LIKE public.tbl INCLUDING ALL);  --draws ids from
sequence in public schema
INSERT INTO schema1.tbl (foo,bar) VALUES ('asdf','qwer');
INSERT INTO schema2.tbl (foo,bar) VALUES ('hello','world');
INSERT INTO schema1.tbl (foo,bar) VALUES ('asdf','qwer');
INSERT INTO schema2.tbl (foo,bar) VALUES ('hello','world');

Thanks, I now consider this my best practice. This way, I don't have to
allocate ranges any more a priori :)

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Best-way-to-create-unique-primary-keys-across-schemas-tp5165043p5281409.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.



Re: [GENERAL] [RFE] auto ORDER BY for SELECT

2012-01-24 Thread Alban Hertroys
On 24 January 2012 09:29, Chris Angelico  wrote:
> On Mon, Jan 23, 2012 at 11:17 PM, Douglas Eric  wrote:
>> I suggest to change this behavior. If one makes a SELECT statement without
>> any ORDER BY, it would be
>> clever to automatically sort by the first primary key found in the query, if
>> any.

I recently submitted a problem report with a product that had that
behaviour. The data involved was a table of database ID's and text
labels for use in a drop-down list. In such cases, sorting the data by
primary key (the ID) is rarely what you want!

For example, if you have a listing of car brands, sorting them by some
arbitrary ID quickly makes such a list impossible to use. You want
such a list sorted alphabetically. Defaulting to sorting by ID (like
aforementioned product did) does not make sense in such a case.

So, this is not just a bad idea from a performance perspective, it's
also often not what you want.

Of course specifying a "different" sort order than the default one
would solve the issue, but that's not the point here.
-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



Re: [GENERAL] Incomplete startup packet help needed

2012-01-24 Thread Florian Weimer
* David Johnston:

> Immediately upon starting the server I get an "incomplete startup
> packet" log message.  Just prior there is an "autovacuum launcher
> started" message.

Like this?

2012-01-23 10:42:55.245 UTC 11545   LOG:  database system is ready to accept connections
2012-01-23 10:42:55.245 UTC 11549   LOG:  autovacuum launcher started
2012-01-23 10:42:55.268 UTC 11551 [unknown] [unknown] LOG:  incomplete startup packet

I think it's harmless, it's been there for years.  It might be related
to the init script that starts the database server.

-- 
Florian Weimer
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



Re: [GENERAL] [RFE] auto ORDER BY for SELECT

2012-01-24 Thread Chris Angelico
On Mon, Jan 23, 2012 at 11:17 PM, Douglas Eric  wrote:
> I suggest to change this behavior. If one makes a SELECT statement without
> any ORDER BY, it would be
> clever to automatically sort by the first primary key found in the query, if
> any.
> The present behavior would still be used in case of queries without any
> primary key fields.

This would require that work be done without text commanding it, which
is IMHO a bad idea. Generally, SQL follows the principle that more
text --> more work: compare SELECT and SELECT DISTINCT (it's more work
to look for duplicates), VACUUM and VACUUM ANALYZE, etc, etc. The
default state is to do the least work that makes sense. (There are
exceptions - UNION ought to be UNION DISTINCT, versus UNION ALL to
reduce the work done - but this is the general rule.)

Often, a query is done with genuine disregard for order. If you're
going to take the results of the query and stuff them into a
hashtable, you don't care what order they come up in. Why have the
database sort them? Let 'em come in the easiest order possible.

ChrisA
