Re: [SQL] avg(interval)

2006-06-26 Thread Erik Jones
Well, the query is working ok numerically, but should the answer really 
be reported as 4 days and 33 hours?


Erik

Aaron Bono wrote:

Right, the 23 is just less than 1 day, not 23 days.

The good news: your query is working!

-Aaron

On 6/26/06, Joe <[EMAIL PROTECTED]> wrote:


Tom Lane wrote:
> "Jeremiah Elliott" <[EMAIL PROTECTED]> writes:
>> however if i don't average them here is what i get:
>> "7 days 22:24:50.62311";"*2420"
>> "9 days 22:21:02.683393";"*2420"
>> "23:21:35.458459";"*2420"
>> "4 days 22:47:41.749756";"*2420"
>> "3 days 06:05:59.456947";"*2420"
>
>> which should average to just over nine days -
>
> Uh ... how do you arrive at that conclusion?  I haven't done the math,
> but by eyeball an average of four-something days doesn't look out of
> line for those values.

It seems he's calculating (7 + 9 + 23 + 4 + 3) / 5 ...

Joe
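The "4 days and 33 hours" display is just avg() leaving the time-of-day part un-normalized; justify_hours() folds hours >= 24 back into days. A sketch (needs 8.2 or later for the VALUES list; values rounded from the thread):

```sql
-- The five intervals quoted above, fractional seconds trimmed.
-- avg() alone reports an un-normalized time part, something like
-- "4 days 33:48:13"; justify_hours() folds the excess hours into
-- days, giving roughly "5 days 09:48:13".
SELECT justify_hours(avg(t)) AS avg_interval
FROM (VALUES
    (interval '7 days 22:24:50'),
    (interval '9 days 22:21:02'),
    (interval '23:21:35'),
    (interval '4 days 22:47:41'),
    (interval '3 days 06:05:59')
) AS v(t);
```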




--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [SQL] Alternative to Select in table check constraint

2006-07-01 Thread Erik Jones

Richard Broersma Jr wrote:

But now that I think about it, using the author's suggestion (if it
actually worked), how would it be possible to change the active status
from one badge to another?
  
Well, the check constraint as you wrote it requires that there always be 
a badge listed as active for each employee.  If you changed
the constraint to: CHECK ( 2 > ... ) then you'd be able to unset the 
active status and then set a new one to active.
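As a side note, PostgreSQL doesn't actually allow subqueries in CHECK constraints, so the usual idiom for "at most one active badge per employee" is a partial unique index instead. A hedged sketch, with illustrative table and column names (badges, employee_id, is_active) rather than anything from the thread:

```sql
-- A partial unique index enforces at most one active badge per
-- employee; inactive rows are not indexed and so are unconstrained.
CREATE UNIQUE INDEX one_active_badge_per_employee
    ON badges (employee_id)
    WHERE is_active;
```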


--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[SQL] Problem with array subscripts in plpgsql trigger function

2006-07-05 Thread Erik Jones

Ok, I have a trigger set up on the following (stripped down) table:

CREATE TABLE members (
    member_id         bigint,
    member_status_id  smallint,
    member_is_deleted boolean
);

Here's a shortened version of the trigger function:

CREATE OR REPLACE FUNCTION update_member() RETURNS TRIGGER AS $um$
DECLARE
    status_deltas integer[];
BEGIN
    IF (NEW.member_status_id != OLD.member_status_id AND
        NEW.member_is_deleted IS NOT TRUE) THEN
        status_deltas[NEW.member_status_id] := 1;
        status_deltas[OLD.member_status_id] := -1;
    END IF;
    /* and after a couple more such conditional assignments I use the
       values in status_deltas to update another table holding status
       totals here */
END;
$um$ LANGUAGE plpgsql;

On the two lines that set array values I'm getting the following
error:


ERROR:  invalid array subscripts

What gives?

--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] Problem with array subscripts in plpgsql trigger function

2006-07-05 Thread Erik Jones

Aaron Bono wrote:

On 7/5/06, Erik Jones <[EMAIL PROTECTED]> wrote:

Ok, I have a trigger set up on the following (stripped down) table:

CREATE TABLE members (
member_id   bigint,
member_status_id   smallint,
member_is_deletedboolean
);

Here's a shortened version of the trigger function:

CREATE OR REPLACE FUNCTION update_member() RETURNS TRIGGER AS $um$
DECLARE
status_deltas integer[];
BEGIN
IF(NEW.member_status_id != OLD.member_status_id AND
NEW.member_is_deleted IS NOT TRUE) THEN
   status_deltas[NEW.member_status_id] := 1;
   status_deltas[OLD.member_status_id] := -1;
END IF;
/*and after a couple more such conditional assignments I use the
values in status_deltas to update another table holding status
totals here*/
END;
$um$ LANGUAGE plpgsql;

on the two lines that access set array values I'm getting the
following
error:

ERROR:  invalid array subscripts

What gives?



What values are being used for member_status_id? 


1,  2, and 3

--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] Problem with array subscripts in plpgsql trigger function

2006-07-05 Thread Erik Jones

Aaron Bono wrote:

On 7/5/06, Erik Jones <[EMAIL PROTECTED]> wrote:

Aaron Bono wrote:
> On 7/5/06, Erik Jones <[EMAIL PROTECTED]> wrote:
>
> Ok, I have a trigger set up on the following (stripped down)
table:
>
> CREATE TABLE members (
> member_id   bigint,
> member_status_id   smallint,
> member_is_deletedboolean
> );
>
> Here's a shortened version of the trigger function:
>
> CREATE OR REPLACE FUNCTION update_member() RETURNS TRIGGER
AS $um$
> DECLARE
> status_deltas integer[];
> BEGIN
> IF(NEW.member_status_id != OLD.member_status_id AND
> NEW.member_is_deleted IS NOT TRUE) THEN
>status_deltas[NEW.member_status_id] := 1;
>status_deltas[OLD.member_status_id] := -1;
> END IF;
> /*and after a couple more such conditional assignments I
use the
> values in status_deltas to update another table holding status
> totals here*/
> END;
> $um$ LANGUAGE plpgsql;
>
> on the two lines that access set array values I'm getting the
> following
> error:
>
> ERROR:  invalid array subscripts
>
> What gives?
>
>
>
> What values are being used for member_status_id?
>
1,  2, and 3 



I did some digging through the documentation and cannot find any 
examples of using arrays like this.  Do you have to initialize the 
array before you use it?


Does anyone know where to look for information about using arrays in 
stored procedures?


-Aaron

Ok, that was apparently it.  I found this thread in the archives
(http://archives.postgresql.org/pgsql-general/2005-02/msg01270.php),
which indicated to me that without first initializing the array its
bounds weren't set, and thus assignments to non-sequential subscripts
raise a subscript error.
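In other words, the fix is to give the array a concrete value before assigning to its subscripts. A minimal sketch against the trigger above; the three-element literal assumes status ids 1 through 3, as stated earlier in the thread:

```sql
DECLARE
    -- initialize so both subscript assignments fall inside known bounds
    status_deltas integer[] := '{0,0,0}';
```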


--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] SELECT substring with regex

2006-07-07 Thread Erik Jones

T E Schmitz wrote:

Gary Stainburn wrote:

On Friday 07 July 2006 14:51, T E Schmitz wrote:


I would like to split the contents of a column using substring with a
regular expression:

The column contains something like
"150mm LD AD Asp XR Macro"
I want to split this into
"150mm", "LD AD Asp XR Macro"




select substring('150mm LD AD Asp XR Macro','^\\d+mm') as BASE_NAME,
   substring('150mm LD AD Asp XR Macro','^\\d+mm (.*)$') as SUFFIX;
 base_name |   suffix
---+
 150mm | LD AD Asp XR Macro
(1 row)

The brackets surround the required match



This is ingenious! I had been looking at chapter 9.6 Pattern Matching. 
Am I missing something?  I did not realize that the brackets indicate 
the required match.


But that takes me to the next problem:

For the sake of the example I simplified the regular pattern.
In reality, BASE_NAME might be:

28mm
28-70mm

So the reg. expr. requires brackets:

substring (NAME, '^(\\d+(-\\d+)?mm)' ) as BASE_NAME

Actually, the pattern is more complex than that and I cannot see how I 
can express it without brackets.




Will the mm always be the end of the base name?

substring(NAME, '^(.*?mm)') as BASE_NAME

That should match all the way up to the first mm.  If there are actually 
a set number of different units that it might end in, such as mm and cm, 
you can do:


substring(NAME, '^(.*?(mm|cm))') as BASE_NAME

That will match everything up to the first mm or cm.  Note that you 
don't have to worry about the second set of brackets returning anything 
as the regexp version of substring only returns what is matched by the 
first parenthesised subexpression.
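That behavior is easy to check against the two-part BASE_NAME case from earlier in the thread; a sketch using a sample value (with pre-8.3 doubled backslashes):

```sql
-- The outer parentheses form the first subexpression, so this yields
-- 28-70mm; the inner (-\\d+)? group does not affect what is returned.
SELECT substring('28-70mm LD AD Asp XR Macro', '^(\\d+(-\\d+)?mm)') AS base_name;
```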


--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] Table Join (Maybe?)

2006-07-19 Thread Erik Jones

Phillip Smith wrote:


Hi again all,

I have two tables:

1. Sales figures by date and customer.

2. Customer details – including their Geographic State

I need to extract a report from the first table (I can do that!), and 
in that report order by their State (I can do that too!), but I also 
need a summary of all the customers in each state, below the end of 
each state, and have a grand total at the bottom.


Eg:

Customer 1 State 1 $100.00

Customer 2 State 1 $100.00

State 1 $200.00

Customer 3 State 2 $100.00

Customer 4 State 2 $100.00

State 2 $200.00

Grand Total $400.00

Does anyone have any magic pointers for me? I’ve been playing with 
SELECT INTO as 2 queries (the individual customers, then the summary 
figures added to the temp table) but I end up with ROWS IN FIRST QUERY 
* ROWS IN SECOND QUERY instead of them all sorted together nicely.


Thanks all,

-p

Well, two queries, one for the individual totals and one for the summary 
totals, is a good approach (maybe a third for the grand total), but you 
should do a union of the two and then play with the ORDER BY and/or 
GROUP BY clauses (depending on the data) to get the ordering that you 
want.  I can't even count the times I've spent banging my head against 
the proverbial wall (you do have a proverbial wall, don't you?) trying 
to get these kinds of queries to work with joins, sub-queries, case 
statements, etc., only to come back to using a union of simple, 
to-the-point queries.
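A hedged sketch of that union approach, since the real schema isn't shown: assume tables sales(customer_id, amount) and customers(customer_id, name, state). A helper sort column keeps each state's summary row below its customers:

```sql
SELECT c.state, c.name AS label, sum(s.amount) AS total, 0 AS sort_rank
  FROM sales s JOIN customers c ON c.customer_id = s.customer_id
 GROUP BY c.state, c.name
UNION ALL
SELECT c.state, 'State total', sum(s.amount), 1
  FROM sales s JOIN customers c ON c.customer_id = s.customer_id
 GROUP BY c.state
ORDER BY state, sort_rank, label;
```

A grand-total row can be added the same way with a third UNION ALL branch and a higher sort_rank value.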


--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] CREATE TABLE AS inside of a function

2006-07-21 Thread Erik Jones

Rodrigo De Leon wrote:

On 7/21/06, Kevin Nikiforuk <[EMAIL PROTECTED]> wrote:
So now that I've got my loops working, on to my next newbie 
question.  I've created my function and in it, I want to loop through 
the results of a select and for each value of my loop counter, I want 
to create a new table, but I can't figure out how to use a variable 
in the name of the new table, see below.


CREATE OR REPLACE FUNCTION rgio() RETURNS INTEGER as $$
DECLARE
lv RECORD;

BEGIN
FOR lv IN SELECT DISTINCT rg FROM ldevrg LOOP
CREATE TABLE rgio_$lv AS
SELECT ldev
FROM ldevrg
WHERE rg='$lv';
END LOOP;
RETURN 1;
END;
$$ LANGUAGE plpgsql;

Thanks,
Kevin


See:
http://www.postgresql.org/docs/8.1/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN 

Also, I really can't recommend enough that you read chapters 32. Extending 
SQL, 33. Triggers, and 36. PL/pgSQL in their entirety.  In fact, to keep 
up with the linking to them for you:


http://www.postgresql.org/docs/8.1/interactive/extend.html
http://www.postgresql.org/docs/8.1/interactive/triggers.html
http://www.postgresql.org/docs/8.1/interactive/plpgsql.html

And, so that I don't feel like I'm becoming one of those rtfm jerks I 
always complain about:  what you need to do is place your CREATE TABLE 
statement in an EXECUTE statement like so (inside the FOR body, where 
lv is the loop record):


EXECUTE 'CREATE TABLE rgio_' || lv.rg || ' AS
   SELECT ldev
   FROM ldevrg
   WHERE rg = \'' || lv.rg || '\';';
-- the WHERE line could have also been:  WHERE rg = ' || quote_literal(lv.rg) || ';'


EXECUTE takes a query string to execute, and you need to use string 
concatenation to build the string if you're using the function's 
variables in the query.  PL/pgSQL doesn't have any variable substitution 
inside of strings (like in double-quoted strings in PHP), which is why 
you need the concatenation bit.
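Putting it together, a sketch of the whole function (untested; it assumes the distinct rg values are usable as-is inside a table name):

```sql
CREATE OR REPLACE FUNCTION rgio() RETURNS integer AS $$
DECLARE
    lv RECORD;
BEGIN
    FOR lv IN SELECT DISTINCT rg FROM ldevrg LOOP
        -- dynamic DDL must go through EXECUTE; identifiers and values
        -- are spliced in via concatenation
        EXECUTE 'CREATE TABLE rgio_' || lv.rg
             || ' AS SELECT ldev FROM ldevrg WHERE rg = '
             || quote_literal(lv.rg);
    END LOOP;
    RETURN 1;
END;
$$ LANGUAGE plpgsql;
```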


--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[SQL] Disk is full, what's cool to get rid of?

2006-07-27 Thread Erik Jones
Hello, quick question.  I've run into an issue with the disk that my 
development box is on filling up and preventing pretty much any writing 
(inserts, updates, deletes, etc...) from happening.  Other than some 
piddly text logs the db is pretty much the only thing on the box.  So, 
my question is:  what can I do to free space and run a full vacuum? (I 
was not the one who set up this box and there has been virtually no 
administration or maintenance on it that I know of...)  How about the 
WAL files in pg_xlog?  How critical are they when no data on the system 
is critical in and of itself?  Any suggestions would be greatly 
appreciated...


--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] Disk is full, what's cool to get rid of?

2006-07-27 Thread Erik Jones
Awesome.  Do I need to reset that to any magic # after the vacuum?  I'm 
not all that up on filesystem maintenance/tweaking...


Scott Marlowe wrote:

I can't tell you the number of times that little trick has saved my
life.

On Thu, 2006-07-27 at 11:32, Jeff Frost wrote:
  
You can probably just "tune2fs -m 0 " to give yourself enough 
space to get out of the jam before you go deleting things.  Then you might 
want to vacuum full afterwards.


On Thu, 27 Jul 2006, Erik Jones wrote:


Hello, quick question.  I've run into an issue with the disk that my 
development box is on filling up and preventing pretty much any writing 
(inserts, updates, deletes, etc...) from happening.  Other than some piddly 
text logs the db is pretty much the only thing on the box.  So, my question 
is:  what can I do to free space and run a full vacuum? (I was not the one 
who set up this box and there has been virtually no administration or 
maintenance on it that I know of...)  How about the WAL files in pg_xlog? 
How critical are they when no data on the system is critical in and of 
itself?  Any suggestions would be greatly appreciated...



  


---(end of broadcast)---
TIP 6: explain analyze is your friend
  



--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] Disk is full, what's cool to get rid of?

2006-07-27 Thread Erik Jones
Awesome.  Makes sense as 5% is exactly the amount of space that appeared 
after running it.  Thanks!


Jeff Frost wrote:
Depends what the default is on your system.  The default is 5% with 
the version of mke2fs that I have here, so you would just:


tune2fs -m 5 

to put it back.
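To avoid guessing at the original percentage, you can read the current reservation straight off the filesystem before changing it (the device path below is illustrative, not from the thread):

```
# show total and reserved block counts for the filesystem
tune2fs -l /dev/sda1 | grep -i 'block count'
# restore the default 5% reservation
tune2fs -m 5 /dev/sda1
```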

On Thu, 27 Jul 2006, Erik Jones wrote:

Awesome.  Do I need to reset that to any magic # after the vacuum?  
I'm not all that up on filesystem maintenance/tweaking...


Scott Marlowe wrote:

I can't tell you the number of times that little trick has saved my
life.

On Thu, 2006-07-27 at 11:32, Jeff Frost wrote:

You can probably just "tune2fs -m 0 " to give yourself 
enough space to get out of the jam before you go deleting things.  
Then you might want to vacuum full afterwards.


On Thu, 27 Jul 2006, Erik Jones wrote:


Hello, quick question.  I've run into an issue with the disk that 
my development box is on filling up and preventing pretty much any 
writing (inserts, updates, deletes, etc...) from happening.  Other 
than some piddly text logs the db is pretty much the only thing on 
the box.  So, my question is:  what can I do to free space and run 
a full vacuum? (I was not the one who set up this box and there 
has been virtually no administration or maintenance on it that I 
know of...)  How about the WAL files in pg_xlog? How critical are 
they when no data on the system is critical in and of itself?  Any 
suggestions would be greatly appreciated...






---(end of 
broadcast)---

TIP 6: explain analyze is your friend










--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] Disk is full, what's cool to get rid of?

2006-07-28 Thread Erik Jones

Tom Lane wrote:

Andrew Sullivan <[EMAIL PROTECTED]> writes:
  

If the data isn't critical, you maybe could truncate a table to clear
enough space.  Deleting anything under pg_xlog is more or less
guaranteed to mean your database is garbage.



If you're desperate you could shut down the postmaster, run
pg_resetxlog, restart the postmaster.  This would cut xlog contents
to the minimum --- however, they'd probably soon grow back to whatever
they are now, so it's not much of a long-term solution.  It might give
you some breathing room to look for other places to trim though.

If the database hasn't been maintained well then you likely are
suffering from table bloat and/or index bloat.  A desperation measure
for fixing that is drop all indexes, vacuum full, recreate all indexes.
(There are other procedures you may find recommended, such as CLUSTER,
but they have transient disk-space requirements that you're not gonna
be able to afford when your back is to the wall.)
  
Thanks for your suggestions.  Doing the tune2fs trick worked enough for 
me to get a vacuum full to run -- which then pointed out the fact that 
max_fsm_pages was still running off of the default 1000 (doing a SELECT 
COUNT(*) FROM pg_class; showed over 100k relations) so, I made some much 
needed tweaks to postgresql.conf, restarted the db and then restarted the 
vacuum full (it's still running now).  As far as the rest of the 
suggestions (CLUSTER, index drop/add, etc.) go, there really isn't much of 
a point as this database is for development only and is only going to be 
around for about another month, when we build a whole new pristine 
development db and environment from the ground up (I can't wait!), but 
these are all good things to know.
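For reference, the relevant 8.x-era free-space-map settings look like this in postgresql.conf (the values here are illustrative; both GUCs were removed in 8.4, when the free space map moved to disk):

```
max_fsm_relations = 120000   # should exceed the number of relations (rows in pg_class)
max_fsm_pages = 2000000      # pages with free space to track; raise it if VACUUM warns
```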


--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] UPDATE with WHERE clause on joined table

2006-07-28 Thread Erik Jones

Fabian Peters wrote:

Hi,

I'm only very rarely using SQL, so please forgive me if I show any 
obvious signs of ignorance...


I've got three tables "customer", "address" and "country". I want to 
set the "language" attribute on "customer" on rows returned by a 
SELECT such as this:


SELECT title, first_names, last_name, email, language, 
country.country_name FROM ((customer JOIN address ON customer.eoid = 
address.eoid_customer) JOIN country ON address.eoid_country = 
country.eoid) WHERE email LIKE '%.es' AND country.country_name = 'SPAIN';


That is, I want to set the "language" to 'Spanish' where the 
"customer.email" is like '%.es' and where "country.country_name" is 
'SPAIN'.


I've tried all sorts of places to put the JOIN and the WHERE clauses 
within the UPDATE statement, but I just don't get it.


I'd be most grateful for any help...

TIA

Fabian

P.S.: One of my sorry attempts looked like this - which updates all 
rows in "customer" so I figure the WHERE clause is not where it should 
be:


UPDATE customer SET language = 'Spanish' FROM ((customer AS 
customer_address JOIN address ON customer_address.eoid = 
address.eoid_customer) JOIN country ON address.eoid_country = 
country.eoid) WHERE customer.email LIKE '%.es' AND 
country.country_name = 'SPAIN'); 
The FROM clause is where you put relations other than the one you  are 
updating.  Try this:


UPDATE customer
SET language='Spanish'
FROM address ad, country co
WHERE customer.eoid=ad.eoid_customer AND ad.eoid_country=co.eoid
   AND co.country_name='SPAIN' AND customer.email LIKE '%.es';

Note that for demonstration purposes I've aliased the join tables and 
that (unfortunately) you can't alias the update table.


erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] Taking the cache out of the equation?

2009-06-12 Thread Erik Jones


On Jun 9, 2009, at 10:51 AM, Rob Sargent wrote:

Caching helps a *lot* and I'm thankful for that but I would like to  
take it out of the picture as I massage my queries for better  
performance.  Naturally the first invocation of the query cannot  
take advantage of the cache and these queries would normally only be  
called once for the same target data.  What tricks are there to  
flush, ignore, circumvent the caching boost?  (Especially in the  
production environment.)


Why on earth would you want your queries to always go to disk?

Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k






--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Advice returning data as an array

2009-08-26 Thread Erik Jones


On Aug 20, 2009, at 5:07 AM, George Woodring wrote:


I have the following query that works the way I need:

SELECT '20090812' AS date, sum(in_oct) AS bytes, sum(in_pkt) AS pkts,
       'appl' AS name, ARRAY['appl'] AS fields,
       CAST(LEAST(src_port, dst_port) AS varchar) AS fieldVal
  FROM flow_history.flows_20090812
 GROUP BY fieldVal ORDER BY bytes DESC LIMIT 20;


My issue is that I need to get the fieldVal to return as an array
and be fieldVal[1].  Any suggestions?


The manual has everything you should need:
http://www.postgresql.org/docs/8.4/interactive/arrays.html#ARRAYS-IO
What would help us help you past that is if you show what you have
already tried so we know where you need correction/help.


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k






--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] AccessExclusiveLock on CREATE TABLE REFERENCES deadlocks (formatted better)

2009-10-06 Thread Erik Jones
 | t
relation | pg_class_relname_nsp_index  |  3911 | AccessShareLock | t
relation | pg_locks                    |  3911 | AccessShareLock | t
relation | site_key_idx                | 18891 | AccessShareLock | t
relation | virtual_ancestor_key_idx    | 18891 | AccessShareLock | t
relation | virtual_ancestors           | 18891 | AccessShareLock | t
relation | virtual_ancestors_pkey      | 18891 | AccessShareLock | t
relation | virtual_context_key_idx     | 18891 | AccessShareLock | t
relation | words                       | 18891 | AccessShareLock | t
relation | words_pkey                  | 18891 | AccessShareLock | t
relation | words_word                  | 18891 | AccessShareLock | t


Note that pid 3879 can't get the AccessExclusiveLock because pid 18891  
is already holding an AccessShareLock.  pid 18891 also holds a lot of  
AccessShareLocks on a bunch of other tables.  What is it doing?   
Figure that out and get rid of it, and your problem will go away.
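A sketch of how you might check what that backend is doing, using the pg_stat_activity column names of the 8.3/8.4 era:

```sql
-- find the query the blocking backend is currently running
SELECT procpid, usename, current_query, query_start
  FROM pg_stat_activity
 WHERE procpid = 18891;
```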


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k






--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] error with mor than 1 sub-select

2006-08-23 Thread Erik Jones
Actually, it pointed you right to it.  Notice that "exits" is just before 
where the pointer was.


chrisj wrote:

thanks, stupid user error.

i guess the pointer on the error message led me astray


Michael Fuhr wrote:
  

On Tue, Aug 22, 2006 at 06:47:51PM -0700, chrisj wrote:


The first query works fine, but when a second sub-query is added I get a
syntax error...

psql protocal2 -U p2user << EOF1
select *
   from serv_res SR
   where serv_key = 10
 and not exists 
(select 1

  from reservation R
)
-- and not exits 
  

Does the real query have "exits" instead of "exists"?  I created
some test tables and fixed that typo and then both queries worked.

--
Michael Fuhr

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org





  



--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [SQL] i have table

2006-10-04 Thread Erik Jones
There is one non-SQL related reason that I like to be able to order 
columns, at least the way they are displayed whenever the table is 
described:  human comprehension.  For example, I like to group all keys 
in a table before data, that includes primary as well as foreign keys.   
So, say I'm building on to an existing application and I need to do an 
ALTER TABLE on an existing table to add a foreign key to an existing 
table.  I'd like that key to be listed with the other keys, but 
presently that's not possible in a simple way and, to be honest, I 
usually just go without as the process you've described below is too 
prone to user (human) error when dealing with live, sensitive data for 
me to want to mess with it.


Markus Schaber wrote:

Hi, Penchalaiah,

Penchalaiah P. wrote:

  

Now I want to add one more field to this table, but that field has to
come next to cda_no, as the 3rd field. When I add it, it comes out as
the last field ...



In SQL, field order in the table is not given by design.

A "SELECT * FROM table" might even give you the columns alphabetically
ordered, or in a different random order each time in a different server
implementation.

If you need the colums in a specific order, use "SELECT foo, bar, baz
FROM table" or create a View.

All relevant SQL constructs (SELECT, INSERT, UPDATE, COPY, etc.) let you
specify the columns explicitly to guarantee a given order.

  

may I know how it is possible to that table…



If you _really_ want to do that despite what I wrote above, you have
several possibilities:

- COPY the table to some file, drop the table, recreate the table with
  the desired new column order, and then COPY the table back using an
  explicitly specified, correct row order.

- use CREATE TABLE ... AS SELECT to select the data into a new table,
  drop the old table, rename the new one to the old one.

In both cases, you have to recreate all missing indices, foreign key
constraints, etc.
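A sketch of the second option; table and column names are made up for illustration, with the already-added column pulled into third position:

```sql
BEGIN;
CREATE TABLE mytable_new AS
    SELECT id, cda_no, new_field, other_col FROM mytable;
DROP TABLE mytable;
ALTER TABLE mytable_new RENAME TO mytable;
-- indexes, constraints, and defaults must be recreated here
COMMIT;
```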


HTH,
Markus


  



--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] i have table

2006-10-04 Thread Erik Jones

Aaron Bono wrote:
On 10/4/06, Erik Jones <[EMAIL PROTECTED]> wrote:


There is one non-SQL related reason that I like to be able to order
columns, at least the way they are displayed whenever the table is
described:  human comprehension.  For example, I like to group all
keys
in a table before data, that includes primary as well as foreign
keys.
So, say I'm building on to an existing application and I need to do an
ALTER TABLE on an existing table to add a foreign key to an existing
table.  I'd like that key to be listed with the other keys, but
presently that's not possible in a simple way and, to be honest, I
usually just go without as the process you've described below is too
prone to user (human) error when dealing with live, sensitive data for
me to want to mess with it.


Ah, but it is possible... if you use views.

I recommend you build views and query off them.  Then you can control 
the order the columns will appear.
Which would be great if I didn't have (many) thousands of lines of code 
that already use the tables.  Besides, this is nowhere near a 'make or 
break' thing.  It's just a matter of aesthetic preference.


--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [SQL] Wildcard LIKE and Sub-select

2006-11-10 Thread Erik Jones

Travis Whitton wrote:

Hi everybody,
I have two tables of the following structure:

Table "keywords"

column   | type
-
id   | integer
keyword | varchar(255)

and

Table "badwords"

column   | type
--
badword  | varchar(255)


I need to delete all the rows from the keywords table where badword 
partially matches the keyword field. I know I can do an exact match 
with a sub-select, but I'm not sure how to structure a wildcard match 
/ like clause with a sub-select. Also, is that the best way to do it, 
or should I be looking into full-text? I have roughly 10 million 
keywords and 1 million badwords.


Thanks,
Travis


Hmm...  Maybe (this is untested):

DELETE FROM keywords
WHERE keyword ILIKE ANY (SELECT '%' || badword || '%'
                           FROM badwords);

--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[SQL] COALESCE and GROUP BY and AGGREGATES

2006-11-13 Thread Erik Jones

Ok, here's a sample table for the question I have:

CREATE TABLE sales_table (
    sale_type varchar default 'setup' not null,
    sale_amount numeric not null,
    sale_date timestamp without time zone default now());

So, let's say there are 3 different sale_types: 'setup', 'layaway', 
'the_hookup' and I want to get totals for each type in a given month:


SELECT sale_type, SUM(sale_amount)
FROM sales_table
WHERE sale_date LIKE '2006-11%'
GROUP BY sale_type;

If there hasn't been a sale of a given type in that month there won't be 
a row in the result set for that type.  I want a row for each type with 
a default of 0 if there haven't been any sales for that type yet that 
month.  I've tried:


SELECT sale_type, COALESCE(SUM(sale_amount), 0)
FROM sales_table
WHERE sale_date LIKE '2006-11%'
GROUP BY sale_type;

but, no dice.  Any ideas?  I know I can break this out into separate 
queries for each type and the COALESCE will work, but in my real-world 
situation I have a lot more than three types and that'd be ugly.


Thanks,

--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] COALESCE and GROUP BY and AGGREGATES

2006-11-13 Thread Erik Jones

Volkan YAZICI wrote:

On Nov 13 10:49, Erik Jones wrote:
  

Ok, here's a sample table for the question I have:

CREATE TABLE sales_table (
sale_type varchar default 'setup' not null,
sale_amount numeric not null
sale_date timestamp without timezone default now());

So, let's say there are 3 different sale_types: 'setup', 'layaway', 
'the_hookup' and I want to get totals for each type in a given month:


SELECT sale_type, SUM(sale_amount)
FROM sales_table
WHERE sale_date LIKE '2006-11%'
GROUP BY sale_type;

If there hasn't been a sale of a given type in that month there won't be 
a row in the result set for that type.  I want a row for each type with 
a default of 0 if there haven't been any sales for that type yet that 
month.



What about such a schema design:

CREATE TABLE sale_types (
id  serial  PRIMARY KEY,
nametextNOT NULL DEFAULT 'setup'
);

CREATE UNIQUE INDEX sales_types_typ_idx ON sales_types (typ);

CREATE TABLE sales_table (
typ bigint  REFERENCES sale_types (id),
amount  numeric NOT NULL,
sdate   timestamp without time zone DEFAULT CURRENT_TIMESTAMP
);

SELECT TYP.name, COALESCE(SUM(TBL.amount), 0)
  FROM sale_types AS TYP
       LEFT OUTER JOIN sales_table AS TBL
         ON (TYP.id = TBL.typ AND TBL.sdate >= '2006-11-01' AND TBL.sdate < '2006-12-01')
 GROUP BY TYP.name;

I didn't try the above SQL queries, but I hope you understand what I
meant.
  
Awesome.  I didn't (and couldn't) change the schema, but doing a 
self-outer join on the table did the trick.  Thanks!
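For readers who want to experiment with the outer-join-plus-COALESCE pattern without a Postgres instance handy, here is a minimal sketch using Python's sqlite3 as a stand-in (table and column names follow the thread; the data is made up, and the LIKE filter works here only because sdate is stored as text):

```python
import sqlite3

# In-memory stand-in for the schema discussed above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sale_types (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE sales_table (typ INTEGER REFERENCES sale_types(id),
                              amount NUMERIC NOT NULL,
                              sdate TEXT NOT NULL);
    INSERT INTO sale_types VALUES (1, 'setup'), (2, 'layaway'), (3, 'the_hookup');
    -- November has sales only for 'setup' and 'layaway'.
    INSERT INTO sales_table VALUES (1, 100, '2006-11-03'),
                                   (2,  50, '2006-11-10'),
                                   (3,  75, '2006-10-01');
""")

# The date filter lives in the ON clause so unmatched types survive the
# outer join; COALESCE then turns their NULL sums into 0.
rows = conn.execute("""
    SELECT t.name, COALESCE(SUM(s.amount), 0)
      FROM sale_types t
      LEFT OUTER JOIN sales_table s
        ON t.id = s.typ AND s.sdate LIKE '2006-11%'
     GROUP BY t.name
     ORDER BY t.name
""").fetchall()
print(rows)  # [('layaway', 50), ('setup', 100), ('the_hookup', 0)]
```

Putting the filter in a WHERE clause instead would discard the unmatched types before COALESCE ever saw them, which is exactly the "no dice" behavior from the start of the thread.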


--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [SQL] Finding gaps in scheduled events

2006-12-12 Thread Erik Jones

Richard Huxton wrote:

Marcin Stępnicki wrote:


start | finish | type_id
------+--------+---------
8:30  |  8:45  |1-> type A
9:30  | 10:00  |2-> type B



I have to deal with the existing schema, but if it can be done in a
better way please let me know so I could avoid mistakes in my own 
programs

(although I think it's quite flexible and I like the concept).


The reason you're finding it difficult is that you're asking the 
database for information based on what *isn't* stored in it. That is 
you're asking it for all the gaps in your event data.


Now, if you were doing it by hand you'd sort the events according to 
time and compare finish/start times in order. You can do something 
similar with PG and write a plpgsql function that returns a setof 
(start,finish,length) for gaps.
You wouldn't even need the sort.  In the function, just loop: start at the 
earliest possible event start time and increment by 15 minutes until you've 
hit the last possible start time.  At each time, check whether there is 
already an event scheduled for that time, whether a 30-minute event is 
scheduled to start 15 minutes earlier, or whether an event is already 
scheduled to start at the next time (which would limit an event at the 
current time to 15 minutes).  You could make it "smarter" by observing that 
whenever you get to a 30-minute event you can skip checking the next start 
time.
If you have a lot of events and you need to find gaps quite often it 
might be easier to keep a separate table to track them. Triggers on 
the events table would keep the gaps table up to date. If events can 
be deleted/moved you'll want to consider how to merge adjacent gaps.

Also a good idea.


If you don't like either of those, you'll need to figure out what the 
"next" and "previous" events are for each event in your table. That 
will need to be a sub-query with something like:


SELECT
  a.start,
  a.finish,
  (
SELECT start FROM test_events WHERE start>a.finish ORDER BY start 
LIMIT 1

  ) AS next_start
FROM
  test_events a
ORDER BY start;

Note the subquery is in the SELECT clause and this query-plan will 
probably run over the table twice (via indexes).
Here you're still left mostly in the dark and still need to loop through 
the results checking the gaps between a.finish and next_start.  And, since 
at that point you're working with the full result set, already ordered by 
each event's start time, you don't need the subquery: at each iteration of 
the loop you can take the simple difference between the current row's 
finish and the next row's start to get the gap (with a special case for the 
last scheduled event).
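A minimal sketch of that adjacent-difference idea (the event times are made up, and this assumes events never overlap):

```python
from datetime import datetime, timedelta

def find_gaps(events):
    """Given (start, finish) pairs, return the gaps between consecutive
    events.  Mirrors the approach above: sort once, then diff each finish
    against the next start -- no per-event subquery needed."""
    events = sorted(events)
    gaps = []
    for (_, finish), (next_start, _) in zip(events, events[1:]):
        if next_start > finish:
            gaps.append((finish, next_start, next_start - finish))
    return gaps

day = datetime(2006, 12, 12)
events = [(day.replace(hour=8, minute=30), day.replace(hour=8, minute=45)),
          (day.replace(hour=9, minute=30), day.replace(hour=10, minute=0))]
gaps = find_gaps(events)
print(gaps)  # one 45-minute gap, from 8:45 to 9:30
```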


--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] Finding gaps in scheduled events

2006-12-12 Thread Erik Jones

Alvaro Herrera wrote:

Marcin Stępnicki wrote:

  

Now I need to create a query to find hours at which each of the type can
start. So, if it's event A (which take 15 minutes) it can start at:

8:00 (to 8:15)
8:15 (to 8:30)
( 8:30 to 8:45 is already taken )
8:45 (to 9:00)
9:00 (to 9:15)
9:15 (to 9:30)
( 9:30 to 10:00 (9:30-9:45 and 9:45-10:00) is already taken))
10:00 (to 10:15)



I think something like this should help you:

select my_hour
from test_events right join test_timeline on
((start, finish) overlaps (my_hour, my_hour + 15 * '1 minute'::interval))
where start is null;

With your test data, it shows all the times except for 8:30, 9:30 and
9:45.
  
Nice!  And he can run that query again, flipping the 15 to 30, to get the 
list of available 30-minute gaps.  That's a heck of a lot simpler than the 
stuff I discussed earlier.
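For anyone unsure why the OVERLAPS test excludes 8:30, 9:30 and 9:45 but keeps 8:45, here is a small sketch of the half-open interval logic it is built on (SQL's OVERLAPS has a few extra edge-case rules, e.g. for zero-length periods, that this deliberately ignores):

```python
def overlaps(s1, e1, s2, e2):
    # Half-open [start, end) overlap: touching endpoints do NOT overlap,
    # which is why an 8:45 slot is free next to an 8:30-8:45 event.
    return s1 < e2 and s2 < e1

def minutes(h, m):
    """Express a time of day as minutes since midnight."""
    return 60 * h + m

# A 15-minute slot at 8:30 collides with the 8:30-8:45 event...
print(overlaps(minutes(8, 30), minutes(8, 45), minutes(8, 30), minutes(8, 45)))  # True
# ...but a slot at 8:45 does not: the endpoints merely touch.
print(overlaps(minutes(8, 45), minutes(9, 0), minutes(8, 30), minutes(8, 45)))   # False
```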


--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] join/group/count query.

2006-12-20 Thread Erik Jones

Ragnar wrote:

On mið, 2006-12-20 at 10:12 +, Gary Stainburn wrote:
  
Hi folks.  I have the following query which works for me at the moment.  
However, o_model refers to a table stock_models which has one record for each 
model type.  At the moment, if I add another record to stock_models I have to 
amend the select.  Is it possible to make this automatic by joining the 
stock_models data somehow?


select o_p_id as key, o_p_id as p_id, p_name, count(o_p_id) as total,
   count (case when o_model = 5 then 1 else NULL end) as KA,
   count (case when o_model = 10 then 1 else NULL end) as Focus,
   count (case when o_model = 13 then 1 else NULL end) as C_Max,
   count (case when o_model = 16 then 1 else NULL end) as S_Max,
   count (case when o_model = 20 then 1 else NULL end) as Fiesta,
   count (case when o_model = 25 then 1 else NULL end) as Fusion,
   count (case when o_model = 30 then 1 else NULL end) as Mondeo,
   count (case when o_model = 35 then 1 else NULL end) as Galaxy,
   count (case when o_model = 40 then 1 else NULL end) as Ranger,
   count (case when o_model = 50 then 1 else NULL end) as Connect,
   count (case when o_model = 60 then 1 else NULL end) as Transit,
   count (case when o_model = 70 then 1 else NULL end) as Fiesta_Van
   from order_details 
   where date_trunc('month',CURRENT_DATE) = date_trunc('month',o_ord_date)

   group by o_p_id, p_name;



if I understand correctly, you want one column in your output 
for each row in the table stock_models


you can do this with the crosstabN function in the contrib 
module 'tablefunc', or by making your own procedural language

function.

gnari
  
And, I may be missing something, but I'm having a hard time 
understanding why you have all of those select columns  of  the form:


count (case when o_model = 5 then 1 else NULL end) as KA,

Considering that that can only return 1 or 0, the case statement would do.  Is 
it to avoid putting all of the column names in the group by clause?  That's 
hackish and is as much or more typing.

With regards to what you are actually trying to do, giving us your table 
definitions and what you are trying to achieve would help a lot more than just 
telling us the problem you are having.  The column names in your query are in 
no way descriptive and tell us nothing about your actual table structure.


--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] join/group/count query.

2006-12-20 Thread Erik Jones

Gary Stainburn wrote:

And, I may be missing something, but I'm having a hard time
understanding why you have all of those select columns  of  the form:

count (case when o_model = 5 then 1 else NULL end) as KA,

Considering that that can only return 1 or 0, the case statement would do. 
Is it to avoid putting all of the column names in the group by clause? 
That's hackish and is as much or more typing.


With regards to what you are actually trying to do, giving us your table
definitions and what you are trying to achieve would help a lot more than
just telling us the problem you are having.  The column names in your query
are in no way descriptive and tell us nothing about your actual table
structure.



The order_details view is a join of the orders table to the other tables.

The order table contains the orders. 

Each order has a business partner which is in a separate table. p_id is the 
key, p_name is the name of the partner.


Each order is for a single vehicle, which must be of a specific model.  The 
models are also stored in a separate table; o_model is the attribute in the 
orders table that holds the key to the models table.


The output I'm getting is below, which is what I want. For each partner I get 
a total followed by a breakdown by model the orders for the current month.


key|p_id|p_name  | total | ka | focus | c_max | s_max | fiesta | fusion | mondeo | galaxy | ranger | connect | transit | fiesta_van
---+----+--------+-------+----+-------+-------+-------+--------+--------+--------+--------+--------+---------+---------+-----------
 40|  40|rrr | 2 |0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |0
 30|  30|r Group Plc | 2 |0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |0
 78|  78|r r | 2 |0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |0
 46|  46| )  | 3 |0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 |0
  3|   3|e   | 8 |0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 2 | 4 |0
  9|   9|eee | 3 |0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |0
 12|  12||13 |0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 5 | 6 |0
 10|  10|rr Motor| 7 |0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 5 |0
 34|  34|ff fff  | 3 |0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |0
102| 102| xxx| 1 |0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |0
  
Ok, then, yeah, I read those count statements wrong and the crosstab 
contrib is what you're looking for.
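As a rough illustration of what contrib/tablefunc's crosstab produces, here is a pure-Python sketch of the same pivot (the partner and model names are invented):

```python
from collections import defaultdict

def crosstab(rows, row_key, col_key, value):
    """Pivot long-format rows into one row per row_key value with a column
    per col_key value -- roughly what contrib/tablefunc's crosstab does."""
    cols = sorted({r[col_key] for r in rows})
    table = defaultdict(lambda: dict.fromkeys(cols, 0))
    for r in rows:
        table[r[row_key]][r[col_key]] += r[value]
    return cols, dict(table)

# Hypothetical order counts per partner and model.
orders = [
    {"partner": "rrr", "model": "transit", "n": 1},
    {"partner": "rrr", "model": "connect", "n": 1},
    {"partner": "r Group Plc", "model": "focus", "n": 2},
]
cols, pivoted = crosstab(orders, "partner", "model", "n")
print(cols)            # ['connect', 'focus', 'transit']
print(pivoted["rrr"])  # {'connect': 1, 'focus': 0, 'transit': 1}
```

The key property, matching the SQL version, is that every partner gets a 0 entry for models it has no orders for, rather than a missing column.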


--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Droping indexes

2007-01-16 Thread Erik Jones

Mario Behring wrote:

Hi Alvaro,

Thank you for your advice... I was thinking about doing 
exactly that; I just wasn't sure how to do it, meaning, 
considering the info below, how should I use the CREATE INDEX command 
to create these indexes?

Have you read the documentation?

http://www.postgresql.org/docs/8.2/interactive/sql-createindex.html

--
erik jones <[EMAIL PROTECTED]>
software development
emma(r)


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [SQL] new idea

2007-04-09 Thread Erik Jones


On Apr 9, 2007, at 7:21 AM, Andrew Sullivan wrote:


On Mon, Apr 09, 2007 at 09:42:24AM +0300, sql4-en.narod.ru wrote:
Let me know your opinion about the following way of processing and  
extracting data.
This would be very convenient for delivering XML data into any  
program, for example into a browser.

Does this idea have a future?  What do you think?


What does this do that inheritance doesn't already do?  I don't think
I see anything.


I don't really even see the need for inheritance here.  This is what  
most ORMs do at the application level already.


erik jones <[EMAIL PROTECTED]>
software developer
615-296-0838
emma(r)





Re: [SQL] new idea

2007-04-09 Thread Erik Jones


On Apr 9, 2007, at 9:14 AM, Andrew Sullivan wrote:


On Mon, Apr 09, 2007 at 09:11:57AM -0500, Erik Jones wrote:


I don't really even see the need for inheritance here.  This is what
most ORMs do at the application level already.


Wel, sure, but the poster seemed to think that having a way to
represent this in the database was a good thing.  (I'm not actually
convinced even of that.  The whole point of SQL was to move away from
the hierarchical model, and so grafting a lot of hierarchy back onto
it suggests to me that the OP has picked the wrong technology for the
problem at the outset.)


Your parenthesized comments were what I was getting at. :)

erik jones <[EMAIL PROTECTED]>
software developer
615-296-0838
emma(r)




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] [GENERAL] yet another simple SQL question

2007-06-25 Thread Erik Jones

On Jun 25, 2007, at 12:44 PM, Joshua wrote:


Ok,

You guys must be getting sick of these newbie questions, but I  
can't resist since I am learning a lot from these email lists and  
getting results quick! Thanks to everyone for their contributions.


Here is my questions

I have a column that looks like this

firstname
-
John B
Mark A
Jennifer D

Basically I have the first name followed by a middle initial. Is  
there a quick command I can run to strip the middle initial?  
Basically, I just need to delete the middle initial so the column  
would then look like the following:


firstname
---
John
Mark
Jennifer

Thanks again for all of your help today. Everything you guys have  
been sending has produced successful results.




Try something along the lines of:


SELECT substring(firstname from '^(\w*)\W') from table_name;
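The same pattern can be checked outside the database; this Python sketch mirrors the substring() expression above (note that a name with no middle initial passes through unchanged only because the regex then fails to match):

```python
import re

def strip_middle_initial(fullname):
    """Keep the leading run of word characters, the same idea as the
    substring(firstname from '^(\\w*)\\W') suggestion above."""
    m = re.match(r"^(\w*)\W", fullname)
    return m.group(1) if m else fullname

names = ["John B", "Mark A", "Jennifer D"]
print([strip_middle_initial(n) for n in names])  # ['John', 'Mark', 'Jennifer']
```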


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Using escape strings in an insert statement.

2007-07-03 Thread Erik Jones


On Jul 2, 2007, at 6:11 PM, Michael Glaesemann wrote:



On Jul 2, 2007, at 17:45 , Paul Lambert wrote:


tester=# insert into testing (test_text) values ('abcE'\\'123');


This should be
INSERT INTO testing (test_text) values (E'abc\123');


No, that will interpret \123 as an octal escape rather than include a  
literal backslash.  He wants the backslash included in the string  
(that's why he had two), so it should be:


INSERT INTO testing (test_text) values (E'abc\\123');



The help itself (ch 4.1.2.1) tells me to use double backslash  
"Thus, to include a backslash character, write two backslashes (\ 
\). "


Note that the String Constants section (4.1.2.1) says put the E  
"before the opening single quote".


http://www.postgresql.org/docs/8.2/interactive/sql-syntax- 
lexical.html#SQL-SYNTAX-CONSTANTS


An escape string constant is specified by writing the letter E  
(upper or lower case) just before the opening single quote, e.g.  
E'foo'.


Also be sure to read the Caution section.

Using \ as an escape character is the old non-standard PostgreSQL  
escape syntax that the WARNING (above) is, uh, warning you about.  
With standard_conforming_strings on (i.e., follow the SQL spec),  
the backslash is just a backslash character.


Which one is the correct syntax and how can I make it not return  
anything other than a successful insert?


Depends on the setting of standard_conforming_strings.


With standard_conforming_strings turned on, it would just need to be:

INSERT INTO test (test_text) values ('abc\123');



Michael Glaesemann
grzm seespotcode net



---(end of  
broadcast)---

TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [SQL] How do I connect postgres table structures and view structures to an existing svn repository?

2007-08-01 Thread Erik Jones


On Aug 1, 2007, at 1:17 PM, D'Arcy J.M. Cain wrote:


On Wed, 1 Aug 2007 11:46:11 -0500
"Scott Marlowe" <[EMAIL PROTECTED]> wrote:

On 8/1/07, John Mitchell <[EMAIL PROTECTED]> wrote:
I am trying to store schema definitions in version-control which  
I can do by
saving the definition and then  importing  into svn, but I  would  
like  it
to be automatic , so that when an update occurs to a table or  
view within
postgres then that table or view is flagged within svn.  This  
would be
similar to what I currently do with source code that I have for a  
web app

within  eclipse.


My point being that you need to drive the changes from the svn side,
not the database side.  I've seen no reliable way to do it otherwise,
but would love to be pleasantly surprised.


You are right.  It is always difficult to add structured control to
unstructured development methods.  The method that I suggested was the
best I could think of to handle a database that is changed ad hoc with
no design work beforehand.  If you work from the design side first you
can create schema documents that feed your database creation and use
svn to save those versions and document exactly why things changed.


Another good idea is to include in these delta (or migration) scripts  
the SQL necessary to roll back the change.  Then it's not too hard to  
write a tool that you can give db connection params and a version #  
to sync to.
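A toy sketch of such a tool's core: given hypothetical apply/rollback script pairs keyed by version, compute the ordered SQL needed to move a schema from one version to another (every name and statement here is invented for illustration):

```python
# Hypothetical migrations: version -> (apply_sql, rollback_sql).
MIGRATIONS = {
    1: ("CREATE TABLE t (id INTEGER)", "DROP TABLE t"),
    2: ("ALTER TABLE t ADD COLUMN name TEXT", "ALTER TABLE t DROP COLUMN name"),
}

def plan(current, target):
    """Return the ordered SQL statements to sync from one version to another."""
    if target >= current:
        # Apply scripts for each version above the current one, ascending.
        return [MIGRATIONS[v][0] for v in range(current + 1, target + 1)]
    # Rollback scripts for each version above the target, descending.
    return [MIGRATIONS[v][1] for v in range(current, target, -1)]

print(plan(0, 2))  # both apply scripts, in order
print(plan(2, 1))  # rollback of version 2 only
```

A real tool would also record the database's current version in a table and run each statement in a transaction, but the plan computation above is the essential part.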


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] Foreign Key inter databases

2007-08-03 Thread Erik Jones


On Aug 3, 2007, at 2:24 AM, Dani Castaños wrote:


Josh Tolley escribió:

On 8/2/07, Dani Castaños <[EMAIL PROTECTED]> wrote:


Hi all!

Is it possible to have a foreign key where referenced table is in
another database?

Thank you in advance

---(end of  
broadcast)---

TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so  
that your

   message can get through to the mailing list cleanly




Unless you use things like dblink to make two separate databases  
think

they're one database, you really can't do anything where one database
depends on another. Your best bet, if you can have both data sets on
the same machine, is to stick them in two separate schemas, and then
you can have foreign keys on a table in one schema referencing a  
table

in another schema. That's (in part) what schemas are for in the first
place -- to separate logically distinct data sets while allowing
accesses between the two when necessary.

If for whatever reason having all the data on one machine isn't
possible, you can try using dbi-link or dblink to create links  
between

the two databases and do foreign keys that way, but I've never used
either, so that might not work/make sense/be possible, etc.

-Josh



Thanks Josh!
I use dblink for queries, but I'm pretty sure you can't use it in  
constraint definitions.
My situation is that I have one main database and many customer  
databases.  In the main database there is a languages table, and the  
customer databases must use the language_id.  I don't want to create a  
languages table in each customer db; that's why I was trying to  
create a foreign key from a customer db to the main db.


Not directly as pg constraints, no.  But, what you can do is create a  
trigger that simulates the same effect.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] Increment a sequence by more than one

2007-08-03 Thread Erik Jones

On Aug 3, 2007, at 11:50 AM, Steve Midgley wrote:


Hi,

I'm writing an import app in a third party language. It's going to  
use "copy to" to move data from STDIN to a postgres (8.2) table.  
There are some complexities though: it's going to copy the records  
to a "versioned" table first, and then at a later time the records  
will be copied by a different procedure to the "live" table. The  
live table and versioned table are identical in terms of their  
field definitions. But there is no sequence associated with the  
versioned table (whose primary key is "id" plus "import_group_id",  
whereas the live table's pk is just "id"). So all versioned table  
entries must already "know" what their id would be in the live  
table. (This makes sense for other business process we have, but  
it's a bit of a problem in this instance).


My problem: I'd like to be able to grab a block of id's from the  
live table's pk sequence. So let's say my importer has 5,000 new  
rows to import and the current max pk in the live table is 540,203.  
I'd like to be able to increment the primary key sequence in such a  
way that I get a block of ids all to myself and the sequence is  
reset to 545,203 with a guarantee that all the id's between 540203  
and 545203 are unused.


I'm guessing this can be done with a stored procedure, but if  
possible I'd like to be able to run this command from my third  
party app without calling a stored procedure (we try not to use  
stored procedures here b/c we code entirely in this third party  
language - if we had to, it's possible we could install a stored  
procedure though).


But since I've seen so much magic on display from people on this  
list, I'm going to ask if it's possible to do this solely from PG  
SQL sent from a third party language? The main tricky bit seems to  
be ensuring that everything is locked so two such increment calls  
at the same time don't yield overlapping blocks of ids. Is there a  
way to "lock" the sequence generator for the duration of a  
"nextval" and "setval" call? Since pk sequence functions like  
nextval cannot be rolled back, I'm guessing that "begin/end" won't  
do the job?


I.e:

-- need "magic lock" statement on pk sequence here
nextval
-- returns 540203
setval(545203)
-- now sequence is set to where I want it and I "own" 5000 id's
-- release magic lock here

My fallback is to just have a routine that calls "nextval" a bunch  
of times and stores all the id's it gets - they may or may not be  
sequential but they'll be unique. This is going to be a really slow  
way to get a large number of id's of course and just seems plain  
wrongheaded in many ways.


Any insights? All help is appreciated and input on a better way to  
solve the problem completely is of course welcome as well.


Is there actually a requirement that the block of 5000 values not  
have gaps?  If not, why not make the versioned table's id column  
default to nextval from the same sequence?  Then when the data is  
copied over to the live table, as long as you supply the id it won't  
generate a new id and you'll maintain your row-to-row relationships.  
If you do require that the block not have gaps, check out the article  
on how to do this here: http://www.varlena.com/varlena/GeneralBits/130.php
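To make the locking requirement concrete, here is a toy model of block reservation in Python.  The lock plays the role of whatever serializes the nextval/setval pair, and the numbers echo the example above; this is not the Postgres sequence API, just the shape of the idea:

```python
import threading

class BlockAllocator:
    """Toy model of reserving a contiguous block of ids.  The lock
    serializes readers the way nextval/setval would need to be serialized
    so two callers can never be handed overlapping blocks."""
    def __init__(self, start=540203):
        self._next = start
        self._lock = threading.Lock()

    def reserve(self, n):
        with self._lock:
            first = self._next
            self._next += n  # like setval(seq, first + n)
            return first, first + n - 1  # inclusive block bounds

seq = BlockAllocator()
print(seq.reserve(5000))  # (540203, 545202)
print(seq.reserve(10))    # (545203, 545212)
```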


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [SQL] Block size with pg_dump?

2007-08-26 Thread Erik Jones

On Aug 26, 2007, at 8:09 PM, Jean-David Beyer wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Bruce Momjian wrote:

Jean-David Beyer wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

When I make a backup of a database, I put the output file  
directly on

magnetic tape; i.e., my command looks like this:

pg_dump --file=/dev/st0 

This way I do not have to worry if the total backup exceeds the  
size of a
file system, and it saves me the trouble of copying it to the  
tape as a
separate step. My current tapes will hold 20 GBytes raw or  
40GBytes if I
enable hardware compression (assuming 2:1 compression happens).  
Now it says
in the documentation that if I use format c it will compress the  
data in

software, so I doubt the hardware compression will do much.

I do not know what blocksize pg_dump uses, or if it insists on a  
particular

blocksize on input.

Now my tape drive will work with any blocksize, but prefers 65536- 
byte
blocks. I do not see any options for this in pg_dump, but I could  
pipe the

output of pg_dump through dd I suppose to make any blocksize I want.

On the way back, likewise I could pipe the tape through dd before  
giving it

to pg_restore.

Does pg_dump care what blocksize it gets? If so, what is it?


I assume you could pipe pg_dump into dd and specify the block size in
dd.


Of course on the way out I can do that.

The main question is, If I present pg_restore with a 65536-byte  
blocksize

and it is expecting, e.g., 1024-bytes, will the rest of each block get
skipped? I.e., do I have to use dd on the way back too? And if so,  
what

should the blocksize be?


Postgres (by default) uses 8K blocks.

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] Database normalization

2007-08-28 Thread Erik Jones

On Aug 28, 2007, at 6:47 AM, Sebastian Ritter wrote:


Hello,

I have a fairly basic question about database design where im not  
sure which approach is considered correct.


I have two different entities: Clients and Services. Both allow  
users to add progressive updates about the two entities.


The update/message format is exactly the same for both. Should I  
make two different tables:


  client_updates and service_updates
  or
  one table with extra columns : is_client, client_id, service_id,  
where either client_id or service_id would be null depending on the  
is_client boolean?


The major problem is simply relating the foreign key in the updates  
table back to the correct entity, client or service.


Are client_id and service_id artificial ids?  If so, is it an actual  
requirement that they have distinct id sequences?  I.e., is it  
necessary that there can be both client_id=1 and service_id=1?  If  
not, you can use one table, say Entities: lose client_id, service_id,  
and is_client and replace them with entity_id and entity_type.  Then  
the foreign key in your updates table just needs to reference  
entity_id, or possibly (entity_id, entity_type) if you want to be  
really strict about things.  If you want to make querying the table  
simple for either case, create Clients and Services views on the  
table.  This also gives you the ability to add other entity types  
for which you may want to track the same kind of updates.
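A minimal sketch of that single-table-plus-views layout, using sqlite3 as a stand-in for Postgres (all names and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One table for both kinds of entity, per the suggestion above.
    CREATE TABLE entities (
        entity_id   INTEGER PRIMARY KEY,
        entity_type TEXT NOT NULL CHECK (entity_type IN ('client', 'service')),
        name        TEXT NOT NULL
    );
    -- Updates for clients and services share one foreign key target.
    CREATE TABLE updates (
        entity_id INTEGER NOT NULL REFERENCES entities(entity_id),
        message   TEXT NOT NULL
    );
    -- Views keep querying as simple as two separate tables would be.
    CREATE VIEW clients  AS SELECT * FROM entities WHERE entity_type = 'client';
    CREATE VIEW services AS SELECT * FROM entities WHERE entity_type = 'service';

    INSERT INTO entities VALUES (1, 'client', 'Acme'), (2, 'service', 'Hosting');
    INSERT INTO updates VALUES (1, 'signed contract'), (2, 'migrated server');
""")
clients = conn.execute("SELECT name FROM clients").fetchall()
print(clients)  # [('Acme',)]
```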


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [SQL] postgresql HEAD build failure

2007-09-10 Thread Erik Jones


On Sep 9, 2007, at 9:51 PM, Tom Lane wrote:


John Summerfield <[EMAIL PROTECTED]> writes:

-d fixed the missing directories
and -P prunes obsolete ones.



Obviously I needed to read the instructions more closely:-(


FWIW, I've used the following ~/.cvsrc for many years:

cvs -z3
update -d -P
checkout -P

The usefulness of -z3 depends on the speed of your connection, but
the other two lines fix obviously stupid defaults ...

regards, tom lane


Heh, most of the people I know who use Subversion made the switch  
just because of the need for those two lines.  (Of course, repository-wide  
versioning is another common reason.)


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [SQL] Many databases

2007-09-23 Thread Erik Jones


On Sep 21, 2007, at 6:16 PM, Stefan Arentz wrote:


Well I'm just toying with an idea. The problem I'm facing is that I
would like clients to only see the tuples that they have created and
own. I guess I'll just skip direct sql access to the db and write some
API on top of it that manages the data. Not a big deal but it
complicates things :-)



You could do the same thing with views on those tables.  One problem  
with multiple databases is keeping global meta data for all of your  
customers together in a simple way.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Many databases

2007-09-24 Thread Erik Jones


On Sep 23, 2007, at 11:56 PM, Erik Jones wrote:



On Sep 21, 2007, at 6:16 PM, Stefan Arentz wrote:


Well I'm just toying with an idea. The problem I'm facing is that I
would like clients to only see the tuples that they have created and
own. I guess I'll just skip direct sql access to the db and write  
some

API on top of it that manages the data. Not a big deal but it
complicates things :-)



You could do the same thing with views on those tables.  One  
problem with multiple databases is keeping global meta data for all  
of your customers together in a simple way.


Oh, yeah, I forgot to mention separate schemas as well.

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [SQL] Finding broken regex'es

2007-10-02 Thread Erik Jones

On Oct 2, 2007, at 3:23 PM, Enrico Weigelt wrote:



Hi folks,


I'm looking for some way to find broken regex'es in some column
to kick them off. For now I'm regularily fetching all regexes
from an PHP script, try an preg_match() and so find the broken
ones to later remove them.

Is there any way to do this directly within the db ?


IIRC, if they're Perl-compatible, which it would seem they are from the php  
function you're using, no.  Postgres supports POSIX regexes but not  
(right now, anyway) Perl regexes.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] Finding broken regex'es

2007-10-02 Thread Erik Jones


On Oct 2, 2007, at 10:48 PM, Tom Lane wrote:


Erik Jones <[EMAIL PROTECTED]> writes:

IIRC, if they're Perl-compatible, which it would seem they are from the php
function you're using, no.  Postgres supports POSIX regexes but not
(right now, anyway) Perl regexes.


Actually what we support are Tcl (Henry Spencer) regexes, which are
about as powerful as Perl's but have a few minor incompatibilities.
Tcl and Perl regexes are both supersets of the POSIX requirements.



Excellent point of information.  I was really just trying to focus on  
why the Postgres regex engine isn't equivalent to the matching  
functionality of php's preg_* functions.  One example would be  
look-behind assertions.  Btw, am I crazy, or do I remember someone  
mentioning that support for Perl regexes might be added to  
Postgres in the future?


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] pg_dump question

2007-10-05 Thread Erik Jones

On Oct 5, 2007, at 11:58 AM, Judith wrote:


 Hello everybody!!  I have a question with respect to pg_dump...

   I need to back up the db structure; I suppose that I do that with
pg_dump -s, but I want to restore just some tables along with
the db catalogues. Can I do this? I mean, restore all the db
structure without data and afterwards restore the data of the catalogue
tables??


Don't worry about the catalog tables.  If all you want is the schema
dumped and restored then, yes, use the -s flag.  The data in the
catalog tables is built as the schema is rebuilt during the restore,
i.e. Postgres takes care of those for you.




Re: [SQL] Accessing field of OLD in trigger

2007-10-12 Thread Erik Jones


On Oct 12, 2007, at 8:18 AM, Josh Trutwin wrote:


On Fri, 12 Oct 2007 12:00:55 +0200 (CEST)
Daniel Drotos <[EMAIL PROTECTED]> wrote:


Hi,


I'm working on a row level plpgsql trigger running after delete,
using a 8.0.3 server. It gets a parameter which is a field name of
the OLD record. How can that field be accessed?

I'd like to do something like:

for recvar in 'select OLD.'||quote_ident(TG_ARGV[0])...


I THINK you are out of luck here.  I hear it's possible to do in
one of the other PL languages, say PL/Tcl, though I can't seem to find
an example


Right, "dynamic variables" aren't available in plpgsql.  Check out  
any of the other pl languages available if you can.




Re: [PERFORM] [SQL] two queryes in a single tablescan

2007-10-20 Thread Erik Jones

On Oct 20, 2007, at 12:19 PM, Andreas Kretschmer wrote:


Markus Schaber <[EMAIL PROTECTED]> schrieb:

is there any way to get both results in a single query,
eventually through stored procedure?
The retrieved [count(*),A] ; [count(*),B)] data couldnt fit
on a single table, of course.

The main goal would be to get multiple results while scanning the
table[s] once only
thus getting results in a faster  way.


PostgreSQL 8.3 contains great improvements in this area, you can  
simply

start the selects from concurrent connections, and the backend will
synchronize the scans.


works this right across different transactions? I mean, for  
instance, TX

a insert rows and TX b insert other rows and both clients (with
different transactions) starts a seq-scan?


If you are in read-committed mode and both backends start their scans  
after the other has made its insert, then yes.  Note Markus's point  
that both queries must be initiated by concurrent connections.  Since  
Postgres doesn't have any kind of shared transaction mechanism across
connections, this is inherent.




Re: [SQL] get only rows for latest version of contents

2007-10-25 Thread Erik Jones


On Oct 25, 2007, at 1:50 AM, Sébastien Meudec wrote:


Thx a lot Chris.

In fact the correct SQL was (rewritten with an inner join because it is
required by my API):

select b1.*
from business b1
inner join (select idnode,max(version_no) as version_no from business
group by idnode) as b2
on b1.idnode = b2.idnode and
  (b1.version_no = b2.version_no or b2.version_no is null)

Regards,
Seb.



-Message d'origine-
De : [EMAIL PROTECTED] [mailto:pgsql-sql-
[EMAIL PROTECTED] De la part de Christian Kindler
Envoyé : mercredi 24 octobre 2007 11:55
À : Sébastien Meudec
Cc : pgsql-sql@postgresql.org
Objet : Re: [SQL] get only rows for latest version of contents

Hi!

not quick but it works

select * from business b1
where b1.version_no = (SELECT max(version_no) FROM business b2
                        where b2.idnode = b1.idnode)

If you want to make this query faster do a regular join:

select b1.*
  from business b1,
       (SELECT idnode, max(version_no) AS version_no
          FROM business
         GROUP BY idnode) as b2
 where b1.idnode = b2.idnode
   and b1.version_no = b2.version_no

Regards Chris

PS written without running any SQL, maybe there are some syntax issues,
but I am sure you will figure these out :-)



On Wed, October 24, 2007 11:29 am, Sébastien Meudec wrote:

Hi everybody.

I have a table like that (i simplified it):
CREATE TABLE business (
  idnode integer not null,
  version_no integer,
  c1 text,
  c2 text,
  c3 text
);
With a unique index on (idnode, version_no).

This table records many versions of content identified by idnode,
where the texts may differ.
So I can have:
Idnode | version_no | c1| c2| c3
111| 2  | foo1  | foo2  | foo3
111| 1  | fee1  | foo2  | foo3
111| null   | fee1  | fee2  | fee3
222| null   | too1  | too2  | too3
333| 1  | xoo1  | xoo2  | xoo3
333| null   | yoo1  | yoo2  | yee3

I want to select all columns but only for last (greatest) version of

each

content. So I want a result like:
Idnode | version_no | c1| c2| c3
111| 2  | foo1  | foo2  | foo3
222| null   | too1  | too2  | too3
333| 1  | xoo1  | xoo2  | xoo3

If i do:
SELECT idnode, max(version_no) FROM business
GROUP BY idnode ORDER BY idnode;

I get effectively only last version:
Idnode | version_no
111| 2
222| null
333| 1

But as soon as I want to get the texts, I don't know how to build
the SQL.
In each SQL I tested I've been forced to put the text columns in a
group by, since I used an aggregate for version_no:
SELECT idnode, max(version_no), c1, c2, c3 FROM BUSINESS
GROUP BY idnode, c1, c2, c3 ORDER BY idnode;

But with that SQL, because of the group by and the different values
in the text columns, I get:
Idnode | version_no | c1| c2| c3
111| 2  | foo1  | foo2  | foo3
111| 1  | fee1  | foo2  | foo3
111| null   | fee1  | fee2  | fee3
222| null   | too1  | too2  | too3
333| 1  | xoo1  | xoo2  | xoo3
333| null   | yoo1  | yoo2  | yee3

As we can't use an aggregate in a join nor in a where, I can't get
what I want.

Anybody could help me to build proper SQL ?

Thx for your answers.
Sébastien.


Here's another little trick that can come in handy for this:

SELECT DISTINCT ON (Idnode) Idnode, version_no, c1, c2, c3
FROM business
ORDER BY Idnode, version_no DESC;



Re: [SQL] JOINing based on whether an IP address is contained within a CIDR range?

2007-10-25 Thread Erik Jones


On Oct 25, 2007, at 1:22 PM, Jamie Tufnell wrote:


Hi,

I am storing a log of HTTP requests in a database table (including  
IP address):


http_log: id(PK), path, time, ip

I have another table that contains CIDR ranges and names for them:

network_names: id(PK), cidr, name

Some example data for both tables:

network_names:
1, 192.168.0.0/24, 'Engineering'
2, 192.168.1.0/24, 'Media'
3, 192.168.2.0/24, 'Engineering'
4, 192.168.3.0/24, 'Accounting'
5, 192.168.4.0/24, 'Engineering'
6, 10.0.0.0/8, 'Engineering'

http_log:
1, '/index.html', 11001, 192.168.0.47/32
2, '/index.html', 11023, 200.1.2.3/32
3, '/index.html', 11059, 1.2.3.4/32
4, '/index.html', 11232, 192.168.2.1/32
5, '/index.html', 113919102, 192.168.1.39/32
6, '/index.html', 129101293, 10.2.2.4/32
7, '/index.html', 132828282, 192.168.4.2/32

Now, in trying to produce a report on this data, I've come up  
against an interesting (to me at least!) problem..


I basically want the same output as in http_log, but substituting  
the IP with the network name where available, i.e:


1, '/index.html', 11001, Engineering
2, '/index.html', 11023, 200.1.2.3/32
3, '/index.html', 11059, 1.2.3.4/32
4, '/index.html', 11232, Engineering
5, '/index.html', 113919102, Media
6, '/index.html', 129101293, Engineering
7, '/index.html', 132828282, Engineering

I'm wondering what the best way of doing this is (considering that  
http_log could have >10 rows)  Is it possible to somehow JOIN  
using the <<= and >>= network operators?  Or would I have to  
iterate the network_names table manually with LOOP (or something)  
on every row of the http_log?


If anyone can share some advice, that would be great!


Check out: http://www.postgresql.org/docs/8.2/interactive/functions- 
net.html
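A sketch of how that join could look with the network containment operator, assuming http_log.ip is inet and network_names.cidr is cidr (column names taken from the example above; if ranges overlap, you'd get one row per matching network):

```sql
SELECT l.id, l.path, l.time,
       COALESCE(n.name, host(l.ip)) AS source  -- fall back to the bare IP
FROM http_log l
LEFT JOIN network_names n ON l.ip << n.cidr;   -- "is contained within"
```

The LEFT JOIN keeps log rows whose IP matches no named network, and host() strips the /32 suffix for display.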




Re: [SQL] design of tables for sparse data

2007-11-13 Thread Erik Jones


On Nov 12, 2007, at 8:10 PM, Andreas wrote:


Fernando Hevia schrieb:

--- Andreas Wrote: ---
...

MY QUESTIONS:



Your questions have a strong "home-work" look.


Yes but I didn't want to bother everyone with my project's details.
It's more like a CRM.
Up until now I just tried to manage somehow with the SQL basics, and
now I'd like to get better.
One interesting thing is crosstabs, because I could use them in the
reporting module.


I used this schoolbook scenario because it's such an easy  
example.   ;)



1)   How would I SELECT a report that looks like the first  
version of

the pupil table out of the 3 table design?
There must be a nontrivial SELECT statement that combines all 3  
tables.




You should check out the JOIN clause in select statements. Simple  
example:


Select t1.col1, t2.col1, t2.col2
from t1 inner join t2 on (t1.col1 = t2.col1)



A simple JOIN won't do the trick.
That would give me something like:
(42, Frank Miller,  Maths)
(42, Frank Miller,  English)
(42, Frank Miller,  Sports)
(43, Suzy Smith,  Maths)
(43, Suzy Smith,  History)

But I want it turned around and a bit interpreted like:

Column heads = (ID, Name, Maths, English, Sports, History)
(42, Frank Miller, yes, yes, yes, no )
(43, Suzy Smith, yes, no, no, yes)


You should look into the crosstab contrib package.
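As a rough sketch of what that could look like once the tablefunc contrib module is loaded -- the table and column names here are made up to match the example, and the output column list must line up with the category query's ordering:

```sql
SELECT *
FROM crosstab(
    'SELECT pupil_id, subject, ''yes'' FROM pupil_subjects ORDER BY 1,2',
    'SELECT DISTINCT subject FROM pupil_subjects ORDER BY 1'
) AS ct(pupil_id integer, english text, history text, maths text, sports text);
```

Missing subject/pupil combinations come back as NULL, which you could wrap in COALESCE(..., 'no') for the yes/no report.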



Re: [SQL] NULLIF problem

2007-11-28 Thread Erik Jones


On Nov 27, 2007, at 8:47 PM, Michael Glaesemann wrote:



On Nov 27, 2007, at 21:04 , Gera Mel Handumon wrote:


I encounter an error if I use NULLIF with timestamp with time zone,
e.g. dbtime = nullif(mytime, '')

I want to null the value of field DBTIME if the variable mytime = ''

DBTIME ="timestamp with time zone" datatype

error: column DBTIME is of type timestamp with time zone but  
expression is of type text.


I believe the reason is that '' is not a valid timestamp value:  
think of it this way:


IF mytime = '' THEN
  mytime := NULL;
END IF;

The first thing it needs to do is compare the mytime value with ''.  
As '' is not a valid timestamp value, it may be casting mytime to  
text. You'll run into problems if you're assigning a text value to  
a timestamp field (which happens after the initial comparison--and  
the cast--are done.)


I think you may need to handle this in your middleware, or handle
the IF THEN explicitly in a function. Maybe CASE would work:


CASE WHEN mytime = '' THEN NULL
 ELSE CAST(mytime AS TIMESTAMP)
END



Why not just:

UPDATE table
SET mytime=NULL
WHERE mytime='';



Re: [SQL] execute system command from storage procedure

2007-12-06 Thread Erik Jones


On Dec 6, 2007, at 10:32 AM, Sabin Coanda wrote:


Hi there,

Is it possible to execute a system command from a function ? (e.g.  
bash )


If you use one of the untrusted procedural languages (e.g. plperlu,  
plpythonu, ...) you can.
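For example, with plperlu installed (only superusers can create functions in untrusted languages, and the command runs with the server's OS privileges, so treat this as a sketch to be used with care):

```sql
-- Hypothetical helper: runs a shell command and returns its stdout.
CREATE OR REPLACE FUNCTION run_command(text) RETURNS text AS $$
    my ($cmd) = @_;
    return `$cmd`;   -- Perl backticks execute the command and capture output
$$ LANGUAGE plperlu;

-- e.g.  SELECT run_command('ls /tmp');
```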




[SQL] Rule rewrite to possible union?

2007-12-06 Thread Erik Jones

Here's what I'd like to happen, but I'm not seeing how it can be done.

Say we have this simple table:

CREATE TABLE foo (
id integer,
foo varchar
);

and then many tables along these lines:

CREATE TABLE ud1_foo (LIKE foo);
CREATE TABLE ud2_foo (LIKE foo);

What I'd like is to do is select against foo and if there is an id in  
the where clause equal to 1 or 2 add a union with the appropriate  
table.  I could easily do this at the application level, but this is  
for a migration (the ud tables are going away) and I'd like to  
minimize the number of transient application code changes wherever  
possible.  Is there any way I can make this happen?




Re: [SQL] Foreign Key for multi PK or design question

2007-12-11 Thread Erik Jones


On Dec 11, 2007, at 12:20 PM, PostgreSQL Admin wrote:


I have a table in which people will have a number of questions to
answer.  I want those pks to be placed in my user table.  So if a user
answers three questions I want those 3 pks in the user table (fk).
What should I be doing?


You're going to have to give a more concrete example of what it is  
you're trying to do, i.e what those questions are, table structures,  
etc.




Re: [SQL] Function result using execute

2007-12-11 Thread Erik Jones


On Dec 11, 2007, at 11:15 PM, Paul Lambert wrote:

I have a function which uses execute to populate the value of a  
variable based on a defined select construct.


The relevant part of the code looks like this:
   EXECUTE curr_query INTO curr_amount;
   RAISE NOTICE '%',curr_amount;
   IF NOT FOUND THEN
  curr_amount=0;
   END IF;
   RAISE NOTICE '%',curr_amount;

I've added the if found to trap if nothing is returned by the  
execute so that the value gets set to a default 0 rather than null.


When I call the function, the first raise notice gives me a value  
that is correct based on the select it would be performing, but the  
second raise notice gives me a 0, which suggests to me that  
although the execute has populated the curr_amount field with  
something, the IF NOT FOUND is always firing.


Am I misunderstanding what the FOUND variable can be used for -  
i.e. is it not compatible with/not set by the EXECUTE command and  
should therefore I just be using a test of IF curr_amount IS NOT NULL?


If the result of your execute doesn't assign any value(s) to  
curr_amount it sets it to NULL.  With that in mind,


IF curr_amount IS NULL THEN
curr_amount := 0;
END IF;

should do.



Re: [SQL] Trigger definition . . . puzzled

2007-12-12 Thread Erik Jones


On Dec 12, 2007, at 3:38 PM, Rolf A. de By wrote:


Greetings list,

Running pg 8.2.3 on a Windows machine, I have become blind in a
trigger definition puzzle, so I hope that somebody may help me
understand where I goofed.


I have a base table i_s that has three tables that inherit from it,  
one of them being i_s_nowhere.  The base table should be left  
empty, but I want it to be the prime port of call for data  
changes.  Hence the following set-up.  Let me try to be brief, in  
the hope of not leaving out relevant detail.


Base table def is as follows:

CREATE TABLE i_s
(
sidx integer NOT NULL, -- The s identifier
gid integer NOT NULL, -- The i identifier
status character(1),
confirmation character(1),
CONSTRAINT pk_is PRIMARY KEY (sidx, gid)
)
WITH (OIDS=FALSE);

And the trigger definition is here:

CREATE TRIGGER aw_archival_is2
BEFORE UPDATE OR INSERT OR DELETE
ON i_s
FOR EACH ROW
EXECUTE PROCEDURE aw_archive_test();

And the trigger function:

CREATE OR REPLACE FUNCTION aw_archive_test()
RETURNS "trigger" AS
$BODY$
BEGIN
RAISE WARNING 'Starting isa trigger for %', TG_OP;
IF (TG_OP = 'UPDATE') THEN
  RAISE WARNING 'Calling insert_isa with update';
ELSIF (TG_OP = 'INSERT') THEN
   RAISE WARNING 'Calling insert_isa with insert';
ELSIF (TG_OP = 'DELETE') THEN
  RAISE WARNING 'Calling insert_isa with delete';
END IF;
RETURN NULL;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

The trigger is enabled.

Yes, this does intentionally nothing.  The real code will obviously  
take care of data change in proper subtables.  Well, the trigger  
should do nothing now . . .  What I cannot get round to  
understanding is that an insert attempt will nicely give me two  
warnings, and will not insert, as expected with this code:


WARNING:  Starting isa trigger for INSERT
WARNING:  Calling insert_isa with insert

Query returned successfully: 0 rows affected, 31 ms execution time.

But an attempt to update actually magically goes to the proper  
subtable and performs the update:


Query returned successfully: 1 rows affected, 16 ms execution time.

Where did I deserve this?? ;-)

In attempts to solve this I did mess around with trigger and  
trigger function definitions a bit.  Could there be funny traces of  
this?  What is the best way to analyse this behavior?  I am testing  
from a pgAdmin 1.8 setup.


The function you've shown won't do anything because BEFORE row  
triggers that return NULL don't do anything (for that row).  If you  
want the operation to continue without any modification then just  
return NEW.
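A pass-through version of the function's tail might look like this (returning OLD for DELETEs, since NEW is not defined there):

```sql
IF (TG_OP = 'DELETE') THEN
    RETURN OLD;   -- lets the DELETE proceed unchanged
END IF;
RETURN NEW;       -- lets INSERT/UPDATE proceed unchanged
```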




Re: [SQL] Describe Table

2007-12-17 Thread Erik Jones


On Dec 17, 2007, at 10:56 AM, <[EMAIL PROTECTED]> wrote:



I've reviewed much of the documentation and the forums,
but seem unable to find an easy way to obtain the same
thing as 'psql \d table' through SQL. I know I can create it
by collecting info on the table, but it seems there must be an
easier way. I want to produce standard SQL dump
syntax.


If you start psql with the -E flag, it will display all sql generated  
by internal commands such as those generated by \d commands.




Re: [SQL] System catalog future changes

2007-12-18 Thread Erik Jones

On Dec 18, 2007, at 10:53 AM, Steve Midgley wrote:


Hello again,

Reading a previous recent post and answers called "Describe Table"  
got me thinking about a little piece of SQL I use in an application  
to get a list of all the tables for a specific namespace:


select pg_class.relname as table_name
from pg_class
 join pg_namespace on pg_namespace.oid = relnamespace
where pg_class.relkind = 'r' and pg_namespace.nspname = 'public'
order by pg_class.relname

I have assumed that this sort of query would be relatively  
"durable" - that future system catalog changes are unlikely to make  
this query stop working? Does anyone have any knowledge of how  
"fixed" the columns and values of this query are (i.e. are there a  
lot of internal and external dependencies that make future Pg  
versions unlikely to break the above code)?


Any other input on the above SQL - should I be doing this in  
another way?


Thanks for any thoughts or advice,


If all you're looking for is regular tables, then I'd use the
pg_tables view.
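e.g., something along these lines, which should stay stable across versions since pg_tables is a documented system view:

```sql
SELECT tablename
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY tablename;
```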




Re: [SQL] Advice for generalizing trigger functions

2007-12-27 Thread Erik Jones


On Dec 25, 2007, at 6:20 PM, Richard Broersma Jr wrote:

I've created quite a few functions that log modifications to  
various history tables. (the history table has the same name as the  
base table but is prefixed by the 'History.' schema.) The only  
difference between functions I can find is the table name.


Is there any way to generalize these myriad of functions into one?


Below is a sample of a typical logging trigger function.

Regards,
Richard Broersma Jr.


CREATE OR REPLACE FUNCTION "project"."log_managers_ops"()
RETURNS trigger AS
$BODY$
BEGIN


 IF TG_OP IN ( 'UPDATE', 'DELETE' ) THEN

  UPDATE History.Managers AS M
 SET endts = now()
   WHERE M.manager_id = OLD.manager_id
 AND now() BETWEEN M.startts AND M.endts;

 end IF;


 IF TG_OP IN ( 'UPDATE', 'INSERT' ) THEN

 INSERT INTO History.Managers
  VALUES ( now()::timestamptz, 'INFINITY'::timestamptz,  
NEW.*);


  RETURN NEW;

 END IF;

 RETURN OLD;

END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE SECURITY DEFINER;


TG_TABLE_NAME will have the name of the table the trigger was fired  
on.  With that and using EXECUTE for your INSERT statements, you'll  
probably be set.
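A rough sketch of the UPDATE half generalized that way -- note that EXECUTE can't see OLD directly, so its values have to be spliced into the string, and manager_id here stands in for whatever the table's key column happens to be:

```sql
-- Inside the trigger function body; assumes every History.<table>
-- shares the startts/endts layout shown above.
EXECUTE 'UPDATE History.' || quote_ident(TG_TABLE_NAME)
     || ' SET endts = now()'
     || ' WHERE manager_id = ' || quote_literal(OLD.manager_id::text)
     || ' AND now() BETWEEN startts AND endts';
```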




Re: [SQL] Advice for generalizing trigger functions

2007-12-27 Thread Erik Jones

On Dec 27, 2007, at 12:03 PM, Richard Broersma Jr wrote:


--- On Thu, 12/27/07, Erik Jones <[EMAIL PROTECTED]> wrote:


TG_TABLE_NAME will have the name of the table the trigger
was fired  on.  With that and using EXECUTE for your INSERT
statements, you'll  probably be set.


True the table name is the easy part, but how do I determine the  
Primary Key components of the table that will also be needed in the  
SQL string that will be executed?


That depends on how generalized you want this trigger function to  
be.  If you have a set number of tables you can branch on the table  
name to determine the id column.  That's brittle and would require  
updating the function every time you want to use it for a new  
table type, but if there's a limited number of tables it may work for  
you.  Another way to go would be to use the table name to join across  
pg_class, pg_attribute, and pg_constraint.  Take a look at the table  
layouts for those in the manual and it should be pretty clear how to  
do that.
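A sketch of that catalog lookup -- given a table name, this returns its primary key column(s):

```sql
SELECT a.attname
FROM pg_constraint c
JOIN pg_class t ON t.oid = c.conrelid
JOIN pg_attribute a ON a.attrelid = t.oid
                   AND a.attnum = ANY (c.conkey)
WHERE c.contype = 'p'          -- 'p' = primary key constraint
  AND t.relname = 'managers';  -- hypothetical table name
```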




Re: [SQL] temp table existence

2007-12-29 Thread Erik Jones


On Dec 29, 2007, at 3:16 PM, Marcin Krawczyk wrote:

I just realized something... my bad. It will work, since TRUNCATE
only affects the table in the current session.


If the table exists and you're going to TRUNCATE it before using it,  
you could just use DROP TABLE IF EXISTS and then create it.  I don't  
know, that feels cleaner to me than TRUNCATEing a table that might  
not be available to the session.
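i.e., something like this, using the DROP ... IF EXISTS form available since 8.2 (the table name and columns are made up):

```sql
DROP TABLE IF EXISTS scratch;
CREATE TEMP TABLE scratch (
    id integer,
    payload text
);
```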




Re: [SQL] reading WAL files in python

2008-01-07 Thread Erik Jones


On Jan 7, 2008, at 7:19 AM, Gerardo Herzig wrote:

Hi all. I'm having some fun trying to write my own replication
system using Python. I will use Postgres's own WAL archiving to
write the files, then my app will read them and do some stuff. As
I'm not a C programmer, I'm stuck on the task of reading the
binary files.

I guess I need to know the internals of how WAL archives are laid out,
and how big the buffer has to be in order to have a complete SQL command.


Can someone point some advice?
I was trying to understad src/backend/access/transam/xlog.c, but  
seems too difficult to me :(


The biggest problem with what you're wanting to do here is that the
data written to the WAL archives isn't going to be at all easily
translatable into SQL statements.  AFAIK, the WAL data records on-disk
file changes, i.e. it's strictly a binary-mode scenario.  As
someone has already suggested, if you want to learn more about  
Postgres and Python, look at Skytools.  I'm not just saying to use  
it, read the code and, if you like, offer help with patches.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






Re: [SQL] Get the max(value1, value2, value3) from a table

2008-01-07 Thread Erik Jones


On Jan 7, 2008, at 4:43 PM, Scott Marlowe wrote:


On Jan 7, 2008 4:38 PM, Josh Williams <[EMAIL PROTECTED]> wrote:

On Mon, 2008-01-07 at 17:03 -0500, Emi Lu wrote:

select ?max?(col1, col2, col3) as result;
will return

result
---
5
8
12

(3 rows)


8.1 (I believe?) introduced GREATEST(), which does precisely what  
you're

looking for.


How would greatest give him three rows like that?  Maybe I'm
misunderstanding what the OP was asking for...


His test data was:

col1  col2  col3
--
1  5   2
8  1   3
121  1

So, SELECT GREATEST(col1, col2, col3);

returning

greatest
--------
5
8
12

is giving the max of the three columns for each row.


Re: [SQL] How to keep at-most N rows per group? periodic DELETEs or constraints or..?

2008-01-08 Thread Erik Jones


On Jan 8, 2008, at 8:24 AM, Jamie Tufnell wrote:


Hi,

I have a table that stores per-user histories of recently viewed items
and I'd like to limit the amount of history items to <= 50 per user.
I'm considering doing this with a query run from cron every so often
but I'm not happy with what I've come up with so far, and since it's a
quite active table I thought I'd ask here to see if there's a more
efficient way.

Right now the table structure is as follows...

user_item_history: id (PK), user_id (FK), item_id (FK), timestamp

For user_ids that have more than 50 rows, I want to keep the most
recent 50 and delete the rest.


Create a row trigger that fires after insert containing something
along the lines of:


DELETE FROM user_item_history
WHERE id IN (SELECT id
   FROM user_item_history
   WHERE user_id=NEW.user_id
   ORDER BY timestamp DESC
   OFFSET 50);
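Wired up as an actual trigger, that DELETE might look like the following sketch (assuming the table really is named user_item_history with the timestamp column shown above):

```sql
CREATE OR REPLACE FUNCTION trim_user_history() RETURNS trigger AS $$
BEGIN
    -- Delete everything past the 50 most recent rows for this user.
    DELETE FROM user_item_history
     WHERE id IN (SELECT id
                    FROM user_item_history
                   WHERE user_id = NEW.user_id
                   ORDER BY "timestamp" DESC
                   OFFSET 50);
    RETURN NULL;  -- return value is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trim_user_history_trg
    AFTER INSERT ON user_item_history
    FOR EACH ROW EXECUTE PROCEDURE trim_user_history();
```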



Re: [SQL] How to keep at-most N rows per group? periodic DELETEs or constraints or..?

2008-01-08 Thread Erik Jones

On Jan 8, 2008, at 11:41 AM, Jamie Tufnell wrote:


On 1/8/08, codeWarrior <[EMAIL PROTECTED]> wrote:

Jamie:

I think you are probably having slowdown issues in your "DELETE FROM
WHERE NOT IN SELECT ORDER BY DESCENDING" construct -- that seems a bit
convoluted to me


Hmm so rather than NOT IN ( .. LIMIT 50)  would you suggest IN ( ...
OFFSET 50) like in Erik's example?  Or something else entirely?


Well, that would give you some gain.  Think about it like this:  once  
a given user's history records are at 50 and you insert a row, if you  
use the NOT IN clause you're comparing each of 51 rows to each of the  
50 you want to keep to find the one that can go, while with the IN  
version you're comparing each of the 51 rows to the 1 that can go.  Now  
how much of a gain that will be I can't say, YMMV.  I don't remember  
you saying anything about it so I'll also go ahead and point out that  
you most likely will want an index on user_id if you don't have one already.


ALSO: It looks to me like you have a column named "timestamp' ???  
This is
bad practice since "timestamp" is a reserved word... You really  
ought NOT to

use reserved words for column names... different debate.


I do realize it would be better to use something else and thanks for
the tip :-)  This is an established database and "timestamp" has been
used in other tables which is why I stuck to it here.. one day when
time permits maybe I'll rename them all!

Why bother deleting records anyway?  Why not alter your query that
tracks the 50 records to LIMIT 50 ???


The read query does LIMIT 50 and the reason for deleting the rest of
the records is because they're not needed by the application and
there's loads of them being created all the time (currently several
million unnecessary rows) -- I imagine eventually this will slow
things down?

Do you think a regular batch process to delete rows might be more
appropriate than a trigger in this scenario?


That depends on your usage pattern.  Assuming you aren't running user  
history report queries constantly that's probably what I'd do.  Also,  
if you're sure you won't need anything but the last 50 records per  
user, I'd definitely agree with cleaning out data that's not needed.
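
Concretely, the IN ( ... OFFSET 50 ) form plus the user_id index might look something like this -- a sketch only, with the table name (user_history), key (id), and user_id = 123 assumed rather than taken from the thread:

```sql
-- Keep only the 50 newest history rows for one user; OFFSET 50 selects
-- everything past the 50th-newest, which is exactly what can be deleted.
DELETE FROM user_history
WHERE id IN (
    SELECT id
    FROM user_history
    WHERE user_id = 123
    ORDER BY "timestamp" DESC
    OFFSET 50
);

-- The index that makes both the LIMIT 50 read and the trim cheap
-- (DESC index ordering requires 8.3+):
CREATE INDEX user_history_user_id_ts_idx
    ON user_history (user_id, "timestamp" DESC);
```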


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] Re: How to keep at-most N rows per group? periodic DELETEs or constraints or..?

2008-01-09 Thread Erik Jones


On Jan 9, 2008, at 1:09 PM, Scott Marlowe wrote:


On Jan 9, 2008 12:20 PM, Steve Midgley <[EMAIL PROTECTED]> wrote:

This is kludgy but you would have some kind of random number test at
the start of the trigger - if it evals true once per every ten  
calls to
the trigger (say), you'd cut your delete statements execs by about  
10x
and still periodically truncate every set of user rows fairly  
often. On

average you'd have ~55 rows per user, never less than 50 and a few
outliers with 60 or 70 rows before they get trimmed back down to 50..
Seems more reliable than a cron job, and solves your problem of an  
ever

growing table? You could adjust the random number test easily if you
change your mind of the balance of size of table vs. # of delete
statements down the road.


And, if you always throw a limit 50 on the end of queries that
retrieve data, you could let it grow quite a bit more than 60 or 70...
Say 200.  Then you could have it so that the random chopper function
only gets kicked off every 100th or so time.


I like that idea.
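
The random-chopper idea above could be sketched as a trigger along these lines -- the 1-in-100 probability and all names here are illustrative, not from the thread:

```sql
CREATE OR REPLACE FUNCTION trim_history_sometimes() RETURNS trigger AS $$
BEGIN
    -- Only run the (relatively expensive) trim on about 1 in 100 inserts;
    -- rows accumulate a bit past 50 between trims, which is fine as long
    -- as reads always use LIMIT 50.
    IF random() < 0.01 THEN
        DELETE FROM user_history
        WHERE user_id = NEW.user_id
          AND id IN (SELECT id FROM user_history
                     WHERE user_id = NEW.user_id
                     ORDER BY "timestamp" DESC
                     OFFSET 50);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER user_history_trim
    AFTER INSERT ON user_history
    FOR EACH ROW EXECUTE PROCEDURE trim_history_sometimes();
```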

Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






Re: [SQL] trigger for TRUNCATE?

2008-01-11 Thread Erik Jones


On Jan 11, 2008, at 2:24 AM, Richard Huxton wrote:


Tom Lane wrote:

Alvaro Herrera <[EMAIL PROTECTED]> writes:
My thinking is that a TRUNCATE trigger is a per-statement trigger  
which
doesn't have access to the set of deleted rows (Replicator uses  
it that
way -- we replicate the truncate action, and replay it on the  
replica).

In that way it would be different from a per-statement trigger for
DELETE.
Ah, right.  I was thinking in terms of having TRUNCATE actually  
fire the
existing ON DELETE-type triggers, but that's not really helpful  
--- you'd
need a separate trigger-event type.  So we could just say by fiat  
that
an ON TRUNCATE trigger doesn't get any rowset information, even  
after we

add that for the other types of statement-level triggers.


I've always considered TRUNCATE to be DDL rather than DML. I  
mentally group it with DROP TABLE rather than DELETE.


Not that DDL statement triggers wouldn't be just as useful for  
replication.
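
For what it's worth, statement-level TRUNCATE triggers were eventually added in PostgreSQL 8.4, with no rowset information, exactly as discussed above. A minimal example, with the table and function names invented here:

```sql
CREATE OR REPLACE FUNCTION log_truncate() RETURNS trigger AS $$
BEGIN
    RAISE NOTICE 'table % truncated', TG_TABLE_NAME;
    RETURN NULL;  -- return value is ignored for statement-level triggers
END;
$$ LANGUAGE plpgsql;

-- 8.4+ only; FOR EACH ROW is not allowed with TRUNCATE triggers.
CREATE TRIGGER some_table_truncate
    BEFORE TRUNCATE ON some_table
    FOR EACH STATEMENT EXECUTE PROCEDURE log_truncate();
```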


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






Re: [SQL] Create on insert a unique random number

2008-03-18 Thread Erik Jones


On Mar 18, 2008, at 1:40 PM, Campbell, Lance wrote:


Thanks for all of your input.  It appears that the best way to do this
is to create a default random number in the primary id field in the
table definition and then return that value after insert.  If an
exception occurs because of duplicates I will simple perform the same
insert statement again. I doubt there would be many duplicate hits  
if I

use a really large number.

Why use a random number as a primary key?  Security via obscurity.

I build web applications for a living.  In most of my applications  
it is

preferable to use a random primary key.  Why?

Example:

I built a web application called the Form Builder.  It allows
individuals to create web forms.  After a user is done building their
web form the tool provides a URL for the user to access the form.
Obviously the URL has the random ID of the form in it.  Most of the
forms created with this tool can be accessed and filled out by the
general public.

So why not use a sequential number?  So if I used a sequential number
then a student or outside individual could easily change the number in
the URL to see what other forms there are.  It is not that they don't
have access to the forms but they should not be messing with them if
they really don't have a reason to.  So by using a very large random
number the users filling out a form cannot easily guess what another
form ID is.

Does that make sense?  I have at least a dozen tools that I prefer to
use this approach on.

This is not a security approach.  It is more about not giving obvious
access to people that want to mess around.


I'd say it makes total sense and we do much the same thing with  
unsubscribe links in the emails we push here.  However, we keep the  
primary key based on sequences and for sent messages generate a random  
id using md5(now()::text).  In a sense, then, we have "public" and  
"private" keys.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com




--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Select into

2008-03-20 Thread Erik Jones


On Mar 20, 2008, at 7:10 AM, Joe wrote:


Gavin 'Beau' Baumanis wrote:


The copy is inside the same table, so I don't understand why it  
(the required query ) would require any joins.


Ie. I want to copy the contents of a row (but for the id column -  
of course) into a record in the same table.


I think what you want is something like this:

Given (col1 being the id or PK):

col1 | col2 | col3
--+--+---
  1 |  123 | first record
  2 |  456 | second record
  3 |  789 | third record

then

update t1  set col2 = t1copy.col2, col3 = t1copy.col3
from t1 as t1copy
where t1.col1 = 1 and t1copy.col1 = 3;

will result in:

col1 | col2 | col3
--+--+---
  1 |  789 | third record
  2 |  456 | second record
  3 |  789 | third record

So, it is a join ... of a table with a virtual copy of itself.


Note that in 8.2.x and above you can write that as:

update t1
set (col2, col3) = (t1copy.col2, t1copy.col3)
from t1 as t1copy
where t1.col1 = 1 and t1copy.col1 = 3;

Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






[SQL] Dynamic sql and variable record types

2008-03-20 Thread Erik Jones
Hi, I've been working on a generic date partitioning system and I  
think I've reached something that can't be done, but I thought I'd  
post a question to the masses in the hope that I'm missing something.   
The basic idea of what I'm doing is some userland scripts that will  
accept a table name argument along with the name of a date/timestamp  
attribute to partition on and create partitions for that table along  
with the appropriate trigger and trigger function.  The part I'm  
having trouble with is the trigger function.


What I've done for that is to create a template file that my scripts  
read in and substitute the table column names wherever necessary, then  
run the results through the db to create the functions.  The problem  
is that for the function to be generic it needs to be able to work  
with different record/row types.  Here's the template for function  
(not working, which I'll discuss below):


CREATE OR REPLACE FUNCTION %s_ins_func(op text, rec %s)
RETURNS boolean AS $$
DECLARE
partition varchar;
name_parts varchar[];
upper_dim integer;
BEGIN
FOR partition IN
SELECT relname
FROM pg_class
WHERE relname ~ ('^%s_[0-9]{8}_[0-9]{8}$')
LOOP
name_parts := string_to_array(partition, '_');
upper_dim := array_upper(name_parts, 1);
        IF rec.%s >= name_parts[upper_dim-1]::timestamp AND rec.%s <  
name_parts[upper_dim]::timestamp THEN

IF op = 'INSERT' THEN
EXECUTE 'INSERT INTO %s_' || name_parts[upper_dim-1]  
|| '_' ||
name_parts[upper_dim] || ' VALUES ' ||  
rec || ';'; -- the problem is here with rec

RETURN TRUE;
END IF;
END IF;
END LOOP;
RETURN FALSE;
END;
$$ language plpgsql;

The userland scripts substitute the table and column names for the %s  
escapes where appropriate.  What the function actually does is use  
the parent table's name to find all of the child partitions, which are  
named like some_table_20080101_20080201, and split out the dates from  
those to determine which table the insert needs to be redirected to.  That  
works fine.  The problem is that since I have to dynamically generate  
the destination table name I have to use EXECUTE for the INSERT  
statement.  But, I can't see how to use a record in a query passed to  
EXECUTE.  Am I right in thinking (now) that this can't be done?


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






SOLVED - Re: [SQL] Dynamic sql and variable record types

2008-03-21 Thread Erik Jones


On Mar 20, 2008, at 5:28 PM, Erik Jones wrote:

Hi, I've been working on a generic date partitioning system and I  
think I've reached something that can't be done, but I thought I'd  
post a question to the masses in the hope that I'm missing  
something.  The basic idea of what I'm doing is some userland  
scripts that will accept a table name argument along with the name  
of a date/timestamp attribute to partition on and create partitions  
for that table along with the appropriate trigger and trigger  
function.  The part I'm having trouble with is the trigger function.


What I've done for that is to create a template file that my scripts  
read in and substitute the table column names wherever necessary,  
then run the results through the db to create the functions.  The  
problem is that for the function to be generic it needs to be able  
to work with different record/row types.  Here's the template for  
function (not working, which I'll discuss below):


CREATE OR REPLACE FUNCTION %s_ins_func(op text, rec %s)
   RETURNS boolean AS $$
DECLARE
   partition varchar;
   name_parts varchar[];
   upper_dim integer;
BEGIN
   FOR partition IN
   SELECT relname
   FROM pg_class
   WHERE relname ~ ('^%s_[0-9]{8}_[0-9]{8}$')
   LOOP
   name_parts := string_to_array(partition, '_');
   upper_dim := array_upper(name_parts, 1);
   IF rec.%s >= name_parts[upper_dim-1]::timestamp AND rec.%s <  
name_parts[upper_dim]::timestamp THEN

   IF op = 'INSERT' THEN
   EXECUTE 'INSERT INTO %s_' || name_parts[upper_dim-1]  
|| '_' ||
   name_parts[upper_dim] || ' VALUES ' ||  
rec || ';'; -- the problem is here with rec

   RETURN TRUE;
   END IF;
   END IF;
   END LOOP;
   RETURN FALSE;
END;
$$ language plpgsql;

The userland scripts substitute the table and column names for the  
%s escapes where appropriate.  What the function actually does is  
use the parent table's name to find all of the child partitions,  
which are named like some_table_20080101_20080201, and split out the  
dates from those to determine which table the insert needs to be  
redirected to.  That works fine.  The problem is that since I have  
to dynamically generate the destination table name I have to use  
EXECUTE for the INSERT statement.  But, I can't see how to use a  
record in a query passed to EXECUTE.  Am I right in thinking (now)  
that this can't be done?


I solved this by doing a lookup of the table's attributes and putting  
them directly into the function during the templating step.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






Re: [SQL] Finding all References to a Primary Key

2008-03-27 Thread Erik Jones


On Mar 26, 2008, at 1:19 PM, Matthew T. O'Connor wrote:
I'm sorry if this is has been discussed, but I tried to find the  
answer in the archives and failed, so...


How do I find all the rows in other tables that reference a specific  
row in another table?  I'm only trying to find rows that are in  
tables where there is a Foreign Key referencing the primary key of  
the table in question.


Example:
Table People has a primary key of people_id
There are say 20 tables that have foreign keys referencing  
people.people_id
How do I find all the rows in all of those 20 tables that reference  
a particular person in the people table?


pg_catalog.pg_constraint has that info.
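
For example, something along these lines lists every foreign key pointing at people (assuming the table is on the search path; conrelid is the referencing table, confrelid the referenced one):

```sql
SELECT conrelid::regclass        AS referencing_table,
       conname                   AS constraint_name,
       pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'people'::regclass;
```

From that list you can generate the per-table SELECTs for a particular people_id value.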

Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






Re: [SQL] advocacy: case studies

2008-04-07 Thread Erik Jones

That's really cool.

On Apr 7, 2008, at 3:33 PM, Ivan Sergio Borgonovo wrote:

Can it be one of those things that end up in the "case studies"
http://www.postgresql.org/about/casestudies/

http://www.theregister.co.uk/2008/04/03/xtremedata_fpga_xeon_data_warehouse/

"The XtremeDB is built from PostgreSQL – a full-featured, open-source
RDBMS, that has been re-engineered by XDI," the company says in a
newsletter. "All of the front-end PostgreSQL interfaces have been
maintained intact and the back-end execution engine has been
transformed to leverage the shared-nothing parallel cluster
environment with FPGA acceleration."

?

--
Ivan Sergio Borgonovo
http://www.webthatworks.it




Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






[SQL] Testing for null record in plpgsql

2008-04-10 Thread Erik Jones

Ok, let's say I have the following:

CREATE TABLE foo (
val1 integer,
val2 integer
);

CREATE OR REPLACE FUNCTION returns_null_maybe()
RETURNS foo AS $$
DECLARE
res integer;
BEGIN
SELECT INTO res extract('month' from now())::integer % 2;

IF res = 0 THEN
RETURN NULL;
ELSE
RETURN (5,5)::foo;
END IF;
END;
$$ LANGUAGE plpgsql;

Now, let's say I want to call this from another function and test the  
result to see if I have a completely null record, (null, null).  I've  
got the following working but it feels like there should be something  
much simpler that I just can't seem to hit on.  Is this it?


CREATE FUNCTION test_null_rec()
RETURNS boolean AS $$
DECLARE
res boolean;
null_rec foo;
BEGIN
SELECT INTO res * FROM returns_null_maybe();

IF row(res.*) IS DISTINCT FROM row(null_rec.*) THEN
RETURN TRUE;
ELSE
RETURN FALSE;
END IF;
END;
$$ LANGUAGE plpgsql;
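
One simpler form that may be what this is reaching for: a row value satisfies IS NULL exactly when every one of its fields is null, so the all-null record can be caught directly (a sketch, reusing foo and returns_null_maybe() from above):

```sql
CREATE OR REPLACE FUNCTION test_null_rec()
RETURNS boolean AS $$
DECLARE
    res foo;
BEGIN
    SELECT INTO res * FROM returns_null_maybe();
    -- res IS NULL is true only for the all-null record, so this returns
    -- true for anything else, matching the IS DISTINCT FROM version.
    RETURN NOT (res IS NULL);
END;
$$ LANGUAGE plpgsql;
```

Note the NOT (res IS NULL) form rather than res IS NOT NULL: for a partially-null row both IS NULL and IS NOT NULL are false, so only the former is the right test here.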

Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






Re: [SQL] Testing for null record in plpgsql

2008-04-11 Thread Erik Jones

Pavel & Craig, I'm replying to both of you to save some time :)

All I was really trying to demonstrate was the need to be able to  
distinguish a completely null record from one that isn't.  I can see  
by both of your responses that by incorporating that in a dummy  
example I inadvertently added misleading context.  In my actual code,  
nothing returns boolean, I just used that to show the crux of the  
specific syntax/semantics issue I was having.  What I've actually got  
are foo_ins_func() and foo_ins_trig() where foo is a table with both  
functions being generated dynamically by userland scripts.   
foo_ins_trig() is a BEFORE INSERT trigger function that calls  
foo_ins_func() (this layering allows me to use foo_ins_func() both  
directly and in the trigger) so, Pavel: I can't just return the record  
that gets returned from foo_ins_func as if it's completely null that  
causes an error -- trigger functions need to return NULL not (NULL,  
NULL), and Craig: obviously I can't return a simple boolean from a  
trigger function.


Here are the dynamically generated functions I've been using for  
testing along with the table def:


CREATE TABLE foo (
id serial primary key,
val integer,
val_ts timestamp without time zone not null
);

CREATE OR REPLACE FUNCTION foo_ins_func(rec foo)
RETURNS foo AS $$
DECLARE
 partition varchar;
 name_parts varchar[];
 upper_dim integer;
 ins_sql varchar;
 BEGIN
 FOR partition IN
 SELECT relname
 FROM pg_class
 WHERE relname ~ ('^foo_[0-9]{8}_[0-9]{8}$')
 LOOP
 name_parts := string_to_array(partition, '_');
 upper_dim := array_upper(name_parts, 1);
 IF rec.val_ts >= name_parts[upper_dim-1]::timestamp
 AND rec.val_ts < name_parts[upper_dim]::timestamp THEN
 ins_sql := 'INSERT INTO foo_' || name_parts[upper_dim-1]  
|| '_' ||
 name_parts[upper_dim] || ' (id,val,val_ts)  
VALUES (' || quote_nullable(rec.id) || ',' || quote_nullable(rec.val)  
|| ',' || quote_nullable(rec.val_ts) || ');';

 EXECUTE ins_sql;
 RETURN NULL;
 END IF;
 END LOOP;
 RAISE WARNING 'No partition created for foo to hold timestamp  
value %, leaving data in parent table.', rec.val_ts;

 RETURN rec;
 END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION foo_ins_trig()
RETURNS trigger AS $$
DECLARE
 res foo;
 null_rec foo;
 BEGIN
 SELECT INTO res * FROM foo_ins_func(NEW) as g;
 IF row(res.*) IS DISTINCT FROM row(null_rec.*) THEN
 RETURN NEW;
 END IF;
 RETURN NULL;
 END;
$$ LANGUAGE plpgsql;

Fwiw, this is a piece of some range based table partitioning  
automation scripts I'm working on.  Once I've got it rounded out  
(right now it only supports timestamps but I doubt swapping in integer  
or dates will be difficult) and a test suite written I'll probably  
throw it up on github since people often ask how to do this kind of  
thing.



On Apr 11, 2008, at 2:40 AM, Craig Ringer wrote:

Erik Jones wrote:
Now, let's say I want to call this from another function and test  
the result to see if I have a null record (null, null),.  I've got  
the following working but it feels like there should be something  
much simpler but I just can't seem to hit on it.  Is this it?
I'm assuming that returns_null_maybe() is  a dummy to show general  
behavior. I can't imagine why you'd ever want to do what it's doing.


In general I'm suspicious of code that's testing for a real, known  
value and returning NULL in its place. It seems like an odd thing to  
do. Still, I'm sure you have your reasons and they probably make  
sense in the real application rather than the simplified example.


You can tidy test_null_rec a little by just using:

RETURN row(res.*) IS DISTINCT FROM row(null_rec.*);

but otherwise, without incorporating it into the containing query as  
a subquery I don't see much to be done. I'm still curious about the  
purpose of using null values like this is, though.


--
Craig Ringer



Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






Re: [SQL] Problem with FOREIGN KEY

2008-04-22 Thread Erik Jones


On Apr 22, 2008, at 10:26 AM, Вадим Жерновой |Vadim  
Zhernovoi| wrote:



I have a Problem with Foreign key query...i've posted the code on
http://pgsl.privatepaste.com/e44p3Orqiq, so it have syntax higlight.
Please, need help!


What that's telling you is that there is data in your Users table that  
doesn't satisfy the foreign key constraint, i.e. there is a Users row  
with SecurityRoleId=0 and no row in SecurityRole with ID=0.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






Re: [SQL] First day of month, last day of month

2008-04-24 Thread Erik Jones


On Apr 24, 2008, at 9:15 AM, Colin Wetherbee wrote:


Frank Bax wrote:

Frank Bax wrote:

Nacef LABIDI wrote:
is there a better method to retrieve all the rows with dates in  
the current month.


select * from mytable where extract(month from mydate) =  
extract(month from now()) and extract(year from mydate) =  
extract(year from now());

Sorry; I was not thinking clearly - date_trunc is better for this:
select * from mytable where date_trunc('month',mydate) =  
date_trunc('month',now());


I have some code that uses extract() for this sort of thing.  Would  
you mind explaining how date_trunc() is better for this?


Most of my extract() results end up in  drop-down boxes in  
HTML.



extract will pull specific date unit value out of a given date/ 
timestamp/interval.  date_trunc will "round" a given date/timestamp  
down to the given unit.


extract(month from now()) -> 4

date_trunc('month', now()) -> 2008-04-01 00:00:00-05

I typically find date_trunc much more useful but I may just think that  
because I've been writing partitioning code a lot lately.
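
One wrinkle worth noting with both versions in this thread: wrapping mydate in extract() or date_trunc() defeats a plain index on mydate. A half-open range comparison gives the same current-month filter while staying index-friendly:

```sql
SELECT *
FROM mytable
WHERE mydate >= date_trunc('month', now())
  AND mydate <  date_trunc('month', now()) + interval '1 month';
```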


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






Re: [SQL] First day of month, last day of month

2008-04-24 Thread Erik Jones


On Apr 24, 2008, at 9:52 AM, Terry Lee Tucker wrote:


On Thursday 24 April 2008 10:47, Bart Degryse wrote:

> Well, that's what it does afaikt.



afaikt -> as far as I can tell.

Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






Re: [SQL] First day of month, last day of month

2008-04-24 Thread Erik Jones


On Apr 24, 2008, at 9:56 AM, Scott Marlowe wrote:

On Thu, Apr 24, 2008 at 8:52 AM, Terry Lee Tucker <[EMAIL PROTECTED] 
> wrote:

On Thursday 24 April 2008 10:47, Bart Degryse wrote:



Well, that's what it does afaikt.


And what does afaikt mean?


As Far As I Kan Tell?  ???

I'm used to AFAIR, As Far As I Rekall... :)


Or AFAICS, As Far As I Can See

Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






[SQL] Array from INSERT .. RETURNING in plpgsql?

2008-10-07 Thread Erik Jones

Ok, so the following works:

pagila=# select array(select s.i from generate_series(1, 10) s(i));
        ?column?
------------------------
 {1,2,3,4,5,6,7,8,9,10}
(1 row)

but this doesn't:

pagila=# create or replace function testfun() returns void as $$
declare
vals int[];
query text;
begin
query := 'insert into test select s.i from generate_series(1,10)  
s(i) returning i;';

execute query into vals;
raise notice 'vals dim: %', array_upper(vals, 1);
raise notice 'vals[3]: %', vals[3];
end;
$$ language plpgsql;
CREATE FUNCTION
Time: 3.319 ms
pagila=# select testfun();
ERROR:  array value must start with "{" or dimension information
CONTEXT:  PL/pgSQL function "testfun" line 6 at execute statement

Is there any way to do what I'm trying without explicitly looping over  
the results of the insert?
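
Not with EXECUTE ... INTO as written, but for the record: once writable CTEs arrived in PostgreSQL 9.1, the RETURNING set can be aggregated directly with no explicit loop. A sketch, using the fixed table name from the example above:

```sql
CREATE OR REPLACE FUNCTION testfun() RETURNS void AS $$
DECLARE
    vals int[];
BEGIN
    -- 9.1+: the INSERT's RETURNING rows are visible to the outer SELECT.
    WITH ins AS (
        INSERT INTO test
        SELECT s.i FROM generate_series(1, 10) s(i)
        RETURNING i
    )
    SELECT array_agg(i) INTO vals FROM ins;

    RAISE NOTICE 'vals dim: %', array_upper(vals, 1);
    RAISE NOTICE 'vals[3]: %', vals[3];
END;
$$ LANGUAGE plpgsql;
```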


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
(415) 963-4410 x 260
Location: US/Pacific
IRC: mage2k







Re: [SQL] Way to eliminate pg_dump activity from pg_stat_all ?

2008-12-16 Thread Erik Jones
I doubt it.  From the server's perspective, pg_dump is just a client  
executing queries.  If the db is never used, why are you continually  
backing it up?


On Dec 16, 2008, at 12:55 PM, Bryce Nesbitt wrote:

I've got a bunch of tables in a legacy database that I know are  
never used, and some more I'm not sure about.  So I tried to  
identify and confirm with:

select pg_stat_reset();

-- Wait a long time

select * from pg_stat_all_tables where schemaname='public' order by  
seq_scan,seq_tup_read;
select greatest(heap_blks_read,heap_blks_hit),relname from  
pg_statio_all_tables where schemaname='public' order by 1 desc;
But I think I'm getting clutter from the nightly backups.  Is there  
a way to keep pg_dump activity out of the statistics? I can think of  
several reasons to want such activity excluded, not just this one.


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k








Re: [SQL] Object create date

2008-12-30 Thread Erik Jones


On Dec 29, 2008, at 12:30 PM, George Pavlov wrote:

1. not exactly what you were looking for, but i answer this  
partially by putting a commented-out CVS expansion tag (e.g. $Id:)  
in the body of the function so that it gets into the catalog and can  
be searched:


 CREATE OR REPLACE FUNCTION foo ()
 RETURNS void AS
 $BODY$
 -- $Id: foo.sql,v 1.6 2008/12/23 00:06:52 gpavlov Exp $
 BEGIN
 ...

and query it by something like this:

 select
   routine_name,
   substring(routine_definition from E'%#\042-- #\044Id: % Exp #\044#\042%' for '#') as cvs_id

 from information_schema.routines
 ;

2. you can also make some inference about the relative timing of  
object creation based on the OIDs (query pg_catalog.pg_proc rather  
than information_schema.routines for proc OIDs).


Hmm...  It seems to me that since object creation time, being  
metadata, would be better served being placed in a COMMENT for the  
object.  That would have the added bonus of being able to search in  
one place (pg_description) across all objects of all types for a given  
creation/modification date.
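
A sketch of that idea -- the date format and the function name are illustrative only:

```sql
-- Record creation metadata where it belongs: in a COMMENT.
COMMENT ON FUNCTION foo() IS 'created 2008-12-23 by gpavlov';

-- Later, search comments across objects of every type in one place;
-- classoid tells you which catalog (pg_proc, pg_class, ...) owns the object.
SELECT d.classoid::regclass AS catalog, d.objoid, d.description
FROM pg_description d
WHERE d.description LIKE 'created %';
```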


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k








Re: [SQL] some howto/theory book/tutorial on practical problem solving in SQL

2009-01-13 Thread Erik Jones


On Jan 11, 2009, at 8:32 AM, Ivan Sergio Borgonovo wrote:


I'm looking to some book/tutorial/sample code that will teach me how
to use SQL to solve some standard problem that goes a bit beyond
using group by and aggregates.

Something like "SQL problem solving" or "SQL design strategies for
selected problems".


O'Reilly's SQL Hacks is a good one that fits the bill you describe.

Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k








Re: [SQL] Derived columns / denormalization

2009-01-16 Thread Erik Jones


On Jan 15, 2009, at 8:06 PM, Tom Lane wrote:


"Jamie Tufnell"  writes:

item_count int -- this is derived from (select count(*) from items
where group_id = id)
...



item_count would be updated by insert/update/delete triggers on the
items table, hopefully that would ensure it is always correct?



I'm wondering is what I'm trying to do here pretty standard and are
there any gotchas I should be aware of?


Concurrent updates to the items table make this much harder than
it might first appear.  If you're willing to serialize all your  
updating

transactions then you can make it work, but ...


That was exactly the caveat I was about to point out.  That being  
said, keeping COUNT() values and other computed statistics based on  
other data in the database *is* a fairly common tactic.  One method  
that I've used to great success to avoid the serialization problem is  
to have your triggers insert the necessary information for the update  
into a separate "update queue" table.  You then have a separate  
process that routinely sweeps that update queue, aggregates the  
updates, and then updates your count values in the groups table with  
the total update values for each groups entry with updates.
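
A minimal sketch of that queue pattern -- all table and column names here are illustrative:

```sql
-- Triggers only append deltas, so no row in groups is locked at insert time.
CREATE TABLE item_count_queue (
    group_id integer NOT NULL,
    delta    integer NOT NULL   -- +1 on insert, -1 on delete
);

CREATE OR REPLACE FUNCTION queue_item_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO item_count_queue VALUES (NEW.group_id, 1);
        RETURN NEW;
    ELSE
        INSERT INTO item_count_queue VALUES (OLD.group_id, -1);
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_count_queue
    AFTER INSERT OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE queue_item_count();

-- The periodic sweeper, run from cron or similar:
-- UPDATE groups g SET item_count = item_count + q.total
-- FROM (SELECT group_id, sum(delta) AS total
--       FROM item_count_queue GROUP BY group_id) q
-- WHERE g.id = q.group_id;
```

One caveat: the sweeper should delete only the queue rows it actually summed (rows can arrive between the aggregate and the cleanup), e.g. by first moving them aside or capturing their ids.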


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k








Re: [SQL] Way to eliminate pg_dump activity from pg_stat_all ?

2009-01-20 Thread Erik Jones


On Jan 20, 2009, at 11:27 AM, Bryce Nesbitt wrote:

It is not the db that's never used.  Various tables in the db are  
never used. This is a database that's been continually added to and  
modified for ten years -- sometimes cruft accumulates.


Understood.

It would be nice to get pg_dump activity out of the stats for  
another reason --- pg_dump adds to the sequential scan activity in  
a way that does not represent the "typical" use of the database.   
Essentially, pg_dump is an atypical user of the database, so its  
stats are "different" from the rest of the activity.


It sounds like the proper wording for a feature request here would be  
something like "Disable stats collection on a per-session basis".



Erik Jones wrote:
I doubt it.  From the server's perspective, pg_dump is just a  
client executing queries.  If the db is never used, why are you  
continually backing it up?


On Dec 16, 2008, at 12:55 PM, Bryce Nesbitt wrote:

But I think I'm getting clutter from the nightly backups.  Is  
there a way to keep pg_dump activity out of the statistics? I can  
think of several reasons to want such activity excluded, not just  
this one.


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k






--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] [GENERAL] pg_restore error - Any Idea?

2009-03-22 Thread Erik Jones


On Mar 22, 2009, at 10:44 PM, DM wrote:


Hi All,

I am facing an error on executing the below command

dump name: pg_dump_FcZ0.pnps_200903041201_1.2.1.0_base_testing
database name: pnqd_test

$pg_restore -U postgres -p 5433 -d pnqd_test  
pg_dump_FcZ0.pnps_200903041201_1.2.1.0_base_testing


pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3715; 0 0 ACL  
monitor postgres

WARNING: errors ignored on restore: 1

I am not able to figure out this issue. Any idea guys.



TOC -> Table of Contents

A dump made with pg_dump's -Fc will contain a table of contents of all  
of the database objects in the dump file.  Something in that is  
causing an error for pg_restore.  Does the version of pg_restore match  
up with the version of pg_dump that you used to make the dump?
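If the versions do match, you can also inspect the failing TOC entry directly; pg_restore's -l option prints the dump's table of contents without touching any database (the dump filename below is the one from the original post):

```shell
# List the dump's table of contents and look for entry 3715, the
# "ACL monitor" entry named in the error message.
pg_restore -l pg_dump_FcZ0.pnps_200903041201_1.2.1.0_base_testing | grep 3715

# If that entry is the only problem (e.g. a GRANT for a role that
# doesn't exist on the target), you can edit the listing and replay
# it with -L to skip the offending entry on restore.
```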


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k






--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] bash & postgres

2009-03-22 Thread Erik Jones

On Mar 22, 2009, at 9:03 PM, Greenhorn wrote:


Hi,

I'm trying to pass variables on a bash script embedded with psql  
commands.


cat header.txt

"to1","from1","subject1"
"to2","from2","subject2"
"to3","from3","subject3"
"to4","from4","subject4"

cat b.sh

#!/bin/bash
two="2"

psql -h localhost -U postgres -d mobile -c "create temp table header (

field_1   textnot null,
field_2   textnot null,
field_3   textnot null

);

\\copy header FROM header.txt CSV

SELECT * FROM header limit "$two"; "


When I execute b.sh

ERROR:  syntax error at or near "\"
LINE 10: \copy header FROM header.txt CSV
^

How do I use \c (or any other psql commands beginning with a "\") in a
bash script?


For multi-line input to a psql call in a bash (or any decent shell)  
script, I'd use a here document:


#!/bin/bash
two="2"

psql -d pagila <<EOF
create temp table header (
    field_1   text    not null,
    field_2   text    not null,
    field_3   text    not null
);

\copy header FROM header.txt CSV

SELECT * FROM header limit $two;
EOF

(P.S. The quotes around $two in your original are not needed; in fact  
they're straight up broken, as $two is already inside of a double-quoted  
string.)
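As a quick standalone illustration of that last point (no database needed): inside an unquoted here document the shell expands $variables exactly as it would inside a double-quoted string, so no extra quoting is required.

```shell
#!/bin/bash
two="2"

# $two expands inside the here document body; the extra quotes in the
# original script were unnecessary, and inside a double-quoted -c
# string they were actively broken.
cat <<EOF
SELECT * FROM header limit $two;
EOF
# prints: SELECT * FROM header limit 2;
```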


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k






--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [GENERAL] [SQL] bash & postgres

2009-03-23 Thread Erik Jones


On Mar 23, 2009, at 7:05 AM, Tom Lane wrote:


Erik Jones  writes:

On Mar 22, 2009, at 9:03 PM, Greenhorn wrote:
How do I use \c (or any other psql commands beginning with a "\")  
in a

bash script?



For multi-line input to a psql call in a bash (or any decent shell)
script, I'd use a here document:


Or echo/cat the script into psql's stdin, if you prefer that type of
notation.  The reason you have to do this is that psql doesn't  
recognize

backslash commands in a -c string.  There's a school of thought that
doesn't want us to allow multiple commands in a -c string, even.


Hmm... Apparently it does recognize them as long as the backslash is  
the first character in the command string:


$ psql -c '\d' postgres
No relations found.
$ psql -c ' \d' postgres
ERROR:  syntax error at or near "\"
LINE 1:  \d
 ^

Is that expected behavior?

Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k






--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] How to count from a second table in an aggregate query?

2009-04-15 Thread Erik Jones
in ('web', 'detail');

src_contact_id | log_type | contact_id | property_id
----------------+----------+------------+-------------
  1 | detail   |  1 |  20
  1 | detail   |  1 |  20
  1 | web  |  1 |  20
  1 | web  |  1 |  20
  1 | web  |  1 |  20
  1 | detail   |  1 |  21
  1 | detail   |  1 |  21
  1 | web  |  1 |  21
  1 | web  |  1 |  21
  1 | web  |  1 |  21
  1 | detail   |  1 |  22
  1 | detail   |  1 |  22
  1 | web  |  1 |  22
  1 | web  |  1 |  22
  1 | web  |  1 |  22
  2 | web  |  2 |  23
  2 | detail   |  2 |  23
  2 | detail   |  2 |  23
  2 | web  |  2 |  24
  2 | detail   |  2 |  24
  2 | detail   |  2 |  24
  1 | detail   |  1 |  50
  1 | detail   |  1 |  50
  1 | web  |  1 |  50
  1 | web  |  1 |  50
  1 | web  |  1 |  50

That is exactly what gets processed by the aggregates.  Aggregates  
process *a* relation, here the relation produced by the join.  Now,  
that query with the counts makes more sense:


select cl.src_contact_id, count(log_type), count(property_id)
from contact_log cl, contact_property cp
where cl.src_contact_id = cp.contact_id
and cl.log_type in ('web', 'detail')
group by cl.src_contact_id

 src_contact_id | count | count
----------------+-------+-------
  1 |20 |20
  2 | 6 | 6

Joining against a subquery for the second count does the trick:

select src_contact_id, count(log_type), cp.count
from contact_log ,
(select contact_id, count(property_id)
  from contact_property
  group by contact_id) as cp
where src_contact_id = cp.contact_id
and log_type in ('web', 'detail')
group by src_contact_id, cp.count
order by src_contact_id

 src_contact_id | count | count
----------------+-------+-------
  1 | 5 | 4
  2 | 3 | 2

Note that you have to add the count pulled from the subquery to the  
GROUP BY since, at that point, it's a constant and not an aggregate  
function anymore; otherwise you get an SQL error in the outer GROUP BY  
clause.


Here's the query without the aggregate in the outer query to help make  
that clear:


select src_contact_id, log_type, cp.count
from contact_log ,
(select contact_id, count(property_id)
 from contact_property
 group by contact_id) as cp
where src_contact_id = cp.contact_id
and log_type in ('web', 'detail')
order by src_contact_id;

src_contact_id | log_type | count
----------------+----------+-------
  1 | web  | 4
  1 | detail   | 4
  1 | web  | 4
  1 | web  | 4
  1 | detail   | 4
  2 | detail   | 2
  2 | detail   | 2
  2 | web  | 2

That says, "Give me each src_contact_id and log_type pair from  
contact_log along with the count from contact_property where  
contact_id = src_contact_id".


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k






--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] pg_stat_activity return all connections

2009-04-28 Thread Erik Jones


On Apr 28, 2009, at 8:43 AM, Emi Lu wrote:


Good morning,

A question about pg_stat_activity:

I believe that "select * from pg_stat_activity where  
usename='db_user1'" returns all connections for db_user1, no  
matter whether the connection is through a terminal command or JDBC; am I right?


Yes.
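For example, something like the following works (a sketch only; the column names shift between versions, e.g. procpid and current_query in the 8.x releases current at the time of this thread became pid and query in later releases):

```sql
SELECT procpid, usename, client_addr, current_query
  FROM pg_stat_activity
 WHERE usename = 'db_user1';
```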

Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k






--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] left join where not null vs. inner join

2009-05-22 Thread Erik Jones


On May 22, 2009, at 1:51 PM, Emi Lu wrote:

Two tables, each contains more than hundreds of thousands records.  
Is there any efficiency differences between (1) and (2)?


(1) T1 inner join T2 using (c1, c2)


(2) T1 left join T2 using (c1, c2) where c2 is not null


Yes, stick with the first.  In the second you're asking the db to  
generate a result set with tuples for every row in T1 and then filter  
it down to only the rows with matching T2 rows, whereas in the first  
it does the filtering as it goes.  The LEFT JOIN ... WHERE x IS NULL  
construct is typically used as an alternative to a NOT IN or NOT  
EXISTS ().  So, this:


SELECT *
FROM t1
WHERE id NOT IN (SELECT some_id
FROM T2);

becomes

SELECT t1.*
FROM t1 LEFT JOIN t2 ON (t1.id = t2.some_id)
WHERE t2.id IS NULL;

Basically, it's used in the opposite case of what you're asking about.
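For completeness, here is the NOT EXISTS spelling of that same anti-join, which is generally equivalent to the LEFT JOIN form and, unlike NOT IN, behaves sanely when t2.some_id can be NULL:

```sql
SELECT t1.*
  FROM t1
 WHERE NOT EXISTS (SELECT 1
                     FROM t2
                    WHERE t2.some_id = t1.id);
```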

Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k






--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql