Re: [GENERAL] Cheapest way to poll for notifications?

2009-12-18 Thread Craig Ringer

On 11/12/2009 11:39 PM, Craig Ringer wrote:


As for the JDBC driver - it turns out that the documentation is
out-of-date and/or misleading. The JDBC driver *does* support reading
notifications the backend has pushed to its receive buffer, and does
*not* have to poll the backend or issue a statement to receive
notifications.


Update: The above is true only for non-SSL connections. For SSL 
connections you still have to send dummy statements.
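
For reference, the server-side half of the exchange - the part that pushes a
notification into the client's receive buffer. Note that in 8.x NOTIFY takes
no payload argument; the channel name here is just an example:

  LISTEN my_channel;   -- in the listening session
  NOTIFY my_channel;   -- in any other session; wakes up every listener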


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

Yes I do, but this is the pl/perl function called by a batch job I run. Before 
the pl/perl function is called, I insert 2x200k records into 2 tables (200k per 
table).
First I thought that it might be a problem with the Perl function, but then I 
noticed that it started even earlier, with the simple inserts.
After the insert the job calls the function, and there I have the same 
issues: it runs slow in the morning and fast in the afternoon. It picks up 
speed after 5-10k records.
Thanks for your help

> Date: Sat, 19 Dec 2009 00:10:36 -0700
> Subject: Re: [GENERAL] PL/Perl Performance Problems
> From: scott.marl...@gmail.com
> To: ainto...@hotmail.com
> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> 
> According to your original post, you do selects in step 1 and 2...  Or
> is this a different job and I've lost the thread (happens to me plenty
> :) )
> 
> 1. Selects about 20 Records from Table A
>    - loops through the list and deletes in total about 50k records in Table B
> 2. For each record from Table A it then selects Records from Table C
>    - loops through these records, about 50K in total
>    - for each, runs a query across 3 Tables, 10-20M records
>    - inserts a record in Table B .. about 50K
> 3. Returns some stats on the whole operation (100 records).
> 
> On Sat, Dec 19, 2009 at 12:07 AM, Alex -  wrote:
> > On a 2nd thought... where does the cache come into play when I only do
> > inserts and no selects?
> > Alex
> >
> >> Date: Fri, 18 Dec 2009 23:45:07 -0700
> >> Subject: Re: [GENERAL] PL/Perl Performance Problems
> >> From: scott.marl...@gmail.com
> >> To: ainto...@hotmail.com
> >> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> >>
> >> On Fri, Dec 18, 2009 at 11:37 PM, Alex -  wrote:
> >> > Hmm...
> >> > how can that be? This is happening every day, so it's not a one-off that
> >> > happens once in the morning and then in the afternoon. There is also no
> >> > other task running on the system; it's dedicated to Postgres.
> >> > Could Autovacuum cause problems? Starting to invoke ANALYZE at the
> >> > beginning of the day but then keeping quiet till the day timestamp breaks?
> >> > The thing is that I have 4 servers set up in a similar way and all have
> >> > exactly the same problem.
> >>
> >> What cron jobs are on that machine that run at night? Note that on
> >> many OSes, maintenance crons are scheduled in a dir something like
> >> /etc/cron.daily etc... On my laptop they all run at midnight. I'm
> >> wondering if they're blowing out your cache so that you just don't
> >> have the same performance the first time you hit a particular dataset
> >> after they've run. Just a guess. You could try disabling them for a
> >> day and see what happens.
> >>
> >> --
> >> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> >> To make changes to your subscription:
> >> http://www.postgresql.org/mailpref/pgsql-general
> >
> 
> 
> 
> -- 
> When fascism comes to America, it will be intolerance sold as diversity.

Re: [GENERAL] pulling metadata creation strings from metadata tables?

2009-12-18 Thread Craig Ringer

On 19/12/2009 2:24 PM, Gauthier, Dave wrote:

Hi:

Are metadata creation strings, in sql form, stored anywhere in the
metadata tables?


Not AFAIK, no.

How would you handle subsequent ALTER commands and the like? What if you 
ALTERed two tables in a way that could only succeed if done in a 
particular order (such as creating a foreign key relationship between 
two tables) ?


Have a look at pg_dump. It knows how to create appropriate DDL to 
re-create particular schema objects, and can handle the various ordering 
and dependency issues involved.
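
For completeness, individual DDL fragments can also be regenerated from the
catalogs with the built-in deparsing functions. A minimal sketch (the object
names are placeholders):

  SELECT pg_get_viewdef('some_view'::regclass, true);
  SELECT pg_get_indexdef(i.indexrelid)
    FROM pg_index i WHERE i.indrelid = 'foo'::regclass;
  SELECT pg_get_constraintdef(c.oid)
    FROM pg_constraint c WHERE c.conrelid = 'foo'::regclass;

For whole schemas, though, pg_dump remains the sane route.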


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Scott Marlowe
According to your original post, you do selects in step 1 and 2...  Or
is this a different job and I've lost the thread (happens to me plenty
:) )

1. Selects about 20 Records from Table A
   - loops through the list and deletes in total about 50k records in Table B
2. For each record from Table A it then selects Records from Table C
   - loops through these records, about 50K in total
   - for each, runs a query across 3 Tables, 10-20M records
   - inserts a record in Table B .. about 50K
3. Returns some stats on the whole operation (100 records).

On Sat, Dec 19, 2009 at 12:07 AM, Alex -  wrote:
> On a 2nd thought... where does the cache come into play when I only do
> inserts and no selects?
> Alex
>
>> Date: Fri, 18 Dec 2009 23:45:07 -0700
>> Subject: Re: [GENERAL] PL/Perl Performance Problems
>> From: scott.marl...@gmail.com
>> To: ainto...@hotmail.com
>> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
>>
>> On Fri, Dec 18, 2009 at 11:37 PM, Alex -  wrote:
>> > Hmm...
> >> > how can that be? This is happening every day, so it's not a one-off that
> >> > happens once in the morning and then in the afternoon. There is also no
> >> > other task running on the system; it's dedicated to Postgres.
> >> > Could Autovacuum cause problems? Starting to invoke ANALYZE at the
> >> > beginning of the day but then keeping quiet till the day timestamp breaks?
> >> > The thing is that I have 4 servers set up in a similar way and all have
> >> > exactly the same problem.
>>
>> What cron jobs are on that machine that run at night? Note that on
>> many OSes, maintenance crons are scheduled in a dir something like
>> /etc/cron.daily etc... On my laptop they all run at midnight. I'm
>> wondering if they're blowing out your cache so that you just don't
>> have the same performance the first time you hit a particular dataset
>> after they've run. Just a guess. You could try disabling them for a
>> day and see what happens.
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
When fascism comes to America, it will be intolerance sold as diversity.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

On a 2nd thought... where does the cache come into play when I only do inserts 
and no selects?
Alex

> Date: Fri, 18 Dec 2009 23:45:07 -0700
> Subject: Re: [GENERAL] PL/Perl Performance Problems
> From: scott.marl...@gmail.com
> To: ainto...@hotmail.com
> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> 
> On Fri, Dec 18, 2009 at 11:37 PM, Alex -  wrote:
> > Hmm...
> > how can that be? This is happening every day, so it's not a one-off that
> > happens once in the morning and then in the afternoon. There is also no other
> > task running on the system; it's dedicated to Postgres.
> > Could Autovacuum cause problems? Starting to invoke ANALYZE at the
> > beginning of the day but then keeping quiet till the day timestamp breaks?
> > The thing is that I have 4 servers set up in a similar way and all have
> > exactly the same problem.
> 
> What cron jobs are on that machine that run at night?  Note that on
> many OSes, maintenance crons are scheduled in a dir something like
> /etc/cron.daily etc...  On my laptop they all run at midnight.  I'm
> wondering if they're blowing out your cache so that you just don't
> have the same performance the first time you hit a particular dataset
> after they've run.  Just a guess.  You could try disabling them for a
> day and see what happens.
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

I have the standard ones and no others. 
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly

ls -l /etc/cron*
-rw-r--r-- 1 root root    0 Apr  6  2009 /etc/cron.deny
-rw-r--r-- 1 root root  255 Jan  6  2007 /etc/crontab

/etc/cron.d:
-rw-r--r-- 1 root root 192 Mar 12  2009 sysstat

/etc/cron.daily:
-rwxr-xr-x 1 root root  379 Mar 28  2007 0anacron
lrwxrwxrwx 1 root root   39 Apr  6  2009 0logwatch -> /usr/share/logwatch/scripts/logwatch.pl
-rwxr-xr-x 1 root root  118 Jan 21  2009 cups
-rwxr-xr-x 1 root root  180 Mar 12  2009 logrotate
-rwxr-xr-x 1 root root  418 Jan  6  2007 makewhatis.cron
-rwxr-xr-x 1 root root  137 Mar 12  2009 mlocate.cron
-rwxr-xr-x 1 root root 2181 Jun 21  2006 prelink
-rwxr-xr-x 1 root root  296 Jan 21  2009 rpm
-rwxr-xr-x 1 root root  328 Feb 26  2009 tmpwatch

/etc/cron.hourly:
-rwxr-xr-x 1 root root 71 Jan 10  2007 mcelog.cron

/etc/cron.monthly:
-rwxr-xr-x 1 root root 381 Mar 28  2007 0anacron

/etc/cron.weekly:
-rwxr-xr-x 1 root root 380 Mar 28  2007 0anacron
-rwxr-xr-x 1 root root 414 Jan  6  2007 makewhatis.cron
> Date: Fri, 18 Dec 2009 23:45:07 -0700
> Subject: Re: [GENERAL] PL/Perl Performance Problems
> From: scott.marl...@gmail.com
> To: ainto...@hotmail.com
> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> 
> On Fri, Dec 18, 2009 at 11:37 PM, Alex -  wrote:
> > Hmm...
> > how can that be? This is happening every day, so it's not a one-off that
> > happens once in the morning and then in the afternoon. There is also no other
> > task running on the system; it's dedicated to Postgres.
> > Could Autovacuum cause problems? Starting to invoke ANALYZE at the
> > beginning of the day but then keeping quiet till the day timestamp breaks?
> > The thing is that I have 4 servers set up in a similar way and all have
> > exactly the same problem.
> 
> What cron jobs are on that machine that run at night?  Note that on
> many OSes, maintenance crons are scheduled in a dir something like
> /etc/cron.daily etc...  On my laptop they all run at midnight.  I'm
> wondering if they're blowing out your cache so that you just don't
> have the same performance the first time you hit a particular dataset
> after they've run.  Just a guess.  You could try disabling them for a
> day and see what happens.

Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Scott Marlowe
On Fri, Dec 18, 2009 at 11:37 PM, Alex -  wrote:
> Hmm...
> how can that be? This is happening every day, so it's not a one-off that
> happens once in the morning and then in the afternoon. There is also no other
> task running on the system; it's dedicated to Postgres.
> Could Autovacuum cause problems? Starting to invoke ANALYZE at the
> beginning of the day but then keeping quiet till the day timestamp breaks?
> The thing is that I have 4 servers set up in a similar way and all have
> exactly the same problem.

What cron jobs are on that machine that run at night?  Note that on
many OSes, maintenance crons are scheduled in a dir something like
/etc/cron.daily etc...  On my laptop they all run at midnight.  I'm
wondering if they're blowing out your cache so that you just don't
have the same performance the first time you hit a particular dataset
after they've run.  Just a guess.  You could try disabling them for a
day and see what happens.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

Hmm... how can that be? This is happening every day, so it's not a one-off that 
happens once in the morning and then in the afternoon. There is also no other task 
running on the system; it's dedicated to Postgres.
Could Autovacuum cause problems? Starting to invoke ANALYZE at the 
beginning of the day but then keeping quiet till the day timestamp breaks?
The thing is that I have 4 servers set up in a similar way and all have exactly 
the same problem.

> Date: Fri, 18 Dec 2009 23:00:16 -0700
> Subject: Re: [GENERAL] PL/Perl Performance Problems
> From: scott.marl...@gmail.com
> To: ainto...@hotmail.com
> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> 
> Note that you seem to have a lot more IO wait in the first run than in
> the second, which means that the task is hitting the disks more in the
> first run than in the second one.  Once IO wait starts to climb,
> performance starts to dive, generally.

[GENERAL] pulling metadata creation strings from metadata tables?

2009-12-18 Thread Gauthier, Dave
Hi:

Are metadata creation strings, in sql form, stored anywhere in the metadata 
tables?  So if I...

create table foo (x int);

I would like to pull that same string from somewhere.

Thanks in Advance


Re: [GENERAL] Extracting SQL from logs in a usable format

2009-12-18 Thread Greg Smith

Chris Ernst wrote:

I started writing my own log parser to pull the statements from the
postgres logs, but as I get into the details, it's not quite as
straightforward as I had thought.  Keeping track of which client
connections have prepared queries, merging the correct arguments and
maintaining concurrency are all critical and far from trivial.
  
I hope you're using the CSV format logs, which should make the job a lot 
easier than the standard text one.


--
Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Selecting from table into an array var

2009-12-18 Thread Postgres User
On Fri, Dec 18, 2009 at 9:53 PM, Merlin Moncure  wrote:
> On Fri, Dec 18, 2009 at 11:35 PM, Pavel Stehule  
> wrote:
>> 2009/12/19 Postgres User :
>>> Hi,
>>>
>>> I'm trying to write a very simple function statement to select a
>>> single integer field from a table and save it into an int array. For
>>> some reason I can't seem to find the correct syntax:
>>>
>>> CREATE TABLE sample (
>>>    id  integer
>>> );
>>>
>>> and then within a function:
>>>
>>> my_array int[];
>>> my_array = SELECT ARRAY(id) FROM sample;
>>>
>>>
>>> This syntax and variations of it don't work.  Can anyone show me the
>>> correct approach?
>>>
>>
>> Hello
>>
>> please try SELECT ARRAY(SELECT id FROM sample) or SELECT array_agg(id)
>> FROM sample - if you have 8.4
>
> yup:
>
> array() vs array_agg() vs array[]...which to use?
>
> *) use array[] when building list of scalar values
> *) use array_agg when aggregating (you need to group by something)
> *) use array() everywhere else
>
> merlin
>

Thanks for the replies.  I had already tried array_agg (on 8.4) and
ultimately found that the errors were caused by a recursive query.
When I replace the recursive query with a basic SELECT statement, the
code below works.
Apparently, you cannot combine an aggregate function such as
array_agg() with a recursive SQL statement.  This may be a PG bug.

For example, this fails:

DECLARE
cat_list integer[];

BEGIN
    SELECT array_agg(category_id) INTO cat_list FROM (
        WITH RECURSIVE subcategory AS
        (
            SELECT * FROM category
            WHERE category_id = p_category_id

            UNION ALL

            SELECT c.*
            FROM category AS c
            INNER JOIN subcategory AS sc ON (c.category_id = sc.parent_id)
        )
        SELECT category_id FROM subcategory
        ORDER BY Coalesce(parent_id, 0) DESC
    ) c;
END;

with this table def

CREATE TABLE "category" (
  "category_id" SERIAL,
  "parent_id" INTEGER,
  "category_name" VARCHAR(50)
) WITHOUT OIDS;
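
For what it's worth, a sketch of one possible rewrite that hoists the CTE to
the top level so the aggregate runs over its result. Untested: whether
PL/pgSQL on 8.4 accepts INTO on a WITH query, and whether this sidesteps the
failure, are assumptions here:

WITH RECURSIVE subcategory AS
(
    SELECT * FROM category
    WHERE category_id = p_category_id

    UNION ALL

    SELECT c.*
    FROM category AS c
    INNER JOIN subcategory AS sc ON (c.category_id = sc.parent_id)
)
SELECT array_agg(category_id) INTO cat_list FROM subcategory;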

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Scott Marlowe
Note that you seem to have a lot more IO wait in the first run than in
the second, which means that the task is hitting the disks more in the
first run than in the second one.  Once IO wait starts to climb,
performance starts to dive, generally.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Selecting from table into an array var

2009-12-18 Thread Merlin Moncure
On Fri, Dec 18, 2009 at 11:35 PM, Pavel Stehule  wrote:
> 2009/12/19 Postgres User :
>> Hi,
>>
>> I'm trying to write a very simple function statement to select a
>> single integer field from a table and save it into an int array. For
>> some reason I can't seem to find the correct syntax:
>>
>> CREATE TABLE sample (
>>    id  integer
>> );
>>
>> and then within a function:
>>
>> my_array int[];
>> my_array = SELECT ARRAY(id) FROM sample;
>>
>>
>> This syntax and variations of it don't work.  Can anyone show me the
>> correct approach?
>>
>
> Hello
>
> please try SELECT ARRAY(SELECT id FROM sample) or SELECT array_agg(id)
> FROM sample - if you have 8.4

yup:

array() vs array_agg() vs array[]...which to use?

*) use array[] when building list of scalar values
*) use array_agg when aggregating (you need to group by something)
*) use array() everywhere else
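
A minimal sketch of all three, assuming the sample(id) table from the
original post:

  SELECT ARRAY[1, 2, 3];                 -- array[]: explicit list of scalars
  SELECT array_agg(id) FROM sample;      -- array_agg(): aggregate (8.4+)
  SELECT ARRAY(SELECT id FROM sample);   -- array(): wrap a subquery's result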

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Selecting from table into an array var

2009-12-18 Thread Pavel Stehule
2009/12/19 Postgres User :
> Hi,
>
> I'm trying to write a very simple function statement to select a
> single integer field from a table and save it into an int array. For
> some reason I can't seem to find the correct syntax:
>
> CREATE TABLE sample (
>    id  integer
> );
>
> and then within a function:
>
> my_array int[];
> my_array = SELECT ARRAY(id) FROM sample;
>
>
> This syntax and variations of it don't work.  Can anyone show me the
> correct approach?
>

Hello

please try SELECT ARRAY(SELECT id FROM sample) or SELECT array_agg(id)
FROM sample - if you have 8.4

Regards
Pavel Stehule

> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Selecting from table into an array var

2009-12-18 Thread Postgres User
Hi,

I'm trying to write a very simple function statement to select a
single integer field from a table and save it into an int array. For
some reason I can't seem to find the correct syntax:

CREATE TABLE sample (
id  integer
);

and then within a function:

my_array int[];
my_array = SELECT ARRAY(id) FROM sample;


This syntax and variations of it don't work.  Can anyone show me the
correct approach?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

I run Autovacuum. I run a Vacuum Full Analyze every Sat. I re-index the tables 
every Sat.

> Date: Fri, 18 Dec 2009 21:20:23 -0300
> From: alvhe...@commandprompt.com
> To: ainto...@hotmail.com
> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PL/Perl Performance Problems
> 
> Alex - wrote:
> > 
> > Tom, I am logging these stats now, but I am having a similar issue. Both 
> > jobs, in the morning and afternoon, insert about 400k records (200k each).
> > In the morning that job takes 450 sec, in the afternoon only 150. No select; 
> > simply parsing a file and inserting the records.
> > These two tables don't grow, as at night a cleanup job removes about the 
> > same amount of historical data (older than about 40 days). 
> > It's kind of the same behavior and I don't really see where the swapping 
> > fits in.
> 
> Are you running vacuum of some sort on this table regularly?
> 
> -- 
> Alvaro Herrera                http://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alvaro Herrera
Alex - wrote:
> 
> Tom, I am logging these stats now, but I am having a similar issue. Both jobs, 
> in the morning and afternoon, insert about 400k records (200k each).
> In the morning that job takes 450 sec, in the afternoon only 150. No select; 
> simply parsing a file and inserting the records.
> These two tables don't grow, as at night a cleanup job removes about the same 
> amount of historical data (older than about 40 days). 
> It's kind of the same behavior and I don't really see where the swapping fits 
> in.

Are you running vacuum of some sort on this table regularly?

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extracting SQL from logs in a usable format

2009-12-18 Thread Chris Ernst
Rory Campbell-Lange wrote:
> On 18/12/09, Chris Ernst (cer...@esoft.com) wrote:
>> I have a project where I need to be able to capture every query from a
>> production system into a file such that I can "replay" them on a staging
>> system.  Does such a thing already exist or should I start writing my
>> own log parser?
> 
> I'm sure there are a number of log replay systems already in existence.
> Perhaps you could use Londiste, and introduce a lag to the replication
> process if it is required? 
> http://skytools.projects.postgresql.org/doc/londiste.ref.html#toc17
> 
> A very simple way of doing this is to log all of the SQL statements by
> setting the postgresql.conf parameter log_min_duration_statement to 0.
> You can then easily parse out the SQL statements from the log file.
> I have done this before quite successfully when wishing to replay a long
> set of SQL statements to test an upgrade of a PostgreSQL server.

Hi Rory,

Thank you for the quick reply.

Londiste isn't really an option as it (apparently) would only get
INSERT, UPDATE and DELETE queries.  I would want to capture every query
that is run, including SELECTs.  Plus, the production master is already
running slony1, and I don't think they will play nice together.

My goal is to be able to replay a set of actual production traffic on
the staging server, starting from a snapshot at the point where the
statement logging began.  Then make some changes (tweak settings,
upgrade versions, make DDL changes, etc.) and rerun the same set of
statements to analyze the results with pgFouine.

I started writing my own log parser to pull the statements from the
postgres logs, but as I get into the details, it's not quite as
straightforward as I had thought.  Keeping track of which client
connections have prepared queries, merging the correct arguments and
maintaining concurrency are all critical and far from trivial.

Basically I'm curious if anyone has already created something that does
this or am I treading into uncharted waters?  I've been googling around
a bit and haven't come up with anything yet.


Thanks again,

- Chris

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

Tom, I am logging these stats now, but I am having a similar issue. Both jobs, 
in the morning and afternoon, insert about 400k records (200k each).
In the morning that job takes 450 sec, in the afternoon only 150. No select; 
simply parsing a file and inserting the records.
These two tables don't grow, as at night a cleanup job removes about the same 
amount of historical data (older than about 40 days). 
It's kind of the same behavior and I don't really see where the swapping fits in.
Here are the stats for this 400k inserts in the morning.
vmstat -s 10
     18482752  total memory
     18381116  used memory
      7014556  active memory
     11040672  inactive memory
       101636  free memory
       102168  buffer memory
     17757988  swap cache
      8193140  total swap
         1468  used swap
      8191672  free swap
      1781577 non-nice user cpu ticks
           13 nice user cpu ticks
        52619 system cpu ticks
     40813639 idle cpu ticks
       297259 IO-wait cpu ticks
          241 IRQ cpu ticks
         6253 softirq cpu ticks
            0 stolen cpu ticks
     26348975 pages paged in
     46203834 pages paged out
            0 pages swapped in
          361 pages swapped out
    116011398 interrupts
     20586721 CPU context switches
   1261071430 boot time
        27428 forks

[08:27:07] ~ $ vmstat 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff    cache   si   so    bi    bo    in    cs us sy id wa st
 1  1   1468 101976 102260 17750836    0    0    62   108    70    48  4  0 95  1  0
 1  0   1468  98996 102280 17753148    0    0  2090  6692  3066  3742  8  1 75 16  0
 0  1   1468  97376 102276 17755204    0    0  2462  2475  1723  4403  9  1 75 15  0
 0  1   1468 101308 102284 17750644    0    0  2608  3281  1741  4666 10  1 75 14  0
 0  1   1468  96852 102308 17755328    0    0  2567  2376  1707  4658 10  1 75 14  0
 0  1   1468 103104 102308 17748396    0    0  2815  3378  1813  5018 11  1 75 13  0
 0  1   1468  96264 102344 17755588    0    0  3014  2582  1814  5391 12  2 75 12  0
 1  0   1468  92816 102372 17758244    0    0  3141  3358  1840  5645 12  2 75 12  0
 1  0   1468 187184 102408 17664804    0    0  1956  3802  1622  3920 17  1 75  7  0
 0  2   1468 154272 102604 17684708    0    0  1547 50848 10246   602 12  4 70 15  0
 0  2   1468 125416 102692 17716672    0    0   299 20352  5416   273  3  2 65 31  0
 1  0   1468  98716 102908 17739988    0    0  1906 46720  9263   415 14  3 73  9  0
 1  0   1468 102608 103000 17703556    0    0  1932 11837  1339   348 22  1 73  4  0
 1  0   1468 107096 103388 17333928    0    0  1164  1328  1234   300 14  1 74 11  0
 0  1   1468 105224 103584 17336044    0    0  1702  2286  1354   362 17  0 75  8  0
 0  1   1468 108516 103628 17333236    0    0  1236  6901  2389   358 19  1 74  6  0
 1  0   1468  92184 103688 17349856    0    0   975  9852  3408   344 15  1 75  9  0
 1  0   1468 101324 103764 17340572    0    0  1042  8904  3174   334 17  1 75  8  0
 2  0   1468 102996 103616 17049288    0    0  1290 11402  3701   492 33  2 58  8  0
 1  1   1468  92336 103704 17060584    0    0  1091  8918  3076   336 40  1 50 10  0
 1  1   1468 102392 103824 17044704    0    0  1041  7743  2563   364 38  1 50 11  0
 1  2   1468  95864 103956 17037800    0    0  1432  1398  1519   609 28  0 50 22  0
 1  2   1468  94488 104212 17030460    0    0  1610  2448  1685   826 30  0 48 22  0
 1  2   1468 187308 104376 16940672    0    0  1751  3508  1752   994 29  1 40 31  0
 1  2   1468 159824 104708 16966516    0    0  1837  3309  1818   997 30  0 43 27  0
 1  1   1468 149196 105040 16976736    0    0  1966  2930  1812  1030 29  0 45 26  0
 1  1   1468 417112 105240 17004556    0    0  1962  2730  1727   915 18  0 61 20  0
 0  2   1468 389056 105628 17031620    0    0  1988  2594  1799   776  5  0 74 20  0

FINSELECT [08:32:42] ~ $ vmstat -s
     18482752  total memory
     18121980  used memory
      7993020  active memory
      9782060  inactive memory
       360772  free memory
       105860  buffer memory
     17059320  swap cache
      8193140  total swap
         1468  used swap
      8191672  free swap
      1803616 non-nice user cpu ticks
           13 nice user cpu ticks
        53801 system cpu ticks
     40905481 idle cpu ticks
       317949 IO-wait cpu ticks
          245 IRQ cpu ticks
         6648 softirq cpu ticks
            0 stolen cpu ticks
     26978271 pages paged in
     48982990 pages paged out
            0 pages swapped in
          361 pages swapped out
    116958150 interrupts
     21299728 CPU context switches
   1261071430 boot time
        27505 forks

FINSELECT [08:32:47] ~ $ vmstat 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff    cache   si   so    bi    bo    in    cs us sy id wa st
 1  1   1468 354412 105920 17065284    0    0    63   114    72    49  4  0 95  1  0
 0  2   1468 327640 106260 17091364    0    0  1932  2460  1744   743  5  0 75 20  0
 0  2   1468 318956 106472 17099540    0    0  1519  2248  1661

Re: [GENERAL] Extracting SQL from logs in a usable format

2009-12-18 Thread Rory Campbell-Lange
On 18/12/09, Chris Ernst (cer...@esoft.com) wrote:
> I have a project where I need to be able to capture every query from a
> production system into a file such that I can "replay" them on a staging
> system.  Does such a thing already exist or should I start writing my
> own log parser?

I'm sure there are a number of log replay systems already in existence.
Perhaps you could use Londiste, and introduce a lag to the replication
process if it is required? 
http://skytools.projects.postgresql.org/doc/londiste.ref.html#toc17

A very simple way of doing this is to log all of the SQL statements by
setting the postgresql.conf parameter log_min_duration_statement to 0.
You can then easily parse out the SQL statements from the log file.
I have done this before quite successfully when wishing to replay a long
set of SQL statements to test an upgrade of a PostgreSQL server.
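
For example (a sketch; "mydb" is a placeholder, and the per-database form is
an alternative to editing postgresql.conf):

  -- in postgresql.conf:  log_min_duration_statement = 0
  ALTER DATABASE mydb SET log_min_duration_statement = 0;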

Rory

-- 
Rory Campbell-Lange
Director
r...@campbell-lange.net

Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street London W1T 2AF
Registered in England No. 04551928

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Extended Query, flush or sync ?

2009-12-18 Thread Raimon Fernandez
Hello,

It's not clear to me whether I have to issue a Flush or a Sync after each step 
of an extended query.

It almost works for me when I send a Sync, but not when I send a 
Flush. With the Flush, the connection seems frozen, or at least, I don't get 
any data from Postgres.


- Send the Parse command
- Sync
- Receive the ParseComplete
- Sync
- Send the Bind
- Sync
- Receive the BindComplete
- Send the Execute
- Receive an error => "portal xxx does not exist"


thanks,

regards,


r.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Extracting SQL from logs in a usable format

2009-12-18 Thread Chris Ernst
Hi all,

I have a project where I need to be able to capture every query from a
production system into a file such that I can "replay" them on a staging
system.  Does such a thing already exist or should I start writing my
own log parser?

Thank you for your help.

Chris Ernst
eSoft, Inc.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] defining your own commands in PG?

2009-12-18 Thread Gauthier, Dave
Can you define your own commands in PG?  E.g., if users from other DBs use 
"describe foo" to get the metadata for foo, is there a way I can create a 
command "describe" equivalent to "\d"?

Thanks in Advance


Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

Thanks, I will check that out.

> To: ainto...@hotmail.com
> CC: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PL/Perl Performance Problems 
> Date: Fri, 18 Dec 2009 15:36:15 -0500
> From: t...@sss.pgh.pa.us
> 
> Alex -  writes:
> > I actually looked at that too, but there is no swapping going on.
> 
> If you were only watching for swapping, that wouldn't catch what I'm
> talking about.  Replacing cached disk buffers with other disk data
> doesn't count as swapping in any system I've used.
> 
>   regards, tom lane

Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Tom Lane
Alex -  writes:
> I actually looked at that too, but there is no swapping going on.

If you were only watching for swapping, that wouldn't catch what I'm
talking about.  Replacing cached disk buffers with other disk data
doesn't count as swapping in any system I've used.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

I actually looked at that too, but there is no swapping going on. The system 
also has 16GB of memory allocated to Postgres, and during the processing there 
is no other process really active.

> To: ainto...@hotmail.com
> CC: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PL/Perl Performance Problems 
> Date: Fri, 18 Dec 2009 15:25:16 -0500
> From: t...@sss.pgh.pa.us
> 
> Alex -  writes:
> > Now here is what I noticed.
> > a) if I run it in the morning, processing starts very slow, but after a few 
> > thousand records it will speed up until I actually get about 100 records 
> > processed per millisecond.
> > b) it sometimes takes about 5-10k records till I really get up to speed,
> > meaning the first few hundred can take up to 1-2 minutes.
> > c) if I run the same job a few hrs later (we run it twice a day) it
> > generally runs much faster, even though we added more data to one of the
> > big tables it selects from.
> > d) this however starts again the next day (not much data has been changed
> > between the 2nd run of the day and the first one of the next one, but yet
> > it will start crawling again).
> 
> What this sounds like is it's fast when most of the data has been
> swapped in to kernel disk cache, and slow when the data actually has to
> be read from disk.  Probably the reason it's slow in the morning is
> there are some unrelated tasks that run overnight and cause memory to
> get filled with all their data instead of the PG tables.  You could
> check this theory by watching with vmstat or similar tool to see how
> much actual disk I/O is happening in the slow and fast cases.
> 
>   regards, tom lane
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Tom Lane
Alex -  writes:
> Now here is what I noticed.
> a) if I run it in the morning, processing starts very slow, but after a few 
> thousand records it will speed up until I actually get about 100 records 
> processed per millisecond.
> b) it sometimes takes about 5-10k records till I really get up to speed,
> meaning the first few hundred can take up to 1-2 minutes.
> c) if I run the same job a few hrs later (we run it twice a day) it generally
> runs much faster, even though we added more data to one of the big tables it
> selects from.
> d) this however starts again the next day (not much data has been changed
> between the 2nd run of the day and the first one of the next one, but yet it
> will start crawling again).

What this sounds like is it's fast when most of the data has been
swapped in to kernel disk cache, and slow when the data actually has to
be read from disk.  Probably the reason it's slow in the morning is
there are some unrelated tasks that run overnight and cause memory to
get filled with all their data instead of the PG tables.  You could
check this theory by watching with vmstat or similar tool to see how
much actual disk I/O is happening in the slow and fast cases.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PL/Perl Performance Problems

2009-12-18 Thread Alex -

Hi,
I am experiencing some strange behavior when executing a not too complicated 
pl/perl function. It does the following:
1. Selects about 20 Records from Table A
   - loops through the list and deletes in total about 50k records in Table B
2. For each record from Table A it then selects Records from Table C
   - loops through these records, about 50K in total
   - for each, runs a query across 3 Tables, 10-20M records
   - inserts a record in Table B .. about 50K
3. Returns some stats on the whole operation (100 records).

I am using PL/Perl for this and everything runs pretty well.
In a single execution, i.e. if I only process 1 record, the whole process is 
done within a few milliseconds.
The system has 16GB of memory and fast disks.
Now here is what I noticed:
a) if I run it in the morning, processing starts very slow, but after a few 
thousand records it will speed up until I actually get about 100 records 
processed per millisecond.
b) it sometimes takes about 5-10k records till I really get up to speed, 
meaning the first few hundred can take up to 1-2 minutes.
c) if I run the same job a few hrs later (we run it twice a day) it generally 
runs much faster, even though we added more data to one of the big tables it 
selects from.
d) this however starts again the next day (not much data has been changed 
between the 2nd run of the day and the first one of the next one, but yet it 
will start crawling again).
e) sometimes the 2nd run of the day can also be slow, and even though the data 
in the system does not change by a large margin, run times of the jobs vary by 
a large amount, from 17-50 minutes.
Here are my questions:
A) I am running Autovacuum. Is it possible that this for some reason slows 
down the system?
B) Are the query planner stats reset overnight, i.e. on date change? This 
behavior is consistent, i.e. every morning the processing is slow, afternoon 
generally much faster.
C) Does pl/perl have some memory issues?
D) If I run the above job processing 50k records, but stop the process after 
10K, then run it again, it will run fast for the first 10K and then slow down.
My assumption is that the query planner keeps the stats for a particular record 
based on a time stamp (used in a select) and every day it will forget about 
that. Is there a way to let the query planner keep stats for a function 
permanently?
Thanks for any advice.
Alex

Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-18 Thread Lincoln Yeoh

At 11:28 AM 12/18/2009, Scott Marlowe wrote:

On Thu, Dec 17, 2009 at 7:51 PM, David Boreham  wrote:
> Scott Marlowe wrote:
>>
>> I would recommend using a traffic shaping router (like the one built
>> into the linux kernel and controlled by tc / iptables) to simulate a
>> long distance connection and testing this yourself to see which
>> replication engine will work best for you.
>>
>
> Netem  :
> http://www.linuxfoundation.org/collaborate/workgroups/networking/netem
> We used this to make a test rig for Directory Server replication, to verify
> a
> re-design that added pipelining to the replication protocol.
> It's already in the modern Linuxes--just needs to be configured.

Wow, every time I turn around someone's built something cool from a set
of small sharp tools.  Thanks!


There's also a livecd with a WebUI to emulate WANs. I think it's 
basically a wrapper around tc/netem, but I find it convenient for 
quick and dirty tests.


http://wanem.sourceforge.net/

It seems you currently can only control outbound traffic from an 
interface, so you'd have to set stuff on both interfaces to "shape" 
upstream and downstream - this is not so convenient in some network topologies.


Regards,
Link.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Justifying a PG over MySQL approach to a project

2009-12-18 Thread David Boreham

Lincoln Yeoh wrote:
It seems you currently can only control outbound traffic from an 
interface, so you'd have to set stuff on both interfaces to "shape" 
upstream and downstream - this is not so convenient in some network 
topologies.

This is more a property of the universe than the software ;)

However, there are tricks that can be used with a virtual nic driver to 
give the effect of 'inbound' shaping
in the case that you don't have control over the sending interface.
In our project we deployed a dedicated shaping machine with a bunch of 
nics that connected to each test host.
Then we wrote scripts to set up the shaping and the test host routing to 
emulate the desired network characteristics.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Triggers made with plpythonu performance issue

2009-12-18 Thread sabrina miller
Hi everybody,
My requirements were:
 + make a table charge partitioned by carrier and month
 + summarize by charges
 + summarize by users
 + each summarization must be by month and several other columns

Here is the database

CREATE TABLE charges.charge
(
  id serial NOT NULL,
  transaction_id uuid NOT NULL,
  carrier_id integer NOT NULL,
  msisdn character varying(60) NOT NULL,
  partner_id integer NOT NULL,
  product_id integer NOT NULL,
  parent_id integer,
  retry_count integer NOT NULL,
  created_at timestamp with time zone NOT NULL,
  CONSTRAINT charge_pkey PRIMARY KEY (id)
);

CREATE TABLE charges.charge_summarized
(
  id serial NOT NULL,
  created_at timestamp with time zone NOT NULL,
  carrier_id integer NOT NULL,
  partner_id integer NOT NULL,
  product_id integer NOT NULL,
  retry_count integer NOT NULL,
  amount integer NOT NULL,
  CONSTRAINT charge_summarized_pkey PRIMARY KEY (id),
  -- renamed from client_charge_client_id_key: the backing unique index would
  -- otherwise collide with the identically named constraint on client_charge
  CONSTRAINT charge_summarized_group_key UNIQUE (carrier_id, partner_id,
      product_id, retry_count)
);

CREATE TABLE charges.client
(
  id serial NOT NULL,
  carrier_id integer NOT NULL,
  msisdn character varying(60) NOT NULL,
  collectibility numeric(5,2) NOT NULL,
  CONSTRAINT client_pkey PRIMARY KEY (id)
);


CREATE TABLE charges.client_charge
(
  id serial NOT NULL,
  client_id integer NOT NULL,
  date date NOT NULL,
  amount integer NOT NULL,
  CONSTRAINT client_charge_pkey PRIMARY KEY (id),
  CONSTRAINT client_charge_client_id_fkey FOREIGN KEY (client_id)
  REFERENCES charges.client (id) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED,
  CONSTRAINT client_charge_client_id_key UNIQUE (client_id, date)
);

OK, now the functions I made to complete all this. Inserts are made only into
charges.charge:

create or replace function charges.insert_into_charges() returns trigger as
$body1$
args = TD['new']
event = TD['event']
table_name = TD['table_name']
from datetime import datetime, timedelta
if event == 'INSERT':
  carrier = args['carrier_id']
  created_at = datetime.strptime(args['created_at'].split(" ")[0], "%Y-%m-%d")
  month, year = created_at.month, created_at.year
  next_month, next_year = (month + 1, year) if month < 12 else (1, year+1)
  target_table_name = "charge_%s_%04d%02d" % (carrier, year, month)
  while True:
    exist_table = len(plpy.execute("select relname from pg_stat_user_tables where relname = '%s';" % target_table_name))
    if not exist_table:
      sql = """create table charges.%(target_table_name)s
        (CONSTRAINT charge_%(carrier_id)s_carrier_id_check
          CHECK (
            carrier_id = '%(carrier_id)s' AND
            created_at >= '%(from_year)s-%(from_month)s-01' AND
            created_at < '%(to_year)s-%(to_month)s-01'
          )
        )
        INHERITS (charges.charge)
        WITH (
          OIDS=FALSE
        );

        create trigger summarize_%(target_table_name)s
        AFTER insert
        on charges.%(target_table_name)s
        for each row
        execute procedure charges.summarize();

        create trigger client_charge_sum_%(target_table_name)s
        AFTER insert
        on charges.%(target_table_name)s
        for each row
        execute procedure charges.client_charge_sum();
        """ % {"carrier_id": carrier, "from_year": year, "from_month": month,
               "to_year": next_year, "to_month": next_month,
               "target_table_name": target_table_name}
      try:
        # multithreading could have a race condition here. Better to ask for
        # forgiveness than permission.
        plpy.execute(sql)
      except:
        continue
    break
  keys, values = zip(*tuple([(x, y) for x, y in args.items() if y is not None]))
  sql = "insert into charges.%(target_table_name)s " \
        "(%(keys)s) VALUES (%(values)s);" % \
        {"carrier_id": carrier,
         "keys"   : ",".join(keys),
         "values" : ",".join(["'%s'" % x for x in values]),
         "target_table_name": target_table_name}
  plpy.execute(sql)
return "SKIP"
$body1$
language plpythonu;
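
(The post doesn't show how the function above is attached to the parent table;
presumably something along these lines, so that every insert into
charges.charge is rerouted. A sketch, not taken from the original post:)

create trigger insert_into_charges
  before insert on charges.charge
  for each row
  execute procedure charges.insert_into_charges();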

create or replace function charges.client_charge_sum() returns trigger as
$body3$
args = TD['new']
event = TD['event']
table_name = TD['table_name']
if event != 'INSERT':
  return
while True:
  # to populate clients if is needed
  sql = "select id from charges.client where msisdn='%s';" % args["msisdn"]
  clients = plpy.execute(sql)
  if len(clients):
    client_id = clients[0]['id']
  else:
    sql = """INSERT INTO charges.client ( carrier_id, msisdn, collectibility )
             VALUES ( %s, '%s', 0 ) RETURNING
             CURRVAL('charges.client_id_seq') as id;""" % (args['carrier_id'],
                                                           args['msisdn'])
  try:
    client_id = plpy.execute(sql)[0]['id']
  except:
    continue
  break
group_by_data = {
  "date": args['created_at'].split(" ")[0],
  "client_id": str(client_id),
}
filter_string = " and ".join(["%s='%s'" % (key, value) for key, value in
                              group_by_data.items()])
while True:
  sql = "select 1 from charges.client_charge where %s;" % (filter_string,)
  if len(plpy.execute(sql)):
    sql = "update charges.client_charge set amount=amount 

[GENERAL] column level encryption & select rules

2009-12-18 Thread Little, Douglas
Hi,
I've implemented a scheme for column-level encryption that uses table triggers 
(insert/update) to encrypt the input data, and a view
to perform the decrypt. It's working OK, but I'm having trouble altering my 
objects because of the dependencies.
To implement the scheme, I have to generate the view, the table trigger 
(insert/update), and a trigger function.

Currently the decrypt functions are embedded in the views which I want to get 
rid of.

Can I implement them as a select rule?
If the select rule directs the queries to the same table, does recursion occur?

The view is below.
What are the challenges, etc.?



Doug Little
Sr. Data Warehouse Architect | Enterprise Data Management | Orbitz Worldwide
500 W. Madison, Suite 1000  Chicago IL 60661| Office 312.260.2588 | Fax 
312.894.5164 | Cell 847-997-5741
douglas.lit...@orbitz.com

orbitz.com | ebookers.com | hotelclub.com | cheaptickets.com | ratestogo.com | asiahotels.com
-- View: ods_views.customer_payment_profile

-- DROP VIEW ods_views.customer_payment_profile;

CREATE OR REPLACE VIEW ods_views.customer_payment_profile AS
 SELECT customer_payment_profile.customer_payment_profile_id,
    customer_payment_profile.ref_point_of_sale_id,
    customer_payment_profile.last_used_date,
    CASE
        WHEN owwpiiview() THEN dba_work.owwdecrypt(customer_payment_profile.pi2_pii_card_number,
            customer_payment_profile.customer_payment_profile_id::text)::character varying(128)
        ELSE '**'::character varying::character varying(128)
    END AS pii_card_number,
    CASE
        WHEN owwpiiview() THEN dba_work.owwdecrypt(customer_payment_profile.pi2_pii_cardholder_name,
            customer_payment_profile.customer_payment_profile_id::text)::character varying(200)
        ELSE '**'::character varying::character varying(200)
    END AS pii_cardholder_name,
    customer_payment_profile.default_ind,
    customer_payment_profile.ref_payment_type_code,
    customer_payment_profile.expiration_date,
    customer_payment_profile.active_ind,
    customer_payment_profile.customer_member_id,
    CASE
        WHEN owwpiiview() THEN dba_work.owwdecrypt(customer_payment_profile.pi2_pii_address1,
            customer_payment_profile.customer_payment_profile_id::text)::character varying(200)
        ELSE '**'::character varying::character varying(200)
    END AS pii_address1,
    customer_payment_profile.address2,
    customer_payment_profile.address3,
    customer_payment_profile.address4,
    customer_payment_profile.city,
    customer_payment_profile.ref_state_province_code,
    customer_payment_profile.ref_country_code,
    customer_payment_profile.ref_postal_code,
    customer_payment_profile.po_box_ind,
    customer_payment_profile.intl_phone_dialing_code,
    CASE
        WHEN owwpiiview() THEN dba_work.owwdecrypt(customer_payment_profile.pi2_pii_phone,
            customer_payment_profile.customer_payment_profile_id::text)::character varying(200)
        ELSE '**'::character varying::character varying(200)
    END AS pii_phone,
    customer_payment_profile.phone_extension,
    customer_payment_profile.create_date,
    customer_payment_profile.modified_date,
    customer_payment_profile.ref_phone_country_code,
    customer_payment_profile.oltp_deleted_timestamp,
    customer_payment_profile.ods_load_timestamp,
    customer_payment_profile.ref_cc_type_code,
    customer_payment_profile.cvn_valid_ind,
    customer_payment_profile.issue_date,
    customer_payment_profile.pii_issue_number
   FROM customer.customer_payment_profile;


Re: [GENERAL] pg_dump and ON DELETE CASCADE problem

2009-12-18 Thread Tom Lane
Adrian Klaver  writes:
> On Thursday 17 December 2009 6:39:45 pm CG wrote:
>> CREATE INDEX packet_search_trigram_packet_uuid_idx
>> ON dpo.packet_search_trigram
>> USING hash
>> (packet_uuid);

> You might want to take a look at upgrading to 8.4.2 per this from the release 
> notes:

Actually, what I'd recommend is dropping that hash index and replacing
it with regular btree.  And the same for any other hash indexes you
have.  Hash indexes simply are not production grade (yet).
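
A minimal sketch of that replacement, using the index from the quoted schema
(the new index name is illustrative; build the btree first, then drop the
hash index):

CREATE INDEX packet_search_trigram_packet_uuid_btree_idx
  ON dpo.packet_search_trigram
  USING btree (packet_uuid);

DROP INDEX dpo.packet_search_trigram_packet_uuid_idx;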

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump and ON DELETE CASCADE problem

2009-12-18 Thread Adrian Klaver
On Thursday 17 December 2009 6:39:45 pm CG wrote:
> --- On Thu, 12/17/09, Adrian Klaver  wrote:
> > Would it be possible to see the table schemas and indices
> > ?
>
> Sure (you asked for it!!) :
>

>
> CREATE TABLE dpo.packet_search_trigram
> (
>   id integer NOT NULL DEFAULT
> nextval('packet_search_trigram_id_seq'::regclass), packet_uuid
> uniqueidentifier NOT NULL,
>   trigram_vector tsvector NOT NULL,
>   CONSTRAINT packet_search_trigram_id_pkey PRIMARY KEY (id),
>   CONSTRAINT packet_search_trigram_puuid_fkey FOREIGN KEY (packet_uuid)
>   REFERENCES dpo.packet (packet_uuid) MATCH SIMPLE
>   ON UPDATE CASCADE ON DELETE CASCADE
> )
> WITH (
>   OIDS=FALSE
> );
> ALTER TABLE dpo.packet_search_trigram OWNER TO postgres;
> GRANT ALL ON TABLE dpo.packet_search_trigram TO postgres WITH GRANT OPTION;
> GRANT ALL ON TABLE dpo.packet_search_trigram TO addgroup;
>
> CREATE INDEX packet_search_trigram_packet_uuid_idx
>   ON dpo.packet_search_trigram
>   USING hash
>   (packet_uuid);
>
> CREATE INDEX packet_search_trigram_trigram_vector_idx
>   ON dpo.packet_search_trigram
>   USING gin
>   (trigram_vector);


You might want to take a look at upgrading to 8.4.2 per this from the release 
notes:


"Fix hash index corruption (Tom)

The 8.4 change that made hash indexes keep entries sorted by hash value failed 
to update the bucket splitting and compaction routines to preserve the 
ordering. So application of either of those operations could lead to permanent 
corruption of an index, in the sense that searches might fail to find entries 
that are present. To deal with this, it is recommended to REINDEX any hash 
indexes you may have after installing this update. "
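
For example, against the hash index quoted above:

REINDEX INDEX dpo.packet_search_trigram_packet_uuid_idx;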

See also:
http://archives.postgresql.org/pgsql-committers/2009-11/msg2.php




-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to remove non-UTF values from a table?

2009-12-18 Thread Bruce Momjian
Phoenix Kiula wrote:
> On Fri, Dec 18, 2009 at 3:54 AM, Bruce Momjian  wrote:
> > Bruce Momjian wrote:
> >> pg_migrator (not pg_upgrade) has been used by many people to migrate
> >> from 8.3 to 8.4.  I just helped someone yesterday with a migration.
> >> pg_migrator threw an error because they had reindexed pg_largeobject,
> >> and pg_migrator was not prepared to handle that.  They also had an old
> >> cluster with a higher autovacuum_freeze_max_age than the new cluster,
> >> causing autovacuum freeze to run during the upgrade and remove clog
> >> files.  These are unusual cases, so I assume there are many people who
> >> successfully use pg_migrator that I never hear from.
> >>
> >> Both of these issues are fixed in pg_migrator 8.4.6, released yesterday.
> >> I also talked to Alvaro about using fixed oids to remove these migration
> >> restrictions in 8.5:
> >>
> >>         o  a user-defined composite data type
> >>         o  a user-defined array data type
> >>         o  a user-defined enum data type
> >>
> >> So, pg_migrator is a work in progress.  It currently requires migration
> >> testing and should be used by experienced Postgres users.  Someday
> >> pg_migrator will be easier to use and more reliable.
> >
> > Oh, and pg_migrator 8.4.6 works for migrations from 8.4 to 8.5 CVS.
> 
> 
> Is pg_migrator worth trying between 8.2.9 and 8.4.x?

No, it only handles 8.3 and later.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to remove non-UTF values from a table?

2009-12-18 Thread Phoenix Kiula
On Fri, Dec 18, 2009 at 3:54 AM, Bruce Momjian  wrote:
> Bruce Momjian wrote:
>> pg_migrator (not pg_upgrade) has been used by many people to migrate
>> from 8.3 to 8.4.  I just helped someone yesterday with a migration.
>> pg_migrator threw an error because they had reindexed pg_largeobject,
>> and pg_migrator was not prepared to handle that.  They also had an old
>> cluster with a higher autovacuum_freeze_max_age than the new cluster,
>> causing autovacuum freeze to run during the upgrade and remove clog
>> files.  These are unusual cases, so I assume there are many people who
>> successfully use pg_migrator that I never hear from.
>>
>> Both of these issues are fixed in pg_migrator 8.4.6, released yesterday.
>> I also talked to Alvaro about using fixed oids to remove these migration
>> restrictions in 8.5:
>>
>>         o  a user-defined composite data type
>>         o  a user-defined array data type
>>         o  a user-defined enum data type
>>
>> So, pg_migrator is a work in progress.  It currently requires migration
>> testing and should be used by experienced Postgres users.  Someday
>> pg_migrator will be easier to use and more reliable.
>
> Oh, and pg_migrator 8.4.6 works for migrations from 8.4 to 8.5 CVS.


Is pg_migrator worth trying between 8.2.9 and 8.4.x?

Thanks

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Charset Win1250 on Windows and Ubuntu

2009-12-18 Thread Durumdara
Hi!

I have software that uses PostgreSQL. This program (and website) was developed
and runs on Windows (XP/2003), with the native charset (win1250).

Last week we got a special request to install this software on a Linux
server.

Yesterday I installed Ubuntu 9.10 on VirtualBox, and tried to move the
database to Linux.

The first big problem is that when I tried to create a database with the same
parameters as on Windows, PGAdmin showed an error.
The error message is:
"Error: new encoding (Win1250) is incompatible with the encoding of the
template database (UTF8)."

OK, I changed to "template0".

Then I got an error that Win1250 is not good for collation hu_HU.UTF8.

When I tried to insert Hungarian chars (to check sort order), the C and
POSIX collations returned the wrong result - as I expected.

The Windows version of PG and Admin does not support collation, so these two
options are disabled (collation, character type).

But on Linux only the UTF version can sort rows in the right order.

The problem is that the client program is win1250-based, and I would have to
rewrite everything to get the same results.

Does anybody have some way, some tricky solution for this problem?
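
For reference, a sketch of the kind of CREATE DATABASE this needs on Linux. It
assumes a hu_HU system locale whose codeset is compatible with WIN1250 has
been generated on the machine, which may well not be available; in that case a
UTF8 database is the practical choice:

CREATE DATABASE mydb
  TEMPLATE template0
  ENCODING 'WIN1250'
  LC_COLLATE 'hu_HU'
  LC_CTYPE  'hu_HU';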

Thanks for your help:
dd


Re: [GENERAL] Extended Query using the Frontend/Backend Protocol 3.0

2009-12-18 Thread Raimon Fernandez

On 18/12/2009, at 2:26, John DeSoi wrote:

> 
> On Dec 17, 2009, at 11:13 AM, Raimon Fernandez wrote:
> 
>> I'm trying to integrate the extended query protocol with my libraries.
>> 
>> I'm sending a simple SELECT to validate the method, but I'm getting an 
>> Invalid Message Format.
> 
> I did not add up your byte count, but maybe this will help:
> 
> 
> (write-byte p stream)
> (write-int32 (+ int32-length (length name) 1 (length sql-string) 1 
> int16-length (* int32-length param-count)) stream) 
> (write-cstring name stream)
> (write-cstring sql-string stream)
> (write-int16 param-count stream)

I'm doing as you say:

mystatement => 11
select * from mytable; => 22

> (write-byte p stream) 
50 => P


> (write-int32 (+ int32-length (length name) 1 (length sql-string) 1 
> int16-length (* int32-length param-count)) stream) 

4 + 11 + 1 + 22 + 1 + 2 + 0 (param count=0) => 41

00 00 00 29 => length

> (write-cstring name stream)

6D7973746174656D656E74 00 => mystatement + null

> (write-cstring sql-string stream)

73656C656374202A2066726F6D206D797461626C653B 00 => select * from mytable; + null

> (write-int16 param-count stream)


00 00 => number of parameters, zero


Any ideas?

thanks,

regards,

r.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general