Re: [GENERAL] postgresql 8.1.4 to 8.2.3

2007-04-17 Thread Sven Willenberger
On Sat, 2007-04-14 at 22:01 +0200, Anton Melser wrote:
> On 14/04/07, Alain Roger <[EMAIL PROTECTED]> wrote:
> > After clicking on your link i got "invalid project" page :-(
> > and the whole page is empty...
> 
> Ouch Alain...
> Try
> http://pgfoundry.org/projects/pg-migrator/
> :-)
> But ask a single postgres oldskool cat (which I am patently not!) and
> you will get exactly the same answer "Do you value your data?". This
> is for people who want to tinker, or who simply can't dump/reload.
> Until it gets included in postgres core of course!
> 
> 
> > > > RTFM :-). Between major versions (8.1 -> 8.2) you DO need to dump and
> > > > reload. So do that...
> > >
> > > Actually, this isn't strictly true:
> > > http://pgfoundry.org/projects/pg-migrator/


So from the docs of pg-migrator:
"PostgreSQL version 8.2 changes the layout for values of type INET and
CIDR" - thus any tables with columns of those data types would have to
be dump/restored. 
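Something like the following should flag the affected columns ahead of
time (an untested sketch against the system catalogs):

SELECT n.nspname AS schema_name, c.relname AS table_name,
       a.attname AS column_name
  FROM pg_attribute a
  JOIN pg_class c ON c.oid = a.attrelid
  JOIN pg_namespace n ON n.oid = c.relnamespace
  JOIN pg_type t ON t.oid = a.atttypid
 WHERE t.typname IN ('inet', 'cidr')
   AND c.relkind = 'r'              -- ordinary tables only
   AND a.attnum > 0                 -- skip system columns
   AND NOT a.attisdropped
   AND n.nspname NOT IN ('pg_catalog', 'information_schema');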

My questions would be a) are there any other data types that have had
changes made to their on-disk representation? and b) have there been any
changes made to the table infrastructure layout from 8.1.x to 8.2 (that
would cause pg-migrator not to work)?

Sven




Re: [GENERAL] Anyone know a good opensource CRM that actually installs with Postgres?

2007-03-09 Thread Sven Willenberger
On Fri, 2007-03-09 at 01:22 +, Bradley Kieser wrote:
> I hope that someone has cracked this one because I have run into a brick 
> wall the entire week and after 3 all-nighters with bad installations, I 
> would appreciate hearing from others!
> 
> I am looking for a decent OpenSource CRM system that will run with 
> Postgres. SugarCRM seems to be the most popular but it's MySQL-centric 
> and its opensource parts are very restricted.
> 
> vTiger is also mySQL-centric.
> 
> I thought that I had a corker of a system with "centricCRM" but when it 
> came to actually installing it, I am 48 hours down and hacking through 
> screen after screen of installation errors. Basically, it relies way too 
> much on ant and Java tools. Nothing against Java but my experience with 
> ant used for installing PG schemas is a dismal track record of error and 
> frustration. centric CRM is no exception. Frankly, it just doesn't work 
> and after trying to hack out the ant into a PG script I have decided to 
> give it up as a bad job.
> 
> XRMS promises to run on PG but... it doesn't. The core system is fine, 
> but useless without the plugins. The Plugins are mySQL-specific again, I 
> spent several all-nighters previously hacking through installation 
> screens attempting to convert mysql to PG, making software patches... 
> you get the picture.
> 
> XLSuite looks very promising. Awesome interface, looks great... only 
> it's just not ready yet. It is a year away from being at full PG 
> production level.
> 
> Compiere doesn't support PG.
> 
> OpenTAPS the demo won't even work. And it's US-centric whereas we are in 
> the UK. A pity that it's so very much tied to the US as it could be very 
> good.
> 
> I have tried numerous other CRMs but all the same - either don't run on 
> PG, claim to but in reality don't or are simply pre-Alpha and not ready 
> for production use.
> 
> So if anyone has actually cracked this, please let me know! I really 
> need a good CRM.
> 
> It has to be OpenSource, not just out of principle, but we need to 
> integrate it into an existing business with established inhouse software 
> so we need to be able to customise the code.
> 

Stumbled across this one: http://www.hipergate.org/ which appears to be
CRM and groupware that works with PostgreSQL (it requires version 8.x, so
it appears to be relatively up to date development-wise). It is Java-based,
apparently, so it may not be so palatable for you.




Re: [GENERAL] timestamp and calculations.

2006-07-20 Thread Sven Willenberger
On Thu, 2006-07-20 at 11:57 +0200, Andreas Kretschmer wrote:
> Thor Tall <[EMAIL PROTECTED]> schrieb:
> 
> > Hi,
> > 
> > I am new to postgres sql and have a problem with an
> > sql statement which I hope you can help me with.
> > 
> > I want to do some calculation on the result of a
> > query.
> > 
> > I have a table tb_test which contains a timestamp
> > column my_timestamp.
> > 
> > My SQL statement should display my_timestamp as "00", "20", or "40":
> > all timestamps with minutes from "00" up to "20" should be displayed
> > as "00", from "20" up to "40" as "20", and from "40" up to "00" as
> > "40".
> 
> Something like this:
> 
> test=# select * from times;
>           t
> ---------------------
>  2006-07-20 10:00:00
>  2006-07-20 10:05:00
>  2006-07-20 10:10:00
>  2006-07-20 10:15:00
>  2006-07-20 10:20:00
>  2006-07-20 10:25:00
>  2006-07-20 10:35:00
>  2006-07-20 10:45:00
> (8 rows)
> 
> select t, 
>   extract(minute from t) / 20, 
>   case floor((extract(minute from t) / 20)) 
>   when 0 then '00' 
>   when 1 then '20' 
>   when 2 then '40' 
>   end 
> from times;
> 
>           t          | ?column? | case
> ---------------------+----------+------
>  2006-07-20 10:00:00 |        0 | 00
>  2006-07-20 10:05:00 |     0.25 | 00
>  2006-07-20 10:10:00 |      0.5 | 00
>  2006-07-20 10:15:00 |     0.75 | 00
>  2006-07-20 10:20:00 |        1 | 20
>  2006-07-20 10:25:00 |     1.25 | 20
>  2006-07-20 10:35:00 |     1.75 | 20
>  2006-07-20 10:45:00 |     2.25 | 40
> (8 rows)
> 
> 
> 
> 
> HTH, Andreas

Alternatively:

select lpad((floor(extract(minute from my_timestamp) / 20) * 20)::text, 2, '0') from tb_test;
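Run against Andreas' sample table above, that gives (output computed by
hand, so treat it as a sketch):

test=# select t, lpad((floor(extract(minute from t) / 20) * 20)::text, 2, '0') as bucket from times;
          t          | bucket
---------------------+--------
 2006-07-20 10:00:00 | 00
 2006-07-20 10:05:00 | 00
 2006-07-20 10:10:00 | 00
 2006-07-20 10:15:00 | 00
 2006-07-20 10:20:00 | 20
 2006-07-20 10:25:00 | 20
 2006-07-20 10:35:00 | 20
 2006-07-20 10:45:00 | 40
(8 rows)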

Sven




Re: [GENERAL] psql seems to hang during delete query

2006-07-20 Thread Sven Willenberger
On Thu, 2006-07-20 at 12:48 +0530, surabhi.ahuja wrote:
> so how much space should be free so that i am able to delete
>  
> i have a hard disk of size 130 G.
>  
>  
> thanks,
> regards
> Surabhi
> 
> 
> __
If you are trying to delete the entire contents of a table (and are not
using slony replication or similar) then a simple TRUNCATE <tablename>
will achieve the desired result and free up some disk space.
Alternatively, if you have indexes on the table, you could try dropping
the indexes (if they are bloated, you can probably regain some disk space
that way), deleting the rows you need to delete, and then creating the
indexes anew.
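A rough sketch of both approaches (table, column, and index names here
are hypothetical):

-- delete everything at once; reclaims the space immediately:
TRUNCATE mytable;

-- or, for a partial delete, sidestep index bloat by rebuilding:
DROP INDEX mytable_foo_idx;
DELETE FROM mytable WHERE created < '2006-01-01';
VACUUM mytable;
CREATE INDEX mytable_foo_idx ON mytable (foo);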

Sven




Re: [GENERAL] VACUUM FULL versus CLUSTER ON

2006-07-10 Thread Sven Willenberger
On Mon, 2006-07-10 at 10:50 -0500, Scott Marlowe wrote:
> On Sat, 2006-07-08 at 10:20, Joshua D. Drake wrote:
> > >
> > > Unfortunately it would appear that I cannot vacuum full either as I get an
> > > out of memory error:
> > >
> > >
> > > # - Memory -
> > >
> > > shared_buffers = 5000   # min 16, at least max_connections*2, 8KB each
> > > work_mem = 131072   # min 64, size in KB
> > > maintenance_work_mem = 524288   # min 1024, size in KB
> > > max_stack_depth = 4096  # min 100, size in KB
> > 
> > You could decrease your maintenance_work_mem, 
> > 
> > But honestly, at this point I would do the backup restore method.
> 
> Also, this kind of points out that you might not have enough swap
> space.  On most database servers there's enough hard drive space laying
> about to have as large a swap space as you'd like, and I can't count the
> number of times a large swap has given me enough to time to catch
> runaway processes and keep an ailing server up and running, albeit
> hobbling along, rather than having to worry about running out of virtual
> memory.
> 
> Unless the memory being allocated here just has to be real memory.  But
> I'm guessing not.  Sure, swapping is slow, but at least it will let some
> memory hungry processes finish.

The box has 8G of RAM and 10G of swap space available to it (almost none
of which is touched). The problem was that the VACUUM FULL process never
released any memory. With maintenance_work_mem set to 512MB, I would
think that it would be enforced such that any given connection would
only be allowed 512MB for maintenance. Why it is growing beyond the 1.5G
I allow system-wide for any given process eludes me right now (and why I
suspect a bad memory leak).

As per the other suggestions, I will end up doing a pg_dump/restore to
reclaim the lost space.

Sven




Re: [GENERAL] VACUUM FULL versus CLUSTER ON

2006-07-07 Thread Sven Willenberger


Sven Willenberger presumably uttered the following on 07/07/06 13:52:
> On Fri, 2006-07-07 at 10:41 -0700, Joshua D. Drake wrote:
>>>> Sincerely,
>>>>
>>>> Joshua D. Drake
>>> Doing a quick check reveals that the relation in question currently
>>> consumes 186GB of space (which I highly suspect is largely bloat).
>> Good lord.. .186 gig for a 300 million row table? Unless those are seriously
>> large rows, you have a TON of bloat.
>>
>> Joshua D. Drake
>>
> 
> Yes, that number came from the dbsize functions (in contrib) so I don't
> know if that includes the associated indexes as well. The rows are
> fairly large, yes, but not enough (IMO) to account for that size. It
> will be interesting to see the final size after the vacuum full (which
> is the method I have settled on to reclaim space this go round).
> 
> Sven
> 

Unfortunately it would appear that I cannot vacuum full either, as I get
an out of memory error:


# - Memory -

shared_buffers = 5000   # min 16, at least max_connections*2, 8KB each
work_mem = 131072   # min 64, size in KB
maintenance_work_mem = 524288   # min 1024, size in KB
max_stack_depth = 4096  # min 100, size in KB

/boot/loader.conf
kern.maxdsiz="1610612736"
kern.dfldsiz="891289600"


I have disabled other connections to the db except for slony (which will
not access the table in question). I begin a VACUUM FULL <tablename> and
start watching memory use constantly increase (top). It pushes to:

  PID USERNAME  PRI NICE   SIZE    RES STATE  C   TIME   WCPU    CPU COMMAND
61048 pgsql  -40  1640M  1472M getblk 1   6:58 16.75% 16.75% postgres

and then it bails:
ERROR:  out of memory
DETAIL:  Failed on request of size 78.

Server version is 8.0.3.

Is this a known issue?

Sven



Re: [GENERAL] VACUUM FULL versus CLUSTER ON

2006-07-07 Thread Sven Willenberger
On Fri, 2006-07-07 at 10:41 -0700, Joshua D. Drake wrote:
> > > Sincerely,
> > >
> > > Joshua D. Drake
> >
> > Doing a quick check reveals that the relation in question currently
> > consumes 186GB of space (which I highly suspect is largely bloat).
> 
> Good lord.. .186 gig for a 300 million row table? Unless those are seriously
> large rows, you have a TON of bloat.
> 
> Joshua D. Drake
> 

Yes, that number came from the dbsize functions (in contrib) so I don't
know if that includes the associated indexes as well. The rows are
fairly large, yes, but not enough (IMO) to account for that size. It
will be interesting to see the final size after the vacuum full (which
is the method I have settled on to reclaim space this go round).

Sven




Re: [GENERAL] VACUUM FULL versus CLUSTER ON

2006-07-07 Thread Sven Willenberger
On Fri, 2006-07-07 at 09:55 -0700, Joshua D. Drake wrote:
> On Friday 07 July 2006 08:19, Sven Willenberger wrote:
> > Postgresql 8.0.4 on FreeBSD 5.4
> >
> > I have a table consisting of some 300million rows that, every couple of
> > months, has 100 million rows deleted from it (and immediately vacuumed
> > afterward). Even though it gets routinely vacuumed (the only
> > deletions/updates are just the quarterly ones), the freespace map was
> > not increased in size to keep up with the growing size of the other
> > tables in the database which do experience many updates,etc.
> 
> Based on the size of the table, you may want to:
> 
> Backup the table
> Drop the table
> Restore the table
> 
> Is is possible that this will be faster in this instance.
> 
> Secondly this sounds like a perfect time for you to consider upgrading to 8.1
> and making use of table partitioning. That way you can just truncate the child
> table containing the old data.
> 
> Sincerely,
> 
> Joshua D. Drake

Doing a quick check reveals that the relation in question currently
consumes 186GB of space (which I highly suspect is largely bloat). The
delete was just run this past weekend as was the recreation of the
indexes. I have 50GB of disk space left; if I vacuum full, it does not
need to create a temporary copy of the relation and indexes like cluster
does, does it? At this point, I think CLUSTER ON is out of the question
due to the need to create the temporary table and indexes (I will run
out of space during the operation).

I do plan on migrating the whole mess to a new server which will run 8.1
(I had looked at inheritance for partitioning, I am glad to see that 8.1
took the concept and ran with it further :)  ) This new server will use
an external SAS array so I should simply be able to add another array as
the need arises and partition to it via tablespace.

Thanks to all who offered suggestions; it would appear that at this
stage my only option to buy some time is to try a vacuum full. My final
question: can I leave the indexes in place when I vacuum full? I assume
this will only operate on the table itself?

Sven




[GENERAL] VACUUM FULL versus CLUSTER ON

2006-07-07 Thread Sven Willenberger
Postgresql 8.0.4 on FreeBSD 5.4

I have a table consisting of some 300million rows that, every couple of
months, has 100 million rows deleted from it (and immediately vacuumed
afterward). Even though it gets routinely vacuumed (the only
deletions/updates are just the quarterly ones), the freespace map was
not increased in size to keep up with the growing size of the other
tables in the database which do experience many updates,etc.

I suspect that the table is suffering from bloat (not the indexes though
as I drop them prior to the huge delete, then create them anew). What
would be the recommended method for reclaiming the disk space lost due
to bloat? Dropping the 5 indexes on the table and doing a VACUUM FULL;
keeping the indexes and doing a VACUUM FULL (does FULL perform the same
disk-moving operations on the indexes as it does on the actual table?);
dropping the indexes except the primary key and doing a CLUSTER ON
primary key; or keeping the indexes and doing a CLUSTER ON primary key
(again, does CLUSTER ON just operate on the table proper?)

What are the caveats on using one over the other? I imagine any of the
options I listed above will involve a full table lock. Are there any
differences in the amount of free disk space required for each method?

Thanks,

Sven




Re: [GENERAL] out of memory for query result

2006-05-03 Thread Sven Willenberger
On Wed, 2006-05-03 at 13:16 -0400, Douglas McNaught wrote:
> Sven Willenberger <[EMAIL PROTECTED]> writes:
> 
> > On Sat, 2006-04-22 at 15:08 -0400, Tom Lane wrote:
> >> Francisco Reyes <[EMAIL PROTECTED]> writes:
> >> > What resource do I need to increase to avoid the error above?
> >> 
> >> Process memory allowed to the client; this is not a server-side error.
> >> 
> >
> > I am experiencing an "out of memory" situation as well on large query
> > results, even with allowing 2G process memory to the client:
> 
> The server settings don't have anything to do with this problem.  The
> client is trying to read the entire result set into memory before it
> writes it out (that's just the way it works).  You can do it in
> smaller chunks by using a cursor with DECLARE and FETCH.
> 
> -Doug

OK, that I do see; I guess I never noticed it on the other (i386)
machine as the memory never exceeded the max amount allowed by the
tunables. That raises a question though:

Using identical data and identical queries, why would the amd64 system
using PostgreSQL 8.1.3 use some two-thirds more memory to store the query
results before output than the i386 system using PostgreSQL 8.0.4?

amd64 system:
  PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU CMD
 8193 svenw   1   50  1516M  1475M ttyin  1   0:15  0.00% psql

i386 system:
  PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU CMD
61263 svenw   1   50   972M   956M ttyin  2   0:27  0.00% psql

These are the final stages after the file has flushed (this was run on
the prior month's data which had fewer results). This would either point
to an issue with a) amd64 memory allocation or b) palloc interacting
with 64bit memory or c) some other change.

For now I will try the cursor technique; I assume that if this is run
from within a function returning setof <type> I will run into the
same issue?
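In the meantime, the cursor version of the query from my original post
would look something like this (the fetch size is arbitrary):

BEGIN;
DECLARE cdr_cur CURSOR FOR
    SELECT callstartdate, callenddate, callduration, calling_number,
           called_number, dest_type, sessionrate, sessioncost,
           quote_ident(callcenter) AS callcenter
      FROM cdrs_local
     WHERE callenddate BETWEEN '2006-04-01' AND '2006-04-30 23:59:59'
     ORDER BY callstartdate;
FETCH 100000 FROM cdr_cur;  -- repeat until it returns no rows
CLOSE cdr_cur;
COMMIT;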

Sven




Re: [GENERAL] out of memory for query result

2006-05-03 Thread Sven Willenberger
On Sat, 2006-04-22 at 15:08 -0400, Tom Lane wrote:
> Francisco Reyes <[EMAIL PROTECTED]> writes:
> > What resource do I need to increase to avoid the error above?
> 
> Process memory allowed to the client; this is not a server-side error.
> 

I am experiencing an "out of memory" situation as well on large query
results, even with allowing 2G process memory to the client:

PostgreSQL 8.1.3, FreeBSD 6.1RC amd64, 8GB RAM.

Relevant configs:
# cat /boot/loader.conf
kern.maxdsiz="2147483648"
kern.dfldsiz="1073741824"

from the kernel config file:
options SYSVSHM # SYSV-style shared memory
options SYSVMSG # SYSV-style message queues
options SYSVSEM # SYSV-style semaphores
options SHMMAXPGS=131072
options SEMMNI=128
options SEMMNS=512
options SEMUME=100
options SEMMNU=256

work_mem = 64MB
maintenance_work_mem = 512MB

The query result contains about 7.5 million rows and I am simply trying
to \o[utput] it to a file:

SELECT callstartdate, callenddate, callduration, calling_number,
called_number, dest_type, sessionrate, sessioncost,
quote_ident(callcenter) as callcenter from cdrs_local where callenddate
between '2006-04-01' and '2006-04-30 23:59:59' order by callstartdate;

When viewing the process in top, I see postgres and the psql client
using relatively little memory (I guess this is the disk-read part).
Then I see the psql process eat up memory till it hits the 2G mark
(imposed by the loader.conf tunable) and then "out of memory".

Removing the order by clause doesn't help, nor does reducing work_mem to
8MB. I also tried disabling the bitmap scan and sequence scan to no
avail. I don't know if this is related to the pg_restore memory issues
discussed in another thread or not.

This same query running on FreeBSD i386 (P4 xeon) using PostgreSQL 8.0.x
did not experience this problem.

Any ideas? How can I view the memory allocation and heap management in
the logfiles? (what do I need to set in postgresql.conf).

Sven




Re: [GENERAL] Best way to handle table trigger on update

2006-02-02 Thread Sven Willenberger
On Thu, 2006-02-02 at 10:16 -0500, Sven Willenberger wrote:
> On Thu, 2006-02-02 at 08:58 -0600, Justin Pasher wrote:
> > > -Original Message-
> > > From: Sven Willenberger [mailto:[EMAIL PROTECTED] 
> > > Sent: Wednesday, February 01, 2006 2:13 PM
> > > To: Justin Pasher
> > > Cc: pgsql-general@postgresql.org
> > > Subject: Re: [GENERAL] Best way to handle table trigger on update
> > > 
> > > 
> > > On Tue, 2006-01-31 at 13:45 -0600, Justin Pasher wrote:
> > > > Postgres 7.4.7 (I know, a little old, but we haven't had a chance to
> > > > upgrade)
> > > > 
> > > > I have a table that stores menu items for a side navigation 
> > > menu for a web
> > > > site. Each menu item has a "position" column set that 
> > > determines where to
> > > > put the menu item in the display. At any given time, the 
> > > menu items should
> > > > not have any conflicting positions and should be 
> > > sequential. For example
> > > > 
> > > >  id |        name       | position
> > > > ----+-------------------+----------
> > > >   1 | About Us          |        1
> > > >   2 | History           |        2
> > > >   3 | Support           |        3
> > > >   4 | Job Opportunities |        4
> > > >   5 | Sitemap           |        5
> > > > 
> > > > ...
> > > > 
> > > > I have an UPDATE trigger defined on the table to handle keeping the
> > > > positions correct.
> > > > 
> > > > CREATE TRIGGER "update_menu_item" BEFORE UPDATE ON 
> > > "menu_items" FOR EACH ROW
> > > > EXECUTE PROCEDURE update_menu_item();
> > > > 
> > > > When I update an existing row (say ID 3) with a new 
> > > position (let's say 1),
> > > > the trigger will bump the menu items with a lower position up by one
> > > > (position 2 becomes 3, 1 becomes 2) and everything is back 
> > > to normal. The
> > > > catch is the trigger performs this position bumping by 
> > > making an update on
> > > > the menu items table, thus firing the trigger again for 
> > > each updated row
> > > > (and leading to chaos). Currently, the only workaround I 
> > > have found is to
> > > > drop the trigger at the start of the stored procedure, make 
> > > the updates,
> > > > then recreate the trigger.
> > > 
> > > Rather than using a trigger why not create a function to do 
> > > the update?
> > > The following will do the trick with the only modification needed to
> > > your table is the addition of the boolean column "isupdate" 
> > > which should
> > > default to false. The two arguments taken by the function are the
> > > current position of the intended menu item and its new target 
> > > position:
> > > 
> > > create or replace function update_menu_item(int,int) returns void as '
> > > update menu_items set isupdate = true where position = $1;
> > > update menu_items set position = case when $1 > $2 THEN 
> > > position +1 when
> > > $2 > $1 then position - 1 else position end 
> > > where position <= case when $1 > $2 then $1 else $2 end and 
> > > position >=
> > > case when $1 > $2 then $2 else $1 end and isupdate = false;
> > > update menu_items set position = $2 where position = $1 and isupdate;
> > > update menu_items set isupdate = false where isupdate = true;
> > > '
> > > LANGUAGE sql volatile;
> > > 
> > > Then if you want to move Job Opportunities from position 4 to position
> > > 2, just call the function:
> > > select update_menu_item(4,2);
> > > 
> > > HTH,
> > > 
> > > Sven
> > 
> > 
> > This would work, but my goal is to create something that is transparent to
> > the user that is inserting the data (i.e. they perform a normal
> > INSERT/UPDATE on the table and "It Just Works"). Thanks for the suggestion.
> > 
> > 
> > Justin Pasher
> > 
> In that case you could create a rule: ON INSERT ... DO INSTEAD ... (and
> ON UPDATE ... DO INSTEAD ...) and invoke the function that way. Barring
> that, I think that dropping the trigger and re-adding it the way you
> have done is about the only way to avoid all that recursion.
> 
> Sven

I just realized that this won't work, as it suffers the same recursion
problem that the trigger does. Interesting puzzle, for which I believe
you have already found the optimal solution.




Re: [GENERAL] Best way to handle table trigger on update

2006-02-02 Thread Sven Willenberger
On Thu, 2006-02-02 at 08:58 -0600, Justin Pasher wrote:
> > -Original Message-
> > From: Sven Willenberger [mailto:[EMAIL PROTECTED] 
> > Sent: Wednesday, February 01, 2006 2:13 PM
> > To: Justin Pasher
> > Cc: pgsql-general@postgresql.org
> > Subject: Re: [GENERAL] Best way to handle table trigger on update
> > 
> > 
> > On Tue, 2006-01-31 at 13:45 -0600, Justin Pasher wrote:
> > > Postgres 7.4.7 (I know, a little old, but we haven't had a chance to
> > > upgrade)
> > > 
> > > I have a table that stores menu items for a side navigation 
> > menu for a web
> > > site. Each menu item has a "position" column set that 
> > determines where to
> > > put the menu item in the display. At any given time, the 
> > menu items should
> > > not have any conflicting positions and should be 
> > sequential. For example
> > > 
> > >  id |        name       | position
> > > ----+-------------------+----------
> > >   1 | About Us          |        1
> > >   2 | History           |        2
> > >   3 | Support           |        3
> > >   4 | Job Opportunities |        4
> > >   5 | Sitemap           |        5
> > > 
> > > ...
> > > 
> > > I have an UPDATE trigger defined on the table to handle keeping the
> > > positions correct.
> > > 
> > > CREATE TRIGGER "update_menu_item" BEFORE UPDATE ON 
> > "menu_items" FOR EACH ROW
> > > EXECUTE PROCEDURE update_menu_item();
> > > 
> > > When I update an existing row (say ID 3) with a new 
> > position (let's say 1),
> > > the trigger will bump the menu items with a lower position up by one
> > > (position 2 becomes 3, 1 becomes 2) and everything is back 
> > to normal. The
> > > catch is the trigger performs this position bumping by 
> > making an update on
> > > the menu items table, thus firing the trigger again for 
> > each updated row
> > > (and leading to chaos). Currently, the only workaround I 
> > have found is to
> > > drop the trigger at the start of the stored procedure, make 
> > the updates,
> > > then recreate the trigger.
> > 
> > Rather than using a trigger why not create a function to do 
> > the update?
> > The following will do the trick with the only modification needed to
> > your table is the addition of the boolean column "isupdate" 
> > which should
> > default to false. The two arguments taken by the function are the
> > current position of the intended menu item and its new target 
> > position:
> > 
> > create or replace function update_menu_item(int,int) returns void as '
> > update menu_items set isupdate = true where position = $1;
> > update menu_items set position = case when $1 > $2 THEN 
> > position +1 when
> > $2 > $1 then position - 1 else position end 
> > where position <= case when $1 > $2 then $1 else $2 end and 
> > position >=
> > case when $1 > $2 then $2 else $1 end and isupdate = false;
> > update menu_items set position = $2 where position = $1 and isupdate;
> > update menu_items set isupdate = false where isupdate = true;
> > '
> > LANGUAGE sql volatile;
> > 
> > Then if you want to move Job Opportunities from position 4 to position
> > 2, just call the function:
> > select update_menu_item(4,2);
> > 
> > HTH,
> > 
> > Sven
> 
> 
> This would work, but my goal is to create something that is transparent to
> the user that is inserting the data (i.e. they perform a normal
> INSERT/UPDATE on the table and "It Just Works"). Thanks for the suggestion.
> 
> 
> Justin Pasher
> 
In that case you could create a rule: ON INSERT ... DO INSTEAD ... (and
ON UPDATE ... DO INSTEAD ...) and invoke the function that way. Barring
that, I think that dropping the trigger and re-adding it the way you
have done is about the only way to avoid all that recursion.

Sven




Re: [GENERAL] Best way to handle table trigger on update

2006-02-01 Thread Sven Willenberger
On Tue, 2006-01-31 at 13:45 -0600, Justin Pasher wrote:
> Postgres 7.4.7 (I know, a little old, but we haven't had a chance to
> upgrade)
> 
> I have a table that stores menu items for a side navigation menu for a web
> site. Each menu item has a "position" column set that determines where to
> put the menu item in the display. At any given time, the menu items should
> not have any conflicting positions and should be sequential. For example
> 
>  id |        name       | position
> ----+-------------------+----------
>   1 | About Us          |        1
>   2 | History           |        2
>   3 | Support           |        3
>   4 | Job Opportunities |        4
>   5 | Sitemap           |        5
> 
> ...
> 
> I have an UPDATE trigger defined on the table to handle keeping the
> positions correct.
> 
> CREATE TRIGGER "update_menu_item" BEFORE UPDATE ON "menu_items" FOR EACH ROW
> EXECUTE PROCEDURE update_menu_item();
> 
> When I update an existing row (say ID 3) with a new position (let's say 1),
> the trigger will bump the menu items with a lower position up by one
> (position 2 becomes 3, 1 becomes 2) and everything is back to normal. The
> catch is the trigger performs this position bumping by making an update on
> the menu items table, thus firing the trigger again for each updated row
> (and leading to chaos). Currently, the only workaround I have found is to
> drop the trigger at the start of the stored procedure, make the updates,
> then recreate the trigger.

Rather than using a trigger why not create a function to do the update?
The following will do the trick; the only modification needed to
your table is the addition of the boolean column "isupdate", which should
default to false. The two arguments taken by the function are the
current position of the intended menu item and its new target position:

create or replace function update_menu_item(int,int) returns void as '
update menu_items set isupdate = true where position = $1;
update menu_items set position = case when $1 > $2 THEN position +1 when
$2 > $1 then position - 1 else position end 
where position <= case when $1 > $2 then $1 else $2 end and position >=
case when $1 > $2 then $2 else $1 end and isupdate = false;
update menu_items set position = $2 where position = $1 and isupdate;
update menu_items set isupdate = false where isupdate = true;
'
LANGUAGE sql volatile;

Then if you want to move Job Opportunities from position 4 to position
2, just call the function:
select update_menu_item(4,2);

HTH,

Sven

P.S. The function is pretty rough and I am sure could be better
optimized, but you get the idea of the logic flow there.




Re: [GENERAL] insert serial numbers

2006-01-04 Thread Sven Willenberger



Albert Vernon Smith presumably uttered the following on 01/03/06 13:36:
I figured it out myself. Not TOO difficult. I was just having a hard
time wading through the documentation before. Giving the answer out
here, just in case anyone else wants to see the solution (not using
reserved words ;-)).


1.  Made function:

CREATE FUNCTION "return_one_id" () RETURNS "trigger" AS '
DECLARE
my_id bigint;
BEGIN
select into my_id one_id from one where one_text=NEW.one_text;
NEW.one_id := my_id;
return NEW;
END;
' LANGUAGE "plpgsql"

2. Made trigger:

CREATE TRIGGER return_one_id BEFORE INSERT OR UPDATE ON two FOR EACH
ROW EXECUTE PROCEDURE return_one_id();


Voila!

-albert


On 3.1.2006, at 14:36, Albert Vernon Smith wrote:

I have two tables, listed as below. I'm inserting values for "text"
into table "two" (which must already exist as "text" values in table
"one"). When I do that, I'd like to also insert the associated
"one_id" value from table "one" into the field "two.one_id". How is
best to go about that? I imagine this would best be done with a
trigger rather than a rule, but I don't know enough on how to go
about that. Can someone help point me in the right direction? (I
did try it with rules as listed below, but the serial value
increments, so the approach doesn't work on a single row.)


--
My tables:

CREATE TABLE "one" (
"one_id" BIGSERIAL,
"text" text NOT NULL,
CONSTRAINT "iu_text" UNIQUE (text)
)

CREATE TABLE "two" (
"two_id" BIGSERIAL,
"text" text NOT NULL,
"one_id" bigint,
CONSTRAINT "$1" FOREIGN KEY (text) REFERENCES one(text) ON  UPDATE 
SET NULL

)

--

My failed rule approaches:

CREATE RULE two_insert AS ON INSERT TO two DO UPDATE two SET one_id =
(SELECT one.one_id FROM one WHERE (new.text = one.text)) WHERE
(new.two_id = two.two_id);


The following does work, but it updates all rows with the same text.
I'd rather be more efficient, and only work with the current row:


CREATE RULE two_insert AS ON INSERT TO two DO UPDATE two SET one_id =
(SELECT one.one_id FROM one WHERE (new.text = one.text)) WHERE
(new.text = two.text);




The reason the rule failed, by the way, is that the rule gets expanded
(like a macro). Whereas in a function "new.two_id" actually uses the
value of two_id that is about to be inserted into the table, in a rule
situation "new.two_id" gets expanded into its definition, namely
nextval(sequence name). So the comparison is against a two_id that never
actually exists in the table (the rule draws a fresh value from the
sequence, after the two_id created by the insert), which will always
result in a non-match.


I had this same issue with a pair of tables, where I wanted to
update a customer information table with a live customer number that was
created in a different table. As in your case, a trigger solved my
situation.


Sven



Re: [GENERAL] find last day of month

2005-12-09 Thread Sven Willenberger
On Fri, 2005-12-09 at 15:27 -0500, Chris Browne wrote:
> "Andrus Moor" <[EMAIL PROTECTED]> writes:
> 
> > I have a table containing a month column in format mm.yyyy.
> >
> > create table months ( tmkuu c(7));
> > insert into months values ('01.2005');
> > insert into months values ('02.2005');
> >
> > How to create select statement which converts this column to date type 
> > containing last day of month like
> >
> > '2005-01-31'
> > '2005-02-28'
> 
> The usual trick is to split it into year and month, add 1 to the
> month, if that's > 12, jump to 1, and add a year.  
> 
> Based on those...  Construct the first day of the NEXT month.
> 
> Thus...  01.2005
>  --> month = 2
>  --> year = 2005
> 
> Construct first day of the next month:
>2005-02-01
> 
> Now, subtract a day from that, and you'll get the final day of the
> present month.
> 
> That approach will nicely cope with leap years and such.

or simply:

test=>select ((split_part('12.2005','.',2) || '-' ||
split_part('12.2005','.',1) || '-01')::date + interval '1 mon' -
interval '1 day')::date;
    date
------------
 2005-12-31

test=>select ((split_part('02.2008','.',2) || '-' ||
split_part('02.2008','.',1) || '-01')::date + interval '1 mon' -
interval '1 day')::date;
    date
------------
 2008-02-29
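Wrapped into a reusable function (a sketch; the function name is made up):

create or replace function last_day(text) returns date as '
    select ((split_part($1, ''.'', 2) || ''-'' ||
             split_part($1, ''.'', 1) || ''-01'')::date
            + interval ''1 mon'' - interval ''1 day'')::date;
' language sql immutable;

-- e.g. select last_day(tmkuu) from months;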

Sven




Re: [GENERAL] About not to see insertion result "INSERT 0 1"

2005-11-21 Thread Sven Willenberger



Emi Lu presumably uttered the following on 11/21/05 15:40:

Greetings,

I tried to run insert command from a .sql file. For example, in a.sql 
file there are 100,000 lines like

insert into t1 values(... ...);
insert into t1 values(... ...);
insert into t1 values(... ...);
... ...

I do not want to see the 100,000 times "INSERT 0 1"  displayed by 
postgreSQL. Is there a way to hide the output "INSERT 0 1" generated by 
postgresql ?


Thanks a lot!
Emi



Depending on how your application works, you would probably see a marked
speed increase by using the COPY command rather than 100k INSERT lines.
The file could just be your VALUES (...) sections, which could then be
sucked in via COPY.
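For example (a sketch; the file path and layout are assumed), the file
would hold one tab-separated row per line instead of INSERT statements:

COPY t1 FROM '/path/to/a.dat';

-- or client-side, from within psql:
\copy t1 from 'a.dat'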


Sven



[GENERAL] Function to insert entire row%ROWTYPE into other table

2005-11-02 Thread Sven Willenberger
Postgresql 8.0.4 using plpgsql

The basic function is set up as:
CREATE FUNCTION add_data(t_row mytable) RETURNS VOID AS $func$
DECLARE
newtable text;
thesql text;
BEGIN
SELECT INTO newtable thename FROM lookup WHERE lookup.id = t_row.id;
thesql := 'INSERT INTO ' || newtable || ' VALUES (' || t_row.* || ')';
EXECUTE thesql;
RETURN;
END;
$func$ LANGUAGE plpgsql VOLATILE;

SELECT add_data(t.*) FROM mytable t where 
ERROR:  column "*" not found in data type mytable

Now I have tried to drop the * but then there is no concatenation
function to join text to a table%ROWTYPE. So my question is how can I
make this dynamic insert statement without listing out every
t_row.colname?

Sven




Re: [GENERAL] PostgreSQL on 64-bit operating systems

2005-11-01 Thread Sven Willenberger
On Tue, 2005-11-01 at 22:00 +0200, Craig wrote:
> Hi
>  
> I am going to be hosting a PostgreSQL database on a new server. We
> will be purchasing a server with the AMD Athlon 64 3200+ processor. 
> We are now posed with a choice of "FreeBSD 5.4" or "FreeBSD 5.4
> x86_64Bit". My question is: Will PostgreSQL 8.04 run perfectly on a
> 64-bit OS, or should I run the 32 bit OS for now?
>  
> Any help will be greatly appreciated
>  

Runs perfectly fine on 5.4-Release AMD64 here, built straight from
ports.

Sven




Re: [GENERAL] NEW in Rule makes another nextval call?

2005-10-20 Thread Sven Willenberger
On Thu, 2005-10-20 at 15:01 +0100, Richard Huxton wrote:
> Sven Willenberger wrote:
> >  Is this intended behavior? or is the NEW
> > acting as a macro that is replace by "nextval()" ?
> 
> Well, it's understood behaviour even if not quite "intended".
> 
> You are quite right, rules basically act like macros with all the 
> limitations they have. What is actually happening behind the scenes is 
> that the query is being rewritten to alter the query-plan tree. You 
> should be able to get your rule to work by referring to 
> currval() rather than NEW.custid.
> 
> However, in this particular case I think you want an after insert 
> trigger on customer rather than a rule.
> 

As an AFTER INSERT trigger, I can safely assume here that NEW.custid will
now properly use the actual value of the custid rather than nextval()? I
have dropped the rule and created the AFTER INSERT trigger so I guess I
will find out shortly enough :-)
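For the archives, a rough sketch of what the trigger ends up looking
like (the function name is invented):

create or replace function set_contact_custid() returns trigger as '
begin
    update contacts set custid = NEW.custid
     where contacts.custname = NEW.custname;
    return NEW;  -- return value is ignored for AFTER triggers
end;
' language plpgsql;

create trigger newcustomer after insert on customer
    for each row execute procedure set_contact_custid();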

Thanks,

Sven




[GENERAL] NEW in Rule makes another nextval call?

2005-10-20 Thread Sven Willenberger
On a table ("customer") I have a rule set up that is designed to update
a "contacts" table  with a customer id once the customer is added to the
customer table. (Yes, this does seem backwards but it has to do with the
way this system of web-based signups gets translated into a customer
record).

CREATE TABLE customer (
custid serial primary key,
custname text not null,
startdate timestamp,
agent int);

CREATE RULE newcustomer AS
ON INSERT TO customer DO
UPDATE contacts SET custid = NEW.custid
WHERE contact.custname = NEW.custname;

Now when a new record is inserted into customer it gets a custid from
the nextval() call; let's say it gets the value 296. What
I notice is that in the contacts table, the customer id ends up being
297; it's as if the rule's SET command is using the nextval() call
instead of the value retrieved by the actual insert command when
updating the contacts table. Is this intended behavior? Or is the NEW
acting as a macro that is replaced by "nextval()"?

Sven




Re: [GENERAL] Restoring Database created on windows on FreeBSD

2005-10-19 Thread Sven Willenberger
On Wed, 2005-10-19 at 15:55 +0200, [EMAIL PROTECTED] wrote:
> Hi Sven
> 
> Thanks for the help. This looks like the kind of info I needed.
> One question:
> You suggest that I use : pkg_add -r postgresql80-server. If this requires
> postgresql80-client, will it automatically download it and install? The
> docs I have read suggest this, but maybe you can confirm?
> 
> Thanks
> Craig
> 



More than likely it is the other way around. If you pkg_add -r
postgresql80-client it will retrieve and install the server package as
well (it looks for any dependencies a package may have). 

Sven




Re: [GENERAL] Restoring Database created on windows on FreeBSD

2005-10-19 Thread Sven Willenberger
On Wed, 2005-10-19 at 08:23 +0200, [EMAIL PROTECTED] wrote:
> Hi
> 
> I am very new to FreeBSD (Windows background), but am busy trying to
> implement on FreeBSD a PostgreSQL database that I have running in the
> Windows environment. Naturally, most of my problems so far have been
> getting to grips with FreeBSD.
> 
> Anyway, I am having a problem at the moment. Having been previously
> spoilt with the ease of installation on Windows, I am sure that I have
> missed the boat on something.

Freebsd ports is an amazingly easy way to install software.


> I downloaded the latest port for PostgreSQL-8.0.4 from postgresql.org. 

the postgresql website does not have FreeBSD ports ... just source code
from what I can see.

> I
> followed the instructions in the documentation and all seemed to work. I
> have successfully created "test" databases and have happily managed to
> connect to the new PostgreSQL server using psql and pgAdmin III(on
> windows). I have now backed up my DB from windows and have created a new
> database on the BSD server. When restoring this DB I got a lot of errors
> related to some of the system installed functions. I then realised that
> the PostgreSQL installation on FreeBSD did not install any of the contrib
> modules. I went to the /contrib folder in the port I used to to install
> postgreSQL (/usr/ports/database/postgresql-8.0.4/contrib). I excluded
> /cube and /mysql from the MakeFile (cube seemed to be failing and why
> would I want mysql stuff on PostgreSQL?). I then did a gmake install. All
> contribs seemed to install fine. I reran the restore on a clean DB and
> most of the errors went away, but I am still getting errors all related to
> system functions looking for '$libdir/admin'.
> 
> Does anyone know what I could have missed here?

It looks like you built from source rather than using the FreeBSD ports
system. Assuming you have a fresh FreeBSD install one would do the
following to achieve your desired end goal.

1) cd /usr/ports/databases/postgresql80-server
2) make && make install
3) pay attention to the notes at the end about tuning your kernel for
shared memory usage (if you plan on having lots of connections)
4) cd /usr/ports/databases/postgresql-contrib
5) make && make install
6) /usr/local/etc/rc.d/010.pgsql.sh initdb
7) /usr/local/etc/rc.d/010.pgsql.sh start

Now if your ports tree is out of date (or even if not) you may want to
do the following instead:

1) pkg_add -r postgresql80-server
2) pkg_add -r postgresql-contrib

Then continue with steps 6 and 7 above. You can verify the package
installation via pkg_info. Also, you may need to build (via ports) or
add (via pkg_add) postgresql80-client.

HTH

Sven




Re: [GENERAL] strip zeros from fractional part

2005-10-04 Thread Sven Willenberger
On Mon, 2005-10-03 at 16:36 -0300, Giovanni M. wrote:
> Yes! That did it, thanks for the help
> 
> On 10/3/05, Tony Wasson <[EMAIL PROTECTED]> wrote:
> > On 10/3/05, Giovanni M. <[EMAIL PROTECTED]> wrote:
> > > Round and trunc dont provide the functionality I need.
> > >
> > > Say for example I have two values in a column of type numeric as follows:
> > > 23.455
> > > 12.300
> > >
> > > What I need to happen is stripping the "useless" zeros in the
> > > fractional part of numbers so 12.300 would become 12.3 and 23.455
> > > would stay the same
> > >
> > > Round and trunc can´t do this without me first checking if the number
> > > can indeed be "rounded" to a number without losing its precise value
> >
> > As a workaround, you could try using the trim function. You'd need to
> > cast your numbers to text strings, but it looks like it will drop
> > useless 0's for you.
> >
> > test=# SELECT trim(trailing 0 FROM '12.300'::TEXT)::NUMERIC;
> >  rtrim
> > ---
> >   12.3
> > (1 row)
> >
> 
> 
> --

I also found simply casting the column as a float achieves the same
thing:

test=> select 23.510::numeric;
 numeric
---------
  23.510

test=> select 23.510::numeric::float;
 float8
--------
  23.51


Sven




Re: [GENERAL] PostgreSQL 8.0.3 limiting max_connections to 64 ?

2005-08-17 Thread Sven Willenberger
On Wed, 2005-08-17 at 13:05 -0300, eu wrote:
> Hello, i need help...
> 
> i have a postgresql 8.0.3 database running on the backend of a postfix 
> setup ( i didn't trust Mysql for the job ) on Linux kernel 2.6.8.1, 
> serving email to a dozen different virtual domains ( including this one 
> i'm using right now )...
> 
> however, this setup takes a whole lot of simultaneous connections to the 
> database
> ( postfix, amavis-new, clamav + spamassassin, apache+squirrelmail, 
> courier-authlib and courier-imap ), all those services were configured 
> to use Unix Domain Sockets, instead of TCP ( netstat -anvp showed too 
> many short-lived connections were kept on CLOSE_WAIT for too long while 
> connecting via TCP ), and, despite i have made max_connections on the 
> postgresql.conf as high as 500, after ( around ) 64 simultaneous 
> connections i start having a lot of "sorry too many clients already" 
> errors on my postgresql logfile...
> then, my users go nuts complaining about how slow the server is and/or 
> having to retype their passwords too many times ( of course, since 
> authlib can't pick a connection to authenticate against the database ).
> 
> max_connections, shared_buffers, shmmax were tweaked, but the server 
> does not seems to respect the 500 max_connections...
> i *know* i'm missing something obvious, but, what could be wrong ?... 
> i'm lost... any help would be most appreciated... please.
> 

Can you post the relevant portions of your postgresql.conf file? Do you
see any error messages when starting the database up (perhaps about
shared memory or such)? We also run a postfix mailserver (with maildrop,
courier-imap/vauthchkpw, etc) against a postgresql database with
max_connections set to 512 (FreeBSD 5.x machine). On the delivery end of
things we pool the connections from the postfix server using proxymap(8)
(which helped ease the connection load).
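As a quick sanity check from within the database, you can confirm what
the server actually accepted and count the live backends:

SHOW max_connections;
SELECT count(*) FROM pg_stat_activity;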

Sven




Re: [GENERAL] Serial Unique question

2005-08-15 Thread Sven Willenberger
On Mon, 2005-08-15 at 11:48 -0400, Madison Kelly wrote:
> Douglas McNaught wrote:
> > Madison Kelly <[EMAIL PROTECTED]> writes:
> > 
> > 
> >>   I want to use a 'serial unique' column in a table but there is
> >>   likely to be many, many inserts and deletes from this column. I was
> >>   wondering, what happens when the serial value reaches
> >>   2,147,483,647'? Does it roll back over to '1' and keep going or
> >>   will the database start erroring out? This isn't likely to be a
> >>   problem any time soon, but over the course of a year or more it
> >>   might be.
> > 
> > 
> > Use a "bigserial" instead?
> > 
> > -Doug
> > 
> 
> Same issue, delaying a potential problem, not addressing it. :)
> 
> Madison
> 
\h CREATE SEQUENCE
Command: CREATE SEQUENCE
Description: define a new sequence generator
Syntax:
CREATE [ TEMPORARY | TEMP ] SEQUENCE name [ INCREMENT [ BY ] increment ]
[ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO
MAXVALUE ]
[ START [ WITH ] start ] [ CACHE cache ] [ [ NO ] CYCLE ]

Based on the last argument, a sequence can be created to either cycle or
not; it would appear as though the default, when created automagically
by invoking the [big]serial "data type", is to not cycle. My guess in
that case would be that an error would be raised once you reach
2^31 in your serial field.
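If wraparound is acceptable for your data (i.e. old rows are long gone
before the sequence wraps), you could attach a cycling sequence by hand,
something like:

create sequence mytable_id_seq maxvalue 2147483647 cycle;
create table mytable (
    id int not null default nextval('mytable_id_seq'),
    payload text
);
-- note: a unique constraint on id would still reject a wrapped value
-- that happens to survive in the table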

Sven




Re: [GENERAL] Index not being used unless enable_seqscan=false

2005-08-10 Thread Sven Willenberger
On Wed, 2005-08-10 at 13:31 -0700, Shane wrote:
> On Wed, Aug 10, 2005 at 04:24:51PM -0400, Sven Willenberger wrote:
> > On Wed, 2005-08-10 at 12:58 -0700, Shane wrote:
> > > On Wed, Aug 10, 2005 at 03:31:27PM -0400, Sven Willenberger wrote:
> > > > Right off the bat (if I am interpreting the results of your explain
> > > > analyze correctly) it looks like the planner is basing its decision to
> > > > seqscan as it thinks that it needs to filter over 1 million rows (versus
> > > > the 29,000 rows that actually are pulled). Perhaps increasing stats on
> > > > msgtime and then analyzing the table may help. Depending on your
> > > > hardware, decreasing random_page_cost in your postgresql.conf just a
> > > > touch may help too.
> >
> > Try increasing stats to 100 on just the msgtime column, not the default
> > (changing the default will only have an effect on newly created columns
> > -- you may want to change the default back to 10):
> 
> Hi,
> 
> I brought the statistics on msgtime up to 100, vacuum
> analyzed and brought random_page_cost down to 2. 
> Unfortunately, explain analyze still wants to seqscan and
> estimates 1m returned rows.
> 
> Is there a way to simply force an index usage for this
> particular query?
> 
> S
> 
What version of PostgreSQL are you running? Also, what happens if you
explain analyze choosing where msgtime > cast(now() - interval '6
months' as timestamp(0) without time zone); (instead of less than)?
Depending on how you are connecting to run this query (script, webpage,
psql) you could always set enable_seqscan=off, run the select, and set
enable_seqscan=on again, scriptomagically.
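i.e. wrap the query like so (using the query from your first message):

SET enable_seqscan = off;
SELECT msgid FROM seen
 WHERE msgtime < cast(now() - interval '6 months' as timestamp(0) without time zone);
SET enable_seqscan = on;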

Sven




Re: [GENERAL] Index not being used unless enable_seqscan=false

2005-08-10 Thread Sven Willenberger
On Wed, 2005-08-10 at 12:58 -0700, Shane wrote:
> On Wed, Aug 10, 2005 at 03:31:27PM -0400, Sven Willenberger wrote:
> > Right off the bat (if I am interpreting the results of your explain
> > analyze correctly) it looks like the planner is basing its decision to
> > seqscan as it thinks that it needs to filter over 1 million rows (versus
> > the 29,000 rows that actually are pulled). Perhaps increasing stats on
> > msgtime and then analyzing the table may help. Depending on your
> > hardware, decreasing random_page_cost in your postgresql.conf just a
> > touch may help too.
> 
> Thanks for the pointers.
> 
> I tried increasing the stats from the default of 10 to 25
> with no change.  How high would you bring it?  Also, I've
> never played with the various cost variables.  The database
> sits on a raid5 partition composed of 4 15k u320 SCSI
> drives, dual xeon 2.8(ht enabled) 2gb ram.  I suppose this
> might actually increase the cost of fetching a random disk
> page as it may well be on another physical disk and
> wouldn't be in the readahead cache.  Any idea as to what it
> should be on this sort of system?
> 

Try increasing stats to 100 on just the msgtime column, not the default
(changing the default will only have an effect on newly created columns
-- you may want to change the default back to 10):

ALTER TABLE seen ALTER msgtime SET STATISTICS 100;

After running that command, analyze the table again and see if that
helps.

I am assuming the culprit is this particular column, as your index and
search criteria are based on that one. The default random_page_cost I
believe is 4.0; on your system you could probably easily drop it to 3,
possibly lower, and see how that performs.
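After the ALTER TABLE above, the remaining steps would be (a sketch;
try random_page_cost per-session first and move it into postgresql.conf
once you are happy with it):

ANALYZE seen;
SET random_page_cost = 3.0;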

Sven




Re: [GENERAL] Index not being used unless enable_seqscan=false

2005-08-10 Thread Sven Willenberger
On Wed, 2005-08-10 at 12:01 -0700, Shane wrote:
> Hello all,
> 
> I am working with a simple table and query but cannot seem
> to get it to use the index I have created.  However, if I
> set enable_seqscan=false, the index is used and the query
> is much faster.  I have tried a vacuum analyze but to no
> avail.
> 
> Table layout:
>   Table "public.seen"
>   Column  |              Type              | Modifiers
> ----------+--------------------------------+-----------
>  group_id | integer                        | not null
>  msgid    | text                           | not null
>  msgtime  | timestamp(0) without time zone | not null
> Indexes:
> "seen_group_id_key" unique, btree (group_id, msgid)
> "seen_msgtime" btree (msgtime)
> Foreign-key constraints:
> "$1" FOREIGN KEY (group_id) REFERENCES groups(id) ON UPDATE CASCADE ON 
> DELETE CASCADE
> 
> explain analyze with enable_seqscan=true
> explain analyze select msgid from seen where msgtime < cast(now() - interval 
> '6 months' as timestamp(0) without time zone);
>                                   QUERY PLAN
> ------------------------------------------------------------------------------
>  Seq Scan on seen  (cost=0.00..107879.45 rows=1081044 width=46) (actual 
> time=7597.387..27000.777 rows=28907 loops=1)
>Filter: (msgtime < ((now() - '6 mons'::interval))::timestamp(0) without 
> time zone)
>  Total runtime: 27096.337 ms
> (3 rows)
> 
> Same query with enable_seqscan=false
>                                   QUERY PLAN
> ------------------------------------------------------------------------------
>  Index Scan using seen_msgtime on seen  (cost=0.00..3818325.78 rows=1081044 
> width=46) (actual time=0.140..156.222 rows=28907 loops=1)
>Index Cond: (msgtime < ((now() - '6 mons'::interval))::timestamp(0) 
> without time zone)
>  Total runtime: 248.737 ms
> (3 rows)
> 
> Any ideas on how I can fix this.  I get this problem now
> and again with other databases but a vacuum usually fixes
> it.

Right off the bat (if I am interpreting the results of your explain
analyze correctly) it looks like the planner is basing its decision to
seqscan as it thinks that it needs to filter over 1 million rows (versus
the 29,000 rows that actually are pulled). Perhaps increasing stats on
msgtime and then analyzing the table may help. Depending on your
hardware, decreasing random_page_cost in your postgresql.conf just a
touch may help too.

Sven




Re: [GENERAL] weirdness with the a sql update

2005-08-04 Thread Sven Willenberger
On Thu, 2005-08-04 at 09:19 -0500, Tony Caduto wrote:
> Hi,
> 
> I just noticed this: if I do an update like this:
> 
> update new_requests set name = 'tony' where request_id = 2
> 
> If I do a select * from new_requests, the record I just updated is now
> at the bottom; before the update it was at the top.
> 
> Why is Postgresql changing the ordering of the results after a simple 
> update?
> It almost looks like the record is being dropped and then readded to the 
> end.
> 
> Thanks,
> 
If I understand MVCC correctly, a "new" tuple is actually created and
the "old" tuple is marked with an effective "end" transaction id (or
marked "dead" for any transactions with an id greater than the
transaction id that updated that particular tuple). Your subsequent
select then sees the "new" tuple and, in natural order, it would appear
at the bottom. Vacuuming is the process by which these "old" tuples are
examined and, if there are no transactions open with an id less than the
"end" transaction id associated with that dead tuple, it is removed (or
the space is marked as available for a new tuple to be written).
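The practical upshot: a SELECT without an ORDER BY returns rows in
whatever physical order they happen to lie in, so if the ordering
matters, ask for it explicitly:

select * from new_requests order by request_id;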

Sven




Re: [GENERAL] COnsidering a move away from Postgres

2005-06-30 Thread Sven Willenberger
On Thu, 2005-06-30 at 10:18 -0500, Jason Tesser wrote:
> HI
> 
> On Thursday 30 June 2005 9:20 am, Tom Lane wrote:
> > Jason Tesser <[EMAIL PROTECTED]> writes:
> > > 1. Our dev plan involves alot of stored procedures to be used and we have
> > > found the way this is done in PG to be painful. (ie.  To return multiple
> > > record from different tables you have to define a type.
> >
> > FWIW, this won't be essential any more in 8.1.  See the examples in the
> > development documentation:
> > http://developer.postgresql.org/docs/postgres/xfunc-sql.html#XFUNC-OUTPUT-P
> >ARAMETERS
> > http://developer.postgresql.org/docs/postgres/plpgsql-declarations.html#PLP
> >GSQL-DECLARATION-ALIASES
> > http://developer.postgresql.org/docs/postgres/plpgsql-control-structures.ht
> >ml#PLPGSQL-STATEMENTS-RETURNING
> 
> I might be missing it, but how does this help me?  What I would like is to be 
> able to return multiple records from a select statement that returns multiple 
> columns from different tables without having to create a type.  This is why 
> it is painful for us.  The management of types is bad because, as far as I 
> know, there is no alter type, and the dependencies become a nightmare if you 
> ever need to change something. 
> 
> 
If I understand the new features correctly, rather than:
CREATE FUNCTION foo(i int) RETURNS custom_type AS
where custom_type is (int, text, text), you will be able to do the
following instead:
CREATE FUNCTION foo(IN i int, OUT x int, OUT y text, OUT z text) AS ...

As far as hard-coding the OUT datatypes goes, if I understand the docs
correctly you can even:
CREATE FUNCTION foo(IN i int, OUT x anyelement, OUT y anyelement, OUT z
anyelement) AS ...

No custom type needed -- you specify the output format in the argument
list itself.
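
A minimal sketch of what I mean (untested, going by the 8.1 development
docs; the names are made up):

CREATE FUNCTION foo(IN i int, OUT x int, OUT y text) AS $$
    SELECT $1 + 1, 'some text'::text;
$$ LANGUAGE SQL;

SELECT * FROM foo(41);  -- one row: x = 42, y = 'some text'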

Sven


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] PostgreSQL's vacuumdb fails to allocate memory for

2005-06-29 Thread Sven Willenberger



Tom Lane presumably uttered the following on 06/29/05 19:12:

> Sven Willenberger <[EMAIL PROTECTED]> writes:
>> I have found the answer/problem. On a hunch I increased maxdsiz to 1.5G
>> in the loader.conf file and rebooted. I ran vacuumdb and watched top as
>> the process proceeded. What I saw was SIZE sitting at 603MB (which was
>> 512MB plus another 91MB, corresponding nicely to the value of RES
>> for the process). A bit into the process I saw SIZE jump to 1115 -- i.e.
>> another 512 MB of RAM was requested and this time allocated. At one
>> point SIZE dropped back to 603 and then back up to 1115. I suspect the
>> same type of issue was occurring in regular vacuum from the psql client
>> connecting to the backend, for some reason not as frequently. I am
>> gathering that maintenance work mem is either not being recognized as
>> having already been allocated and another malloc is made, or the process
>> is thinking the memory was released and tries to grab a chunk of memory
>> again.
>
> Hmm.  It's probably a fragmentation issue.  VACUUM will allocate a
> maintenance work mem-sized chunk during command startup, but that's
> likely not all that gets allocated, and if any stuff allocated after
> it is not freed at the same time, the process size won't go back down.
> Which wouldn't be a killer in itself, but unless the next iteration
> is able to fit that array in the same space, you'd see the above
> behavior.

So maintenance work mem is not a measure of the max that can be
allocated by a maintenance procedure but rather an increment of memory
that is requested by a maintenance process (which currently are vacuum
and index creation, no?), if my reading of the above is correct.

> BTW, do you have any evidence that it's actually useful to set
> maintenance work mem that high for VACUUM?  A quick and dirty solution
> would be to bound the dead-tuples array size at something more sane...

I was under the assumption that on systems with RAM to spare, it was
beneficial to set maintenance work mem high to make those processes more
efficient. Again, my thinking was that the value you set for that
variable determined a *max* allocation by any given maintenance process,
not a memory allocation request size. If, as my tests would indicate,
the process can request and receive more memory than specified in
maintenance work mem, then to play it safe I imagine I could drop that
value to 256MB or so.


Sven

---(end of broadcast)---
TIP 6: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] PostgreSQL's vacuumdb fails to allocate memory for

2005-06-29 Thread Sven Willenberger
On Wed, 2005-06-29 at 16:58 -0400, Sven Willenberger wrote:
> On Wed, 2005-06-29 at 16:40 -0400, Charles Swiger wrote:
> > On Jun 29, 2005, at 4:12 PM, Sven Willenberger wrote:
> > [ ... ]
> > > Something I have noticed,
> > > when the memory error occurs during the psql session (after a failed
> > > vacuumdb attempt) the memory stays at 600+MB in top (under size) until
> > > the psql session is closed -- that may just be the way top reports it
> > > though.
> > 
> > Double-check your system limits via "ulimit -a" or "ulimit -aH".  By  
> > default, FreeBSD will probably restrict the maximum data size of the  
> > process to 512MB, which may be what you are running into.  You can  
> > rebuild the kernel to permit a larger data size, or else tweak /boot/ 
> > loader.conf:
> > 
> >  echo 'kern.maxdsiz="1024M"' >> /boot/loader.conf
> > 
> 
> :>ulimit -a
> cpu time   (seconds, -t)  unlimited
> file size   (512-blocks, -f)  unlimited
> data seg size   (kbytes, -d)  1048576
> stack size  (kbytes, -s)  65536
> core file size  (512-blocks, -c)  unlimited
> max memory size (kbytes, -m)  unlimited
> locked memory   (kbytes, -l)  unlimited
> max user processes  (-u)  5547
> open files  (-n)  11095
> virtual mem size(kbytes, -v)  unlimited
> sbsize   (bytes, -b)  unlimited
> :> cat /boot/loader.conf
> kern.maxdsiz="1073741824"
> kern.dfldsiz="891289600"
> 
> and if I don't run vacuumdb at all, but rather connect to the backend
> via psql and run vacuum, it works ok with full memory allocation. Still
> testing RAM to see if the issue is physical.
> 
> Sven
> 
> 
I have found the answer/problem. On a hunch I increased maxdsiz to 1.5G
in the loader.conf file and rebooted. I ran vacuumdb and watched top as
the process proceeded. What I saw was SIZE sitting at 603MB (which was
512MB plus another 91MB, corresponding nicely to the value of RES
for the process). A bit into the process I saw SIZE jump to 1115 -- i.e.
another 512 MB of RAM was requested and this time allocated. At one
point SIZE dropped back to 603 and then back up to 1115. I suspect the
same type of issue was occurring in regular vacuum from the psql client
connecting to the backend, for some reason not as frequently. I am
gathering that maintenance work mem is either not being recognized as
having already been allocated and another malloc is made, or the process
is thinking the memory was released and tries to grab a chunk of memory
again. This would correspond to the situation where I saw SIZE stuck at
603MB after a failed memory allocation (when maxdsiz was only 1G). Now I
am not sure if I will run into the situation where yet another 512MB
request would be made (when 1115 already appears in SIZE) but if so,
then the same problem will arise. I will keep an eye on it ...

Sven


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] PostgreSQL's vacuumdb fails to allocate memory for

2005-06-29 Thread Sven Willenberger
On Wed, 2005-06-29 at 16:40 -0400, Charles Swiger wrote:
> On Jun 29, 2005, at 4:12 PM, Sven Willenberger wrote:
> [ ... ]
> > Something I have noticed,
> > when the memory error occurs during the psql session (after a failed
> > vacuumdb attempt) the memory stays at 600+MB in top (under size) until
> > the psql session is closed -- that may just be the way top reports it
> > though.
> 
> Double-check your system limits via "ulimit -a" or "ulimit -aH".  By  
> default, FreeBSD will probably restrict the maximum data size of the  
> process to 512MB, which may be what you are running into.  You can  
> rebuild the kernel to permit a larger data size, or else tweak /boot/ 
> loader.conf:
> 
>  echo 'kern.maxdsiz="1024M"' >> /boot/loader.conf
> 

:>ulimit -a
cpu time   (seconds, -t)  unlimited
file size   (512-blocks, -f)  unlimited
data seg size   (kbytes, -d)  1048576
stack size  (kbytes, -s)  65536
core file size  (512-blocks, -c)  unlimited
max memory size (kbytes, -m)  unlimited
locked memory   (kbytes, -l)  unlimited
max user processes  (-u)  5547
open files  (-n)  11095
virtual mem size(kbytes, -v)  unlimited
sbsize   (bytes, -b)  unlimited
:> cat /boot/loader.conf
kern.maxdsiz="1073741824"
kern.dfldsiz="891289600"

and if I don't run vacuumdb at all, but rather connect to the backend
via psql and run vacuum, it works ok with full memory allocation. Still
testing RAM to see if the issue is physical.

Sven



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PostgreSQL's vacuumdb fails to allocate memory for

2005-06-29 Thread Sven Willenberger
On Wed, 2005-06-29 at 14:59 -0400, Vivek Khera wrote:
> On Jun 29, 2005, at 9:01 AM, Sven Willenberger wrote:
> 
> > Unix user root (and any psql superuser) the vacuum runs fine. It is  
> > when
> > the unix user is non-root (e.g. su -l pgsql -c "vacuumdb -a -z") that
> > this memory error occurs. All users use the "default" class for
> > login.conf purposes which has not been modified from its installed
> > settings. Any ideas on how to a) troubleshoot this or b) fix this  
> > (if it
> > is something obvious that I just cannot see).
> 
> This doesn't make sense: the actual command is executed by the  
> backend postgres server, so the uid of the client program doens't  
> make a bit of difference.
> 
> You need to see exactly who is generating that error.  It certainly  
> is not the Pg backend.
> 
The issue being tied to a certain "login" user has been ruled out by
further testing (the illusion that it was user-based arose from the
order in which I ran tests while trying to find out what was going
on) -- it does seem tied to invoking vacuumdb at this point. As a
point of clarification, when maxdsiz and dfldsiz are set, those values
are per "process", not per "user", correct? Something I have noticed:
when the memory error occurs during the psql session (after a failed
vacuumdb attempt) the memory stays at 600+MB in top (under size) until
the psql session is closed -- that may just be the way top reports it
though.

Sven


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PostgreSQL's vacuumdb fails to allocate memory for

2005-06-29 Thread Sven Willenberger
On Wed, 2005-06-29 at 11:21 -0400, Tom Lane wrote:
> Sven Willenberger <[EMAIL PROTECTED]> writes:
> > ERROR:  out of memory
> > DETAIL:  Failed on request of size 536870910.
> 
> That's a server-side failure ...
> 
> > Again, if I log in as myself and try to run
> > the command vacuumdb -a -z it fails; if I su to root and repeat it works
> > fine. I am trying to narrow this down to a PostgreSQL issue vs FreeBSD
> > issue.
> 
> That's fairly hard to believe, assuming that you are talking to the same
> server in both cases (I have seen trouble reports that turned out to be
> because the complainant was accidentally using two different servers...)
> The ulimit the backend is running under couldn't change depending on
> where the client is su'd to.
> 
> Is it possible that you've got per-user configuration settings that
> affect this, like a different maintenance_mem value for the root user?
> 
>   regards, tom lane
> 
I have done some more tests and tried to keep the results of vacuumdb
distinct from connecting to the backend (psql -U pgsql ...) and running
vacuum analyze. Apparently hopping back and forth between the two
methods interfered with my original interpretations of what appeared to
be happening. Anyway, here is what I see:
First test psql connection version:
psql then vacuum analyze => works fine whether the current unix user is
root or plain user. (ran this a couple times via new psql connections to
verify).
Then quit psql and move to command line
vacuumdb => whether run as su -l pgsql -c "vacuumdb -a -z" (or
specifying a dbname instead of all) or directly as a user, the out of
memory error occurs.
If I then connect via psql to the backend and try to run vacuum analyze
I receive an out of memory error.

This last connection to psql after a failed vacuumdb was what confounded
my earlier interpretation that the error was based on the unix user. top
shows:
  PID USERNAME  PRI NICE   SIZERES STATE  C   TIME   WCPUCPU
COMMAND
 6754 pgsql   40   602M 88688K sbwait 0   0:03  0.00%  0.00%
postgres
until I disconnect the psql session. I can then psql again and the same
error happens (out of memory) and top shows the same again. At this
point I am not sure if it is a memory issue of vacuumdb, vacuum itself,
or the FreeBSD memory management system. Again, if enough time passes
(or some other events) since I last try vacuumdb, then running vacuum
[verbose][analyze] via a psql connection works fine.

Sven


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] PostgreSQL's vacuumdb fails to allocate memory for

2005-06-29 Thread Sven Willenberger
On Wed, 2005-06-29 at 09:43 -0400, Douglas McNaught wrote:
> Sven Willenberger <[EMAIL PROTECTED]> writes:
> 
> > FreeBSD 5.4-Release
> > PostgreSQL 8.0.3
> >
> > I noticed that the nightly cron consisting of a vacuumdb was failing due
> > to "unable to allocate memory". I do have maintenance_mem set at 512MB,
> > and the /boot/loader.conf file sets the max datasize to 1GB (verified by
> > limit). The odd thing is that if I run the command (either vacuumdb from
> > the command line or vacuum verbose analyze from a psql session) as the
> > Unix user root (and any psql superuser) the vacuum runs fine. It is when
> > the unix user is non-root (e.g. su -l pgsql -c "vacuumdb -a -z") that
> > this memory error occurs. All users use the "default" class for
> > login.conf purposes which has not been modified from its installed
> > settings. Any ideas on how to a) troubleshoot this or b) fix this (if it
> > is something obvious that I just cannot see).
> 
> Is the out-of-memory condition occurring on the server or client side?
> Is there anything in the Postgres logs?

In this case they are one and the same machine ... i.e. whether invoked
from the command line as vacuumdb or from psql (connecting to
localhost) as "vacuum analyze;", the memory error occurs. The logfile
reveals:
ERROR:  out of memory
DETAIL:  Failed on request of size 536870910.


> You might put a 'ulimit -a' command in your cron script to make sure
> your memory limit settings are propagating correctly...

I created a cron job that consisted of just that command (ulimit -a) and
the output revealed nothing abnormal (i.e. max dataseg still 1G, etc.).
This occurs outside of cron also (it was just the failing cronjob that
brought it to my attention). Again, if I log in as myself and try to run
the command vacuumdb -a -z it fails; if I su to root and repeat, it works
fine. I am trying to narrow this down to a PostgreSQL issue vs a FreeBSD
issue.

Sven


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] PostgreSQL's vacuumdb fails to allocate memory for non-root users

2005-06-29 Thread Sven Willenberger
FreeBSD 5.4-Release
PostgreSQL 8.0.3

I noticed that the nightly cron consisting of a vacuumdb was failing due
to "unable to allocate memory". I do have maintenance_mem set at 512MB,
and the /boot/loader.conf file sets the max datasize to 1GB (verified by
limit). The odd thing is that if I run the command (either vacuumdb from
the command line or vacuum verbose analyze from a psql session) as the
Unix user root (and any psql superuser) the vacuum runs fine. It is when
the unix user is non-root (e.g. su -l pgsql -c "vacuumdb -a -z") that
this memory error occurs. All users use the "default" class for
login.conf purposes which has not been modified from its installed
settings. Any ideas on how to a) troubleshoot this or b) fix this (if it
is something obvious that I just cannot see).

Thanks,

Sven


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] PL/pgSQL function to validate UPC and EAN barcodes

2005-06-25 Thread Sven Willenberger



Miles Keaton presumably uttered the following on 06/25/05 01:44:

I've made a PL/pgSQL function to validate UPC and EAN barcodes.

It works correctly, but is a little ugly.

Wondering if any PL/pgSQL experts can offer some suggestions.  (I'm
new to PL/pgSQL.)

Main questions:
#1 - I wanted to add a "0" to the front of the barcode if it was only
12 characters long.  Seems that the incoming "barcode" variable was
immutable, so I had to create a new variable ("b") to hold the
possibly-new version.   Any more elegant way to do this?

#2 - The big ugly : having to cast every digit in the substring into
an integer so I could add them together.   Any shorter way to do this?

For details on how it's validated, see "COMPUTING THE CHECKSUM DIGIT", here:
http://www.barcodeisland.com/ean13.phtml

Thanks!



CREATE OR REPLACE FUNCTION valid_barcode(barcode text) RETURNS boolean
AS $function$
DECLARE
  b text;
  odd int;
  even int;
  s int;
BEGIN
  IF LENGTH(barcode) < 12 OR LENGTH(barcode) > 13 THEN
return false;
  END IF;
  -- normalize UPC and EAN to both be 13 digits
  IF LENGTH(barcode) = 12 THEN
b = '0' || barcode;
  ELSE
b = barcode;
  END IF;
  -- sum of odd digits times 3, plus sum of even digits
  even = CAST(SUBSTR(b, 1, 1) AS int) + CAST(SUBSTR(b, 3, 1) AS int) +
CAST(SUBSTR(b, 5, 1) AS int) + CAST(SUBSTR(b, 7, 1) AS int) +
CAST(SUBSTR(b, 9, 1) AS int) + CAST(SUBSTR(b, 11, 1) AS int);
  odd = CAST(SUBSTR(b, 2, 1) AS int) + CAST(SUBSTR(b, 4, 1) AS int) +
CAST(SUBSTR(b, 6, 1) AS int) + CAST(SUBSTR(b, 8, 1) AS int) +
CAST(SUBSTR(b, 10, 1) AS int) + CAST(SUBSTR(b, 12, 1) AS int);
  s = (3 * odd) + even;
  -- remainder to nearest 10 should be same as last check digit
  IF (CAST((CEIL(CAST(s AS float8) / 10) * 10) AS int) % s) =
CAST(SUBSTR(b, 13, 1) AS int) THEN
return true;
  ELSE
return false;
  END IF;
END;
$function$ LANGUAGE plpgsql;



Perhaps it would be faster to accept the UPC code as a bigint in your
function; then your initial tests would be by value (e.g. if barcode <
10,000,000,000,000 ... etc.), and you would only have to cast once, in
the case of a short barcode: b := ('0' || barcode)::bigint


Then you could create a loop, with a counter i starting at
1,000,000,000,000:

i := 1000000000000;
WHILE i >= 10 LOOP
   even := even + (b/i); b := b - ((b/i) * i); i := i/10;
   odd := odd + (b/i); b := b - ((b/i) * i); i := i/10;
END LOOP;
-- b is now just the trailing check digit

The boundary details may still be a bit off there, but the idea is to
play with the number as a number instead of doing a lot of casting. Just
a thought.


Sven

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Postfix/Maildrop and too many connections issues

2005-06-23 Thread Sven Willenberger
On Thu, 2005-06-23 at 03:39 +, Karl O. Pinc wrote:
> On 06/22/2005 08:23:43 AM, Sven Willenberger wrote:
> > On Wed, 2005-06-22 at 01:30 +0200, Roman Neuhauser wrote:
> > > # [EMAIL PROTECTED] / 2005-06-21 15:00:12 -0400:
> > > > We have a system set up whereby postfix and maildrop gather user
> > info
> > > > from a pg database (7.4.2 on FreeBSD 5.2.1) to do local mail
> > acceptance
> > > > and delivery. I have configured max connections at 512 but I find
> > that
> > > > this is not enough and I get "connection limit exceeded for
> > > > non-superusers" errors. I see upon ps ax that there are hundreds
> > of idle
> > > > connections (state I).
> > > >
> > > > Is there any way on the server end to close these connections (the
> > > > process is apparently for postfix and then maildrop to open a
> > > > connection, run one select statement, and supposedly close the
> > > > connection).
> > >
> > > You are barking up the wrong software. See proxymap(8). (It's
> > > mentioned in http://www.postfix.org/PGSQL_README.html)
> > >
> > Thanks for the tip ... edited my main.cf so that transport,aliases,
> > etc
> > use proxymap so this should help somewhat. Apparently in version 2.2.
> > it
> > is safe to use for UIDs/maildirs/paths etc so I may have to make the
> > upgrade leap.
> 
> I _believe_ 2.2 is safe to _configure_, but the various daemons
> will do lookups directly without going through proxymap.  So
> there's no "extra" help, you still need to limit the number
> of these running daemons.  (Although maybe they each cache
> the connection themselves, reducing the number of opens and
> closes?)
> 
> That's how I read the docs.
> 

I guess my understanding was that whichever portions in main.cf I
prefixed with proxy: would send their requests through proxymap. I just
realized, however, that once postfix looks up the user in transport it
hands the mail off to maildrop, which then has to do its own db lookup
for the home directory, etc., and I cannot see how I would configure this
to use proxymap. Well, at least I have consolidated some of the
lookups ...

Sven


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Postfix/Maildrop and too many connections issues

2005-06-22 Thread Sven Willenberger
On Wed, 2005-06-22 at 01:30 +0200, Roman Neuhauser wrote:
> # [EMAIL PROTECTED] / 2005-06-21 15:00:12 -0400:
> > We have a system set up whereby postfix and maildrop gather user info
> > from a pg database (7.4.2 on FreeBSD 5.2.1) to do local mail acceptance
> > and delivery. I have configured max connections at 512 but I find that
> > this is not enough and I get "connection limit exceeded for
> > non-superusers" errors. I see upon ps ax that there are hundreds of idle
> > connections (state I).
> > 
> > Is there any way on the server end to close these connections (the
> > process is apparently for postfix and then maildrop to open a
> > connection, run one select statement, and supposedly close the
> > connection).
> 
> You are barking up the wrong software. See proxymap(8). (It's
> mentioned in http://www.postfix.org/PGSQL_README.html)
> 
Thanks for the tip ... edited my main.cf so that transport, aliases, etc.
use proxymap, so this should help somewhat. Apparently in version 2.2 it
is safe to use for UIDs/maildirs/paths etc., so I may have to make the
upgrade leap. Now to see if maildrop (the local mailer) has a similar
feature.
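
For anyone following along, the change amounts to prefixing the map
definitions in main.cf with proxy: (the file names here are made up):

transport_maps = proxy:pgsql:/usr/local/etc/postfix/pgsql-transport.cf
virtual_alias_maps = proxy:pgsql:/usr/local/etc/postfix/pgsql-aliases.cf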

Sven


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Postfix/Maildrop and too many connections issues

2005-06-21 Thread Sven Willenberger
On Tue, 2005-06-21 at 13:49 -0700, Steve Crawford wrote:
> On Tuesday 21 June 2005 12:00 pm, Sven Willenberger wrote:
> > We have a system set up whereby postfix and maildrop gather user
> > info from a pg database (7.4.2 on FreeBSD 5.2.1) to do local mail
> > acceptance and delivery. I have configured max connections at 512
> > but I find that this is not enough and I get "connection limit
> > exceeded for non-superusers" errors. I see upon ps ax that there
> > are hundreds of idle connections (state I).
> 
> I have a vague recollection from the Postfix list that under some 
> circumstances Postfix does not explicitly close the connection to PG 
> so the back-end may stick around till the tcp connection times out.
> 
> In addition to ps, try "select * from pg_stat_activity()" and look at 
> connections via netstat (check from both ends) to hunt for clues.
> 
> I know that using old hardware and limiting Postfix to 200 connections 
> I've handled a constant rate of over 9 deliveries/second. Is your 
> mailserver load such that the need for 500+ simultaneous connections 
> passes the "sniff test"?
> 
> Cheers,
> Steve
> 
It happens only occasionally (during severe mail floods) that the
connections top 500+; usually they hover around 50-150 connections. The
problem is that when the connection limit is reached, postfix/maildrop
does not fail/defer the delivery gracefully. A cursory look through the
code would indicate that the connection is closed, so finding out
where it hangs open would involve some debugging/time. I was hoping that
there would be a way for the server to close the connection, or that
pgpool could handle and pool 1000 connections ...

Sven


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Postfix/Maildrop and too many connections issues

2005-06-21 Thread Sven Willenberger
We have a system set up whereby postfix and maildrop gather user info
from a pg database (7.4.2 on FreeBSD 5.2.1) to do local mail acceptance
and delivery. I have configured max connections at 512 but I find that
this is not enough and I get "connection limit exceeded for
non-superusers" errors. I see upon ps ax that there are hundreds of idle
connections (state I).

Is there any way on the server end to close these connections (the
process is apparently for postfix and then maildrop to open a
connection, run one select statement, and supposedly close the
connection). If not, would pgpool help in this case? Since all the
connections are basically 2 users (one for the mail delivery system and
another for the mail retrieval), are there any such "connection"
limitations in pgpool (i.e. if we get a spike of 700-1000 queries, each
with their own "connection")?

Thanks,

Sven


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] adding columns with defaults is not implemented

2005-06-02 Thread Sven Willenberger
On Thu, 2005-06-02 at 17:47 -0400, Marcelo wrote:
> Hi,
> Thanks for your reply, but I have some doubts.
> 
> Are yoy sugesting I create the column as an Integer then change it to
> Serial? in Pgsql 7 you cant change a column type.
> 
> If I create the column as an int then add a default value, how can I make
> this default value increment with each insert?
> 
> Thanks again for your help.
> Marcelo
> 
> 
> - Original Message - 
> From: "Scott Marlowe" <[EMAIL PROTECTED]>
> To: "Marcelo" <[EMAIL PROTECTED]>
> Cc: 
> Sent: Thursday, June 02, 2005 4:43 PM
> Subject: Re: [GENERAL] adding columns with defaults is not implemented
> 
> 
> > On Thu, 2005-06-02 at 15:29, Marcelo wrote:
> > > Hello,
> > > Using Postgres 7.4, I am trying to perform an "alter table
> > > temptable add column "myCol" serial"
> > >
> > > It gives the following msg
> > > ERROR:  adding columns with defaults is not implemented
> > >
> > > You cannot add a column that is serial in a table which already has
> > > data in postgres 7.
> > >
> > > Is there a way I can create a serial column on a table which already
> > > has data? Or is the only solution upgrading to postgres 8 ?
> >
> > You can add a default after you add the column with a separate alter
> > table statement...
> >
> > ---(end of broadcast)---
> > TIP 2: you can get off all lists at once with the unregister command
> > (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
> 
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
>   subscribe-nomail command to [EMAIL PROTECTED] so that your
>   message can get through to the mailing list cleanly

[Bottom posting to the top-posted reply] 

You would have to do this in steps. Assuming that "mytable" exists and
"mycol" is currently of type int with a current max value of 100:

create sequence mytable_mycol_seq start with 101;
alter table mytable alter mycol set default
nextval('mytable_mycol_seq'::text);

At this point any new inserts will start autoincrementing the mycol
field starting with value 101.
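
For example (othercol standing in for whatever other columns mytable
has):

insert into mytable (othercol) values ('foo');  -- mycol gets 101
select currval('mytable_mycol_seq');            -- 101, in this session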

Sven


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] CSV delim quoting differences PgCOPY, Excel etc...

2005-05-17 Thread Sven Willenberger
On Tue, 2005-05-17 at 15:58 -0400, Jerry Sievers wrote:
> Hello.
> 
> I confess knowing nothing about MS Excel, don't run Windows, never use
> it.  
> 
> Anyway, I am often enough having to load Pg databases using SQL COPY
> from CSV output written by Excel, that I've had to write a script to
> change the quoting behavior from Excel's putting double quotes around
> a field having embedded delim chars, to Pg's backslash quoting.
> 
> I've got a hunch that one or both of the aforementioned softwares can
> change their quoting behavior.  (IT just seems reasonable... famous
> last words.)
> 
> Is it true that neither of Pg SQL \copy and/or Excel can adapt as I
> wish?  What about psql \copy?  I fail to read in the docs for the Pg
> products any support for this.
> 
> If Excel can adapt, at least one experienced user I've asked, doesn't
> know about it.
> 
> Thanks.
> 
As of version 8.0 PostgreSQL's COPY (or \copy) command includes a CSV
flag (in addition to the normal DELIMITER AS, etc) that will
automagically deal with the double quotes.

COPY tablename [ ( column [, ...] ) ]
FROM { 'filename' | STDIN }
[ [ WITH ]
  [ BINARY ]
  [ OIDS ]
  [ DELIMITER [ AS ] 'delimiter' ]
  [ NULL [ AS ] 'null string' ]
  [ CSV [ QUOTE [ AS ] 'quote' ]
[ ESCAPE [ AS ] 'escape' ]
[ FORCE NOT NULL column [, ...] ]
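
So loading the Excel output could be as simple as (the file name is made
up; note that the server-side COPY FROM 'file' form requires superuser,
while \copy does not):

COPY tablename FROM '/tmp/excel_export.csv' WITH CSV;
-- or, from psql:
\copy tablename from '/tmp/excel_export.csv' with csv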

Sven


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Error when try installing pgbench ?

2005-05-15 Thread Sven Willenberger

Andre Nas presumably uttered the following on 05/15/05 05:48:
Hello ...
I'm using PostgreSQL 8.0.3 built from source under FreeBSD, already
installed, and the database is running well.

I want to install pgbench, but I can't install it, because an error occurs.
I tried "make all" in the directory ~/src/interfaces/libpq/
The messages are:
. . . . .
. . . . .
"../../../src/Makefile.global", line 546: Need an operator
"../../../src/Makefile.global", line 553: Missing dependency operator
"../../../src/Makefile.global", line 554: Missing dependency operator
"/home/postgres/postgresql-8.0.3/src/nls-global.mk", line 29: Need an 
operator

"/home/postgres/postgresql-8.0.3/src/nls-global.mk", line 31: Need an 
operator

"/home/postgres/postgresql-8.0.3/src/nls-global.mk", line 33: Need an 
operator

"/home/postgres/postgresql-8.0.3/src/nls-global.mk", line 38: Need an 
operator

"/home/postgres/postgresql-8.0.3/src/nls-global.mk", line 40: Need an 
operator

Error expanding embedded variable.
 

Can anybody help me?
Sorry, I'm a newbie.
Thanks

For FreeBSD there is a postgresql-contrib port that you should use
instead. It will build the contrib section of PostgreSQL (which is where
pgbench lives) and should build just fine.
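
If memory serves, something like (the port path may vary with the ports
tree):

cd /usr/ports/databases/postgresql-contrib
make install clean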

Sven
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


[GENERAL] plperl function fails to "fire" Slony trigger

2005-04-22 Thread Sven Willenberger
We have a replication set up between 2 servers using Slony; both are
running PostgreSQL 8.0.1. The issue is that when updates/inserts are
made to a replicated table, the replication does not occur; apparently
this is due to spi_exec somehow not allowing/causing the slony trigger
function to fire. The following 2 queries achieve the same result (one
in SQL and the other in plperl) however only the one in SQL ends up
being replicated:

create or replace function test_insert(int) returns text as $func$
my $query = "insert into inter_rootdecks(id,npanxx,carrier,inter) select
$_[0],npanxx,carrier,inter from inter_rootdecks where id = 20;";
$rv = spi_exec_query($query);
return "done";
$func$ LANGUAGE plperl;

create function test_insert(int) RETURNS text as $func$
INSERT INTO inter_rootdecks (id,npanxx,carrier,inter) select
$1,npanxx,carrier,inter from inter_rootdecks where id = 20;
SELECT 'done'::text;
$func$ LANGUAGE SQL;


Both are very basic insert statements, the difference being that the one
written in SQL ends up being replicated while the one written in plperl,
utilizing spi_exec_query, does not. In both cases the insert successfully
completes on the source server.

Is there anything else that needs to happen in a plperl function such
that applicable triggers fire?

Sven


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] plperl function fails to "fire" Slony trigger

2005-04-22 Thread Sven Willenberger
On Fri, 2005-04-22 at 14:43 -0400, Alvaro Herrera wrote:
> On Fri, Apr 22, 2005 at 02:24:57PM -0400, Jan Wieck wrote:
> > On 4/22/2005 2:08 PM, Tom Lane wrote:
> > 
> > >Sven Willenberger <[EMAIL PROTECTED]> writes:
> > >>We have a replication set up between 2 servers using Slony; both are
> > >>running PostgreSQL 8.0.1. The issue is that when updates/inserts are
> > >>made to a replicated table, the replication does not occur; apparently
> > >>this is due to spi_exec somehow not allowing/causing the slony trigger
> > >>function to fire.
> > >
> > >Yuck :-(.  The only idea that comes to mind is that 8.0 changed the
> > >timing of trigger firing --- the triggers are probably firing while your
> > >function still has control, whereas in earlier releases they'd only fire
> > >after it returns.  Could this be breaking some assumption Slony makes
> > >about the order of operations?
> > 
> > Slony triggers are AFTER ROW triggers. All they do is one SPI_execp() to 
> > insert the log row. The only way that could possibly be suppressed is by 
> > bypassing the executor and doing direct heap_ access.
> > 
> > So how does plperl manage that?
> 
> It doesn't; it only uses SPI.  I guess we would need the original
> function to see what is really happening.
> 

If by the "original function" you mean the plperl function, it was in
the thread parent; in essense it runs an "insert into tablename (cols)
select vals from " query. The Slony trigger is:
"_T2_logtrigger_10" AFTER INSERT OR DELETE OR UPDATE ON inter_rootdecks
FOR EACH ROW EXECUTE PROCEDURE "_T2".logtrigger('_T2', '10', 'kkkv')

If by "orginal function" you mean the logtrigger (slony) function it
would appear from a cursory glance at the source that logtrigger
accesses the heap (HeapTuple). It would appear the same manipulation is
utilized by plperl's SPI calls. How this all interacts with the
"executor" is, admittedly, beyond the scope of my comprehension at this
point ...

Sven


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] sequence advances on failed insert

2005-04-07 Thread Sven Willenberger

David Fetter presumably uttered the following on 04/07/05 20:16:
> On Thu, Apr 07, 2005 at 07:59:52PM -0400, Matthew Terenzio wrote:
>> I'm noticing that a sequence is advancing even if the insertion
>> fails.  Is this weird or expected?
>
> It's expected.  Sequences are guaranteed to generate unique IDs.
> These happen to be an increasing sequence of integers, but there is no
> attempt to make this a gap-free sequence, and your apps should not
> depend on the actual value of said ID.

I assume by "not depend on the actual value" that one should not assume
that the next value will be one increment higher than the current
highest value in the table; because it is guaranteed to be unique, I
would think it an excellent way to assign a customer id, for example,
which can then be referenced (foreign key, etc.) by other tables
after a new record is added. Unless there is some other reason one
should not use a sequence value as any type of identifier?
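
For example (hypothetical tables and sequence name; currval is
per-session, so this is safe even with concurrent inserts):

INSERT INTO customers (name) VALUES ('Acme');
INSERT INTO orders (customer_id, item)
    VALUES (currval('customers_custid_seq'), 'widget');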

Sven
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] pg_restore

2005-04-05 Thread Sven Willenberger
On Tue, 2005-04-05 at 12:10 +0530, Nageshwar Rao wrote:
> Hi,
> 
> I took a dump using the following command
> 
> Pg_dump  -f  bk.dmp
> 
> 
> When I tried to restore  a table from the dump it just hangs there
> 
> Pg_restore -f bk.dump -t table_name
> 
> 
> Also tried by giving username also
> 
> Any light on this.

pg_restore [OPTION]... [FILE]

in this case don't use -f (as that specifies the "output" file name);
try pg_restore -d db_name -t table_name -U username bk.dmp
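
Note also that pg_restore only reads archive-format dumps; a plain
pg_dump output file (the default) has to be replayed with psql instead.
A sketch of the round trip using a custom-format dump:

pg_dump -Fc -f bk.dmp db_name
pg_restore -d db_name -t table_name -U username bk.dmp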

Sven


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] plperl doesn't release memory

2005-03-24 Thread Sven Willenberger
On Thu, 2005-03-24 at 15:52 -0500, Tom Lane wrote:
> Sven Willenberger <[EMAIL PROTECTED]> writes:
> > Yes, on these systems, perl was built with -Dusemymalloc (and
> > concurrently -Ui_malloc) so there could very well be an issue with
> > malloc pools going awry. Doing a quick dig reveals that Linux tends to
> > build perl (by default) with the system malloc which may explain why
> > your script did not display this same behavior.
> 
> I can confirm that my copy is not using mymalloc:
> 
> $ perl -V | grep -i alloc
> usemymalloc=n, bincompat5005=undef

I went ahead and rebuilt perl using the system malloc instead, and what
I found was that on the function that started this whole topic, the
memory allocation went overboard and postgresql bailed with "out of
memory". Using the perl malloc, apparently postgres/plperl/libperl was
able to manage the memory load although it got "stuck" for the session.

Closing the session (psql) did end up freeing all the memory, at least
from top's perspective (since the process no longer existed) -- running
the query from the command line (psql -c "select function()") worked
over several iterations so I do believe that the memory does get freed
upon closing the connection. In fact we were able to run the query using
this method with 4 simultaneous connections and, although we went heavy
into swap, all four did complete. So the issue can be narrowed down to
a per-connection basis where the amount of memory needed by the function
would normally exceed available memory; the handler for this "overflow"
does something with the extra memory needed such that subsequent
invocations of the function during the same connection end up eventually
creating a malloc error.

(This is inconsistent with the behavior on the 8G opteron system ... but
I will reevaluate the issue I saw there and see if it is related to
something else).

Sven



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] plperl doesn't release memory

2005-03-24 Thread Sven Willenberger
On Thu, 2005-03-24 at 13:51 -0500, Tom Lane wrote:
> Sven Willenberger <[EMAIL PROTECTED]> writes:
> > On Thu, 2005-03-24 at 11:34 -0500, Tom Lane wrote:
> >> The first thing to figure out is whether the leak is inside Perl or in
> >> Postgres proper.  If I were trying to do this I'd run the function a
> >> couple times, then attach to the (idle) backend with gdb and do
> >> call MemoryContextStats(TopMemoryContext)
> 
> > Not sure entirely how to interpret the results ... a cursory examination
> > shows 516096 total in cachememory but I don't know if that reflects the
> > state of "unfreed" memory (or perhaps the 354728 used is unfreed?):
> 
> That looks like the normal steady-state condition.  The leak must be
> inside Perl then.
> 
> [ thinks for a bit... ]  Actually it seems possible that there's a
> problem with poor interaction between Postgres and Perl.  During the SPI
> query they will both be making pretty substantial memory demands, and it
> could be that the underlying malloc library isn't coping gracefully and
> is ending up with very fragmented memory.  That could result in
> out-of-memory problems when in fact neither package is leaking anything
> per se.
> 
> What you probably ought to do next is build Postgres with a debugging
> malloc library to learn more about who's eating up what.  I am not sure
> whether libperl will automatically use the malloc attached to the main
> executable or whether you need to whack it around too.  (Come to think
> of it, doesn't Perl normally use its very own private malloc?  Maybe
> there's an issue right there ...)
> 
>   regards, tom lane
> 
Yes, on these systems, perl was built with -Dusemymalloc (and
concurrently -Ui_malloc), so there could very well be an issue with
malloc pools going awry. A quick dig reveals that Linux tends to
build perl (by default) with the system malloc, which may explain why
your script did not display this same behavior. I will try to rebuild
perl using the system malloc and see how that affects things.

Sven


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] plperl doesn't release memory

2005-03-24 Thread Sven Willenberger
On Thu, 2005-03-24 at 11:34 -0500, Tom Lane wrote:
> Sven Willenberger <[EMAIL PROTECTED]> writes:
> > Any suggestions on how to trace what is going on? Debug output methods?
> 
> The first thing to figure out is whether the leak is inside Perl or in
> Postgres proper.  If I were trying to do this I'd run the function a
> couple times, then attach to the (idle) backend with gdb and do
>   call MemoryContextStats(TopMemoryContext)
> to dump a summary of Postgres' memory usage to stderr.  If that doesn't
> show any remarkable bloat then the problem is inside Perl (and beyond my
> ability to do much with).
> 
> One thing I'm still a bit baffled about is why my test didn't show a
> problem; it sure looks identical to yours.  Maybe the issue is Perl
> version specific?  I tested using the current FC3 version, which is
> perl-5.8.5-9.
> 
>   regards, tom lane

I am not entirely sure how to interpret the results ... a cursory
examination shows 516096 total in CacheMemoryContext, but I don't know if
that reflects the state of "unfreed" memory (or perhaps the 354728 used
is unfreed?):

TopMemoryContext: 32768 total in 3 blocks; 7392 free (51 chunks); 25376
used
MessageContext: 8192 total in 1 blocks; 7912 free (1 chunks); 280 used
PortalMemory: 8192 total in 1 blocks; 8176 free (1 chunks); 16 used
CacheMemoryContext: 516096 total in 6 blocks; 161368 free (1 chunks);
354728 used
lookup_state_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
lookup_ocn_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
lookup_lata_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
locate_npanxx_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
matall_intra_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
matall_inter_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
matall_npanxx_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
offshore_pkey: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_index_indrelid_index: 1024 total in 1 blocks; 912 free (0 chunks);
112 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 912 free (0 chunks);
112 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_shadow_usesysid_index: 1024 total in 1 blocks; 912 free (0 chunks);
112 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 912 free (0 chunks);
112 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 848 free (0 chunks);
176 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_cast_source_target_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_type_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_language_name_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_class_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_operator_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 712 free (0
chunks); 312 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_proc_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 776 free (0 chunks);
248 used
pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 776 free (0
chunks); 248 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 912 free (0 chunks);
112 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_conversion_oid_index: 1024 total in 1 blocks; 912 free (0 chunks);
112 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 776 free (0 chunks);
248 used
pg_language_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_conversion_default_index: 1024 total in 1 blocks; 712 free (0
chunks); 312 used
pg_shadow_usename_index: 1024 total in 1 blocks; 912 free (0 chunks);
112 used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 848 free (0
chunks); 176 used
pg_namespace_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_group_sysid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 712 free (0
chunks); 312 used
pg_group_name_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 848 free (0 chunks);
176 used
pg_opclass_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112
used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 848 free (0 chunks); 176
used
MdSmgr: 8192 total in 1 blocks; 7000 free (0 chunks); 1192 used
DynaHash: 8192 total in 1 block

Re: [GENERAL] plperl doesn't release memory

2005-03-24 Thread Sven Willenberger
On Wed, 2005-03-23 at 18:25 -0500, Tom Lane wrote:
> Sven Willenberger <[EMAIL PROTECTED]> writes:
> > I have been experiencing an issue with plperl and PostgreSQL 8.0.1 in
> > that after calling a plperl function memory does not get released.
> 
> AFAICT the result of spi_exec_query gets released fine, as soon as it's
> no longer referenced within perl.  Perhaps your perl code is written in
> such a way that a reference to the hash result value remains live after
> the function exit?
> 
> I tried this:
> 
> create or replace function nrows(text) returns int as $$
>   my ($tabname) = @_;
>   my $rv = spi_exec_query("select * from $tabname");
>   return $rv->{processed};
> $$ LANGUAGE plperl;
> 
> and ran it repeatedly against a large table.  The memory usage went
> up as expected, but back down again as soon as the function exited.
> 
> If you think it's actually a plperl bug, please show a self-contained
> example.
> 

The function in question that we used for testing is:
-
CREATE OR REPLACE FUNCTION f_dom_e_lcr() RETURNS text AS $$


my $on_shore = "select
root_decks.id,material_all.npanxx,material_all.carrier,material_all.inter
from root_decks, material_all, lookup
where lookup.state not in (select state from offshore)
and lookup.npanxx = material_all.npanxx
and root_decks.type = 'ie'
and root_decks.carrier = material_all.carrier;";

my $rv = spi_exec_query($on_shore);
my $nrows = $rv->{processed};

return $nrows ;
$$ LANGUAGE plperl;
-

Now, thinking that perhaps the $nrows variable was getting stuck, we tried
the following, which resulted in the exact same memory issue:
-
CREATE OR REPLACE FUNCTION f_dom_e_lcr() RETURNS text AS $$


my $on_shore = "select
root_decks.id,material_all.npanxx,material_all.carrier,material_all.inter
from root_decks, material_all, lookup
where lookup.state not in (select state from offshore)
and lookup.npanxx = material_all.npanxx
and root_decks.type = 'ie'
and root_decks.carrier = material_all.carrier;";

my $rv = spi_exec_query($on_shore);

return $rv->{processed};
$$ LANGUAGE plperl;
-

The result set is just under 1 million rows and top shows postgres using
some 600MB of memory. After the 3rd run of this function on a 1GB RAM
box, the error mentioned in the original part of this thread occurs and
the database restarts.

Any suggestions on how to trace what is going on? Debug output methods?

Sven


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] plperl doesn't release memory

2005-03-23 Thread Sven Willenberger
I have been experiencing an issue with plperl and PostgreSQL 8.0.1 in
that after calling a plperl function memory does not get released. Two
different systems and each show different symptoms:

1) system: FreeBSD 5.3-Stable i386 with 1 GB RAM, dual Xeon P4
processors.
   script: plperl issues an SPI_EXEC_QUERY('select rows from table where
condition');
this loads the result set into memory (to the tune of some 600MB based
on top output). The function iterates through each row to grab some
totals information and spits back a number.
On the 2nd iteration of this function the connection is lost :

Out of memory during request for 1012 bytes, total sbrk() is 291207168
bytes!
Callback called exit.
LOG: server process (PID 12672) exited with exit code 12
LOG: terminating any other active server processes
LOG: received immediate shutdown request
LOG: all server processes terminated; reinitializing
LOG: database system was interrupted at 2005-03-23 17:17:23 EST
LOG: checkpoint record is at 2/4D7F206C
LOG: redo record is at 2/4D7F206C; undo record is at 0/0; shutdown TRUE
LOG: next transaction ID: 4913594; next OID: 60798748
LOG: database system was not properly shut down; automatic recovery in
progress

2) system: FreeBSD 5.3-Stable amd64 with 8GB RAM, dual opteron
processors
   script: plperl issues an SPI_EXEC_QUERY('select rows from table where
condition') which fetches roughly 18k tuples of rather small size. Each
row is acted upon and, if criteria are met, a reference to the row is
pushed onto an array (reference).
   after several iterations of this script (a dozen or so), a file is
COPYed into the database consisting of some 38k rows and each row is
acted upon by a trigger (plpgsql) -- this process normally takes just
under a minute, but after running the plperl function a dozen or so
times, the run time for the COPY exceeds 3 minutes. Restarting the
PostgreSQL backend (restart) brings the COPY time back down to
sub-minute range.

Is it normal for plperl to *not* release any memory? Or perhaps plperl
is not pfreeing or SPI_FINISHing cleanly?

Sven Willenberger


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] PostgreSQL training

2005-03-14 Thread Sven Willenberger

Alex Adriaanse presumably uttered the following on 03/13/05 15:23:
Hi,
I'm working on an application for a client that uses PostgreSQL as its 
database backend.  The client wants to train their team on PostgreSQL so 
that they can maintain the application and the database themselves after 
it goes live should they need to.  As far as I know the majority of them 
don't have any experience with databases (other than using applications 
that rely on them of course).  The type of training that they would need 
would need to cover generic database concepts (database design, SQL), as 
well as PostgreSQL-specific information (mostly database administration, 
maybe some performance tuning).

They can travel anywhere within the US to take the training.  I noticed 
there were a few training events posted on postgresql.org, but only one 
of them is within the US (PostgreSQL Bootcamp, next month), and I'm not 
sure if that course would offer the right type of training for them (I 
think they'd need something more elementary).

Do you guys have any suggestions as far as training is concerned 
(training events throughout the year, companies that do PostgreSQL 
training, etc.)?

Thanks a lot,
Alex
I would like to mention that the folks at the Big Nerd Ranch (tm) who
run the PostgreSQL boot camp also do on-site training, which can then
be tailored to your own organization's needs (www.bignerdranch.com).
Having attended their week-long training, I can certainly attest to the
benefits such training has.

Sven
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Referencing created tables fails with message that

2005-02-28 Thread Sven Willenberger

Tommy Svensson presumably uttered the following on 02/27/05 18:50:
I have just installed Postgresql and tried it for the first time.
One very serious problem I ran into was when actually trying to use 
created tables.
Creating a simple table without any foreign keys works OK, but after 
creating the
table it is not possible to do a select on it! I tried the following 
variants:

SELECT * FROM <tablename>;
SELECT * FROM public.<tablename>;
SELECT * FROM <dbname>.public.<tablename>;
All result in the message "The relation <tablename> does not exist!" or "The 
relation public.<tablename> does not exist!".

Creating a new table with a foreign key referencing the first table is 
also impossible due to exactly the same error message!

This behaviour is the same using  DBVisualizer/jdbc or psql.
So the question is how do you actually reference the tables you have 
created so that postgres will find them ?
The tables do actually get created. I can se them in DBVisualizer.

What database did you create the tables in, and what database are you
connecting to when trying to issue your SELECT statements? Are you
trying to SELECT from the same application you are using to CREATE
TABLE? It sounds like you created the tables in <dbname> and perhaps
are trying to SELECT * FROM <tablename> while connected to template1.
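
An easy way to verify from psql which database you are connected to and
what tables it actually contains:

SELECT current_database();
\dt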

Sven
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] basic trigger using OLD not working?

2005-02-26 Thread Sven Willenberger

[EMAIL PROTECTED] presumably uttered the following on 02/25/05 19:14:
Yes, thank you, I corrected my function from statement level to row level.
This did get rid of the error message. However, I still get no output from
an OLD variable that should contain data: see the test variable in the
simple case below.
How else can I test OLD variables? This is the simplest test case I can
think of. Any suggestions would be appreciated!
Thanks,
Rick

I think you have created a statement level trigger (If they existed in
7.4.7...)  by not including FOR EACH ROW in your create statement.  In
statement level triggers, there is no OLD or NEW.

Rick Casey <[EMAIL PROTECTED]> 02/24/05 1:22 PM >>>
Hello all,
I am trying to do a simple thing: create a log history of deletes and
updates, but I am having trouble getting it to work in PG 7.4.7
(under Debian Linux 2.6.8).
I have reduced my code to the following trivial case:
Here is the code that creates the delete trigger:
create trigger PEDIGREES_hist_del_trig
AFTER DELETE
on PEDIGREES
EXECUTE PROCEDURE logPedigreesDel();
Here is the trigger code: (famindid is an integer field in the Pedigrees
table):
CREATE OR REPLACE FUNCTION logPedigreesDel() RETURNS TRIGGER AS '
DECLARE
   test integer;
begin
   test := OLD.famindid;
   RAISE EXCEPTION ''OLD.famindid = '', test;
   return OLD;
end;
' LANGUAGE plpgsql;


You need a placeholder for your variable in your RAISE expression (it
uses a printf-like syntax):

RAISE EXCEPTION ''OLD.famindid = %'', test;

BTW, if you just want to see the variable without having your function
bail on you, try RAISE NOTICE ''OLD.famindid = %'', test;
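
Putting that together with the row-level fix mentioned in the earlier
reply, the trigger creation would be:

create trigger PEDIGREES_hist_del_trig
AFTER DELETE
on PEDIGREES
FOR EACH ROW
EXECUTE PROCEDURE logPedigreesDel();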

Sven
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] maximum size

2005-02-24 Thread Sven Willenberger
On Wed, 2005-02-23 at 15:34 +0100, Molinet Sylviane wrote:
> Hello,
> 
> could you be so kind to help me with following requests ?
> 
> I need the maximum size of the following types :
> varchar(n)
> numeric(n,p)
> text
> bytea
> 
> Could you please give me the name of articles or publications and where I 
> can find it for these requests ?
> 
> I want to transfer a oracle database 8i to postgreSQL 7.3.2. Is there an 
> article or publication which related about this ?

Out of curiosity, why are you transferring to such an old version of
postgresql instead of 8.0?

Sven


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] databases/p5-postgresql-plperl links to wrong

2005-02-11 Thread Sven Willenberger
On Fri, 2005-02-11 at 19:54 -0500, Tom Lane wrote:
> Sven Willenberger <[EMAIL PROTECTED]> writes:
> > The rpath solution you provided to patch GNUmakefile did not
> > work however, bailing with:
> 
> > gcc -O2 -fno-strict-aliasing -Wmissing-declarations, -Wl,
> > -rpath, /usr/local/lib/perl5/5.8.6/mach/CORE -fpic -DPIC -I.
> > -I/usr/local/lib/perl5/5.8.6/mach/CORE -I../../../src/include   -c -o
> > plperl.o plperl.c
> > gcc: cannot specify -o with -c or -S and multiple compilations
> 
> If that's an accurate cut-and-paste, then you let a space sneak into
> the -Wl,-rpath,path construct; perhaps that's the problem?
> 
>   regards, tom lane

Ahh, the linewrap issue of mail clients snuck in ... yes, removing the
space did result in a successful build:

cc -O2 -pipe -march=pentiumpro -fno-strict-aliasing
-Wmissing-declarations, -Wl,-rpath,/usr/local/lib/perl5/5.8.6/mach/CORE
-fPIC -DPIC -I. -I/usr/local/lib/perl5/5.8.6/mach/CORE
-I../../../src/include   -c -o plperl.o plperl.c
plperl.c: In function `compile_plperl_function':
plperl.c:541: warning: cast to pointer from integer of different size
plperl.c:730: warning: cast from pointer to integer of different size
cc: -rpath: linker input file unused since linking not done
cc: /usr/local/lib/perl5/5.8.6/mach/CORE: linker input file unused since
linking not done
cc -O2 -pipe -march=pentiumpro -fno-strict-aliasing
-Wmissing-declarations, -Wl,-rpath,/usr/local/lib/perl5/5.8.6/mach/CORE
-fPIC -DPIC -I. -I/usr/local/lib/perl5/5.8.6/mach/CORE
-I../../../src/include   -c -o eloglvl.o eloglvl.c
cc: -rpath: linker input file unused since linking not done
cc: /usr/local/lib/perl5/5.8.6/mach/CORE: linker input file unused since
linking not done
/usr/bin/perl /usr/local/lib/perl5/5.8.6/ExtUtils/xsubpp
-typemap /usr/local/lib/perl5/5.8.6/ExtUtils/typemap SPI.xs >SPI.c
cc -O2 -pipe -march=pentiumpro -fno-strict-aliasing
-Wmissing-declarations, -Wl,-rpath,/usr/local/lib/perl5/5.8.6/mach/CORE
-fPIC -DPIC -I. -I/usr/local/lib/perl5/5.8.6/mach/CORE
-I../../../src/include   -c -o SPI.o SPI.c
cc: -rpath: linker input file unused since linking not done
cc: /usr/local/lib/perl5/5.8.6/mach/CORE: linker input file unused since
linking not done
ar cr libplperl.a `lorder plperl.o eloglvl.o SPI.o | tsort`
ranlib libplperl.a
cc -O2 -pipe -march=pentiumpro -fno-strict-aliasing
-Wmissing-declarations, -Wl,-rpath,/usr/local/lib/perl5/5.8.6/mach/CORE
-fPIC -DPIC -shared -Wl,-x,-soname,libplperl.so.0 plperl.o eloglvl.o
SPI.o -L../../../src/port -L/usr/local/lib -Wl,-E
-L/usr/local/lib /usr/local/lib/perl5/5.8.6/mach/auto/DynaLoader/DynaLoader.a 
-L/usr/local/lib/perl5/5.8.6/mach/CORE -lperl -lm -lcrypt -lutil  
-R/usr/local/lib -o libplperl.so.0
rm -f libplperl.so
ln -s libplperl.so.0 libplperl.so

# cd work/postgresql-7.4.7/src/pl/plperl/
# ldd ./libplperl.so
./libplperl.so:
libperl.so => /usr/local/lib/perl5/5.8.6/mach/CORE/libperl.so
(0x2810b000)
libm.so.2 => /usr/lib/libm.so.2 (0x28207000)
libcrypt.so.2 => /usr/lib/libcrypt.so.2 (0x28222000)
libutil.so.3 => /usr/lib/libutil.so.3 (0x2823b000)


I have submitted a patch containing the other solution; for what it's
worth, attached is this solution as a patch. I will leave it to the
maintainer(s) to determine the "cleaner" solution :-)

Sven
--- src/pl/plperl/GNUmakefile.orig  Wed Jan 21 14:25:11 2004
+++ src/pl/plperl/GNUmakefile   Fri Feb 11 20:06:31 2005
@@ -15,7 +15,7 @@
 
 # The code isn't clean with regard to these warnings.
 ifeq ($(GCC),yes)
-override CFLAGS := $(filter-out -Wall -Wmissing-declarations 
-Wmissing-prototypes, $(CFLAGS))
+override CFLAGS := $(filter-out -Wall -Wmissing-declarations 
-Wmissing-prototypes, $(CFLAGS), -Wl,-rpath,$(perl_archlibexp)/CORE)
 endif
 
 override CPPFLAGS := -I$(srcdir) -I$(perl_archlibexp)/CORE $(CPPFLAGS)

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] databases/p5-postgresql-plperl links to wrong

2005-02-11 Thread Sven Willenberger
On Fri, 2005-02-11 at 18:35 -0500, Tom Lane wrote:
> Sven Willenberger <[EMAIL PROTECTED]> writes:
> > it looks like the sed line is stripping out pgac_tmp2 from the pgac_tmp1
> > leaving the system to use the default perl libperl.so.
> > when I modified perl_embed_ldflags to not remove the -R line, the
> > compile works and links to the proper libperl.so
> 
> It worked for you, but not necessarily for other people.  This thread
> may be instructive:
> http://archives.postgresql.org/pgsql-bugs/2004-11/msg00181.php
> 
>   regards, tom lane

I see they used the same solution that I did, namely removing the part of
the sed statement that strips the ccdlflags from ExtUtils::Embed's
output. The rpath solution you provided to patch GNUmakefile did not
work however, bailing with:

gcc -O2 -fno-strict-aliasing -Wmissing-declarations, -Wl,
-rpath, /usr/local/lib/perl5/5.8.6/mach/CORE -fpic -DPIC -I.
-I/usr/local/lib/perl5/5.8.6/mach/CORE -I../../../src/include   -c -o
plperl.o plperl.c
gcc: cannot specify -o with -c or -S and multiple compilations

(As you said it may not work on all platforms). 

At this point I will try to submit or suggest a patch for the freebsd
ports system to simply change the lines in the configure config/perl.m4
files to strip out the replacement part of the sed statement.

Thanks,

Sven


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] databases/p5-postgresql-plperl links to wrong libperl.so

2005-02-11 Thread Sven Willenberger
On Fri, 2005-02-11 at 23:20 +0100, Anton Berezin wrote:
> On Fri, Feb 11, 2005 at 11:09:35PM +0100, Anton Berezin wrote:
> > On Fri, Feb 11, 2005 at 04:41:55PM -0500, Sven Willenberger wrote:
> > > On Fri, 2005-02-11 at 21:35 +0100, Anton Berezin wrote:
> > > > On Fri, Feb 11, 2005 at 11:10:15AM -0500, Sven Willenberger wrote:
> > > > > On Fri, 2005-02-11 at 16:46 +0100, Palle Girgensohn wrote:
> > > > > > --On fredag, februari 11, 2005 10.24.22 -0500 Sven Willenberger 
> > > > > > <[EMAIL PROTECTED]> wrote:
> > > > 
> > > > > > > FreeBSD 4.10
> > > > > > > Postgresql 7.4.7
> > > > > > > Perl 5.8.6_2 (from ports)
> > > > 
> > > > > > > When building databases/p5-postgresql-plperl the resultant 
> > > > > > > plperl.so
> > > > > > > (/usr/local/lib/postgresql/plperl.so) links to the libperl.so
> > > > > > > in /usr/lib instead of /usr/local/lib/perl5/5.8.6/mach/CORE/.
> > > > 
> > > > > > > ldd /usr/local/lib/postgresql/plperl.so
> > > > > > > /usr/local/lib/postgresql/plperl.so:
> > > > > > > libperl.so => /usr/lib/libperl.so (0x2810b000)
> > > > > > > libm.so.2 => /usr/lib/libm.so.2 (0x281a3000)
> > > > > > > libcrypt.so.2 => /usr/lib/libcrypt.so.2 (0x281be000)
> > > > > > > libutil.so.3 => /usr/lib/libutil.so.3 (0x281d7000)
> > 
> > > > 2. _Or_ plperl does not go all the way to be a conformant perl-embedding
> > > >application.  It looks at $Config{archlibexp}, but it does not follow
> > > >directions described in perlembed(1).  In this case it's linking
> > > >should be fixed to respect that.
> > 
> > > This does seem to be the case. I built postgresqql from source this time
> > > rather than ports with ./configure --with-perl --with-openssl and, as
> > > you point out, the congigure does find its way to the CORE directory but
> > > the end product still links to the /usr/lib/libperl.so location. 
> > 
> > Alright.  It is not plperl folks fault, shared libraries and binaries do
> > things differently.  Consider:
> > 
> > $ cat >binary.c
> > int main() {}
> > ^D
> > 
> > $ gcc binary.c -Wl,-E -L/usr/local/lib \
> > /usr/local/lib/perl5/5.6.2/mach/auto/DynaLoader/DynaLoader.a \
> > -L/usr/local/lib/perl5/5.6.2/mach/CORE -lperl -lm -lcrypt -lutil \
> > -o binary
> > 
> > $ ldd ./binary
> > ./binary:
> > libperl.so => /usr/lib/libperl.so (0x28066000)
> > libm.so.2 => /usr/lib/libm.so.2 (0x280fe000)
> > libcrypt.so.2 => /usr/lib/libcrypt.so.2 (0x28119000)
> > libutil.so.3 => /usr/lib/libutil.so.3 (0x28132000)
> > libc.so.4 => /usr/lib/libc.so.4 (0x2813b000)
> 
> G.  What I wrote does not make any sense at all.  :-(
> 
> It is plperl fault after all,
> 
> perl -MExtUtils::Embed -le ldopts
> 
> clearly states
> 
>   -Wl,-R/usr/local/lib/perl5/5.6.2/mach/CORE -Wl,-E  -L/usr/local/lib
>   /usr/local/lib/perl5/5.6.2/mach/auto/DynaLoader/DynaLoader.a
>   -L/usr/local/lib/perl5/5.6.2/mach/CORE -lperl -lm -lc -lcrypt -lutil
> 
> Note the -R here, and the absense of it in the lines you posted.
> 
> Anton 'need more coffee or more sleep' Berezin

Ok, I have found a couple lines in configure (4354-4356) and
config/perl.m4 (31-33) that confuse me a bit as it seems that their sole
purpose is to actually strip *out* the
-R/usr/local/lib/perl5/5.8.6/mach/CORE portion of the linker args:

pgac_tmp1=`$PERL -MExtUtils::Embed -e ldopts`
pgac_tmp2=`$PERL -MConfig -e 'print $Config{ccdlflags}'`
perl_embed_ldflags=`echo X"$pgac_tmp1" | sed "s/^X//;s%$pgac_tmp2%%"`

now pgac_tmp1 contains what should really be the ld args for the final
build/linking, pgac_tmp2 contains
-Wl,-R/usr/local/lib/perl5/5.8.6/mach/CORE

it looks like the sed line is stripping out pgac_tmp2 from the pgac_tmp1
leaving the system to use the default perl libperl.so.

when I modified perl_embed_ldflags to not remove the -R line, the
compile works and links to the proper libperl.so
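
For illustration, the change amounts to leaving ExtUtils::Embed's output
intact rather than subtracting $Config{ccdlflags} from it; a sketch of
the modified configure fragment (untested, just to make the idea
concrete):

pgac_tmp1=`$PERL -MExtUtils::Embed -e ldopts`
# keep ccdlflags (and with them the -Wl,-R.../CORE run-path) intact
perl_embed_ldflags=`echo X"$pgac_tmp1" | sed "s/^X//"`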

Sven

P.S. From this point forward I will direct any followups to just the
postgresql mailing list, as the issue does not appear to be
FreeBSD-specific at this point.


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] databases/p5-postgresql-plperl links to wrong libperl.so

2005-02-11 Thread Sven Willenberger
On Fri, 2005-02-11 at 21:35 +0100, Anton Berezin wrote:
> On Fri, Feb 11, 2005 at 11:10:15AM -0500, Sven Willenberger wrote:
> > On Fri, 2005-02-11 at 16:46 +0100, Palle Girgensohn wrote:
> > > --On fredag, februari 11, 2005 10.24.22 -0500 Sven Willenberger 
> > > <[EMAIL PROTECTED]> wrote:
> 
> > > > FreeBSD 4.10
> > > > Postgresql 7.4.7
> > > > Perl 5.8.6_2 (from ports)
> 
> > > > When building databases/p5-postgresql-plperl the resultant plperl.so
> > > > (/usr/local/lib/postgresql/plperl.so) links to the libperl.so
> > > > in /usr/lib instead of /usr/local/lib/perl5/5.8.6/mach/CORE/.
> 
> > > > ldd /usr/local/lib/postgresql/plperl.so
> > > > /usr/local/lib/postgresql/plperl.so:
> > > > libperl.so => /usr/lib/libperl.so (0x2810b000)
> > > > libm.so.2 => /usr/lib/libm.so.2 (0x281a3000)
> > > > libcrypt.so.2 => /usr/lib/libcrypt.so.2 (0x281be000)
> > > > libutil.so.3 => /usr/lib/libutil.so.3 (0x281d7000)
> 
> > > > the configure script used by postgresql itself tests for the lib
> > > > directory via:
> > > >|> perl -MConfig -e 'print $Config{archlibexp}'
> > > > /usr/local/lib/perl5/5.8.6/mach
> 
> > > > so it appears to find it ... is something in ports overriding this
> > > > location or is there something I can -Define to have it use the correct
> > > > libperl.so?
> 
> > > I'd say this is a bug in the perl port. Just like it relinks the perl 
> > > binary, it should ultimately relink the libperl.so file.
> 
> I don't think so.  All symlinking that is done with /usr/bin/perl* does
> not disrupt functioning of the base system perl, only modifies the
> defaults used.  One can still do /usr/bin/perl5.005_03 and it will work
> perfectly.  Destroying /usr/lib/libperl.so will change that.
> 
> So I'd say, it is one of two things:
> 
> 1. _Either_ Sven has LD_LIBRARY_PATH set in his or global environment in
>such a way that it includes /usr/lib in there.  If this is the case,
>removing it would solve the problem.  The reason to not have /usr/lib
>in LD_LIBRARY_PATH and for the described error to occur is two-fold:
>first, /usr/lib is already in ldconfig cache, so having it in
>LD_LIBRARY_PATH has no purpose;  secondly, LD_LIBRARY_PATH takes
>precedence over any libraries linked with explicit directory
>information, which is an intended behavior.
> 
This is not the case, so this one can be ruled out as a cause.

> 2. _Or_ plperl does not go all the way to be a conformant perl-embedding
>application.  It looks at $Config{archlibexp}, but it does not follow
>directions described in perlembed(1).  In this case it's linking
>should be fixed to respect that.
> 
> \Anton.

This does seem to be the case. I built postgresql from source this time
rather than from ports, with ./configure --with-perl --with-openssl, and, as
you point out, the configure does find its way to the CORE directory but
the end product still links to the /usr/lib/libperl.so location. 

The output from the lines building plperl are:

plperl.c: In function `compile_plperl_function':
plperl.c:541: warning: cast to pointer from integer of different size
plperl.c:730: warning: cast from pointer to integer of different size
gcc -O2 -fno-strict-aliasing -fpic -DPIC -I.
-I/usr/local/lib/perl5/5.8.6/mach/CORE -I../../../src/include   -c -o
eloglvl.o eloglvl.c
/usr/bin/perl /usr/local/lib/perl5/5.8.6/ExtUtils/xsubpp
-typemap /usr/local/lib/perl5/5.8.6/ExtUtils/typemap SPI.xs >SPI.c
gcc -O2 -fno-strict-aliasing -fpic -DPIC -I.
-I/usr/local/lib/perl5/5.8.6/mach/CORE -I../../../src/include   -c -o
SPI.o SPI.c
ar cr libplperl.a `lorder plperl.o eloglvl.o SPI.o | tsort`
ranlib libplperl.a
gcc -O2 -fno-strict-aliasing -fpic -DPIC -shared
-Wl,-x,-soname,libplperl.so.0 plperl.o eloglvl.o SPI.o
-L../../../src/port -Wl,-E
-L/usr/local/lib /usr/local/lib/perl5/5.8.6/mach/auto/DynaLoader/DynaLoader.a 
-L/usr/local/lib/perl5/5.8.6/mach/CORE -lperl -lm -lcrypt -lutil  
-R/usr/local/pgsql/lib -o libplperl.so.0
rm -f libplperl.so
ln -s libplperl.so.0 libplperl.so

So somewhere in there it is preferentially picking the /usr/lib location
rather than the mach/CORE location. I am cc'ing the postgresql list on
this as well; at any rate it does not seem to be a port-specific or
perl-installation specific error here.

Sven Willenberger


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Error in trigger after upgrading to 8.0.1?

2005-02-05 Thread Sven Willenberger

Bjørn T Johansen wrote:
I have just upgraded to 8.0.1 from 7.4.6 and now my triggers don't 
work. When I insert a row in a table that has an insert trigger, I get 
the following error msg:

ERROR:  INSERT is not allowed in a non-volatile function
What does this mean?
Regards,
BTJ
It would appear as though the function in question was not created as a 
VOLATILE function (i.e. it was created with the STABLE or IMMUTABLE 
attribute).

Can you try:
CREATE OR REPLACE FUNCTION function_name ... AS $$
    -- existing function body here
$$ LANGUAGE plpgsql VOLATILE;
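
As a quick check, the current volatility marking can be read straight out
of the catalog (an illustrative query; the function name is a
placeholder):

SELECT proname, provolatile  -- 'v' = volatile, 's' = stable, 'i' = immutable
FROM pg_proc
WHERE proname = 'your_trigger_function';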
Sven Willenberger
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] Dereferencing a 2-dimensional array in plpgsql

2005-01-31 Thread Sven Willenberger

Tom Lane wrote:
Sven Willenberger <[EMAIL PROTECTED]> writes:
The problem stems from being unable to assign values to an array without 
first initializing the array in plpgsql.

I think we changed this in 8.0.  Before 8.0, trying to assign to an
element of a NULL array yielded a NULL result array, but I think now
we let you produce a one-element array that way.

Using an 8.0 testbox I find that the arrays still need to be initialized:
DECLARE
blah varchar[];
foo varchar;
BEGIN
blah = ''{}'';
blah[1] := ''bar'';
foo := blah[1];
RAISE NOTICE ''blah[1] = %'',foo;
RETURN NULL;
END;
Will raise notice containing "bar".
DECLARE
blah varchar[];
foo varchar;
BEGIN
blah[1] := ''sven'';
foo := blah[1];
RAISE NOTICE ''blah[1] = %'',foo;
RETURN NULL;
END;
Will raise notice containing <NULL>.
Leaving the subscript off lets you initialize the variable with empty 
braces (or with values inside the braces); omitting the braces results in 
"array value must start with "{" or dimension information". Also, this only 
applies to single-dimension arrays; I cannot find how to initialize 
2-dimension arrays. As a result, the only way I have seen to do this 
is to create 2 arrays and have one array point to each row, one 
by one, of the large master array. Keep in mind this is all in plpgsql.

Sven

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Dereferencing a 2-dimensional array in plpgsql

2005-01-31 Thread Sven Willenberger

Tom Lane wrote:
Sven Willenberger <[EMAIL PROTECTED]> writes:
I am having an issue with trying to dereference a 2-dimensional array in 
plpgsql. The idea is to have a setup like:

DECLARE
myarray varchar[][];
myvar   char;
BEGIN
--stuff
myarray[1] := ''{value1,value2,value3}'';
myarray[2] := ''{valuea,valueb,valuec}'';

--If I then:

myvar := myarray[1][1];

--I get a subscript error generated.

That isn't a two-dimensional array, it's a one-dimensional array with
some curly braces in the element values.  Keep in mind that the number
of []s you write in the DECLARE is just decoration --- it's not enforced.
What's determining the actual array shape in this example is the
subscripts you write in the assignments.
The problem I seem to be having is initializing the array. For example 
the following function:
create or replace function temp_keys() returns setof key_tuple as '
DECLARE
myarray varchar[][];
myother varchar;
mytuple key_tuple;
counter int;
BEGIN
myarray[1][1] := ''sven'';
myarray[1][2] := ''key18'';
myarray[1][3] := ''A'';
myarray[2][1] := ''dave'';
myarray[2][2] := ''key18'';
myarray[2][3] := ''B'';
for counter in 1 .. 2 LOOP
myother := myarray[1][2];
RAISE NOTICE ''myother = %'',myother;
END LOOP;
mytuple.carrier := myarray[1][1];
mytuple.prefix := myarray[1][2];
mytuple.rate := myarray[1][3];
RETURN NEXT mytuple;
RETURN;
END;
' LANGUAGE plpgsql;

returns:
select * from temp_keys();
NOTICE:  myother = <NULL>
NOTICE:  myother = <NULL>
 holder | keynum | rating
--------+--------+--------
        |        |
(1 row)
However I have found that the following construct works, albeit very slowly:
DECLARE
myarray varchar[][];
subarray varchar[];
BEGIN
--initialize the arrays
myarray := ''{}'';
subarray := ''{}'';
myarray[1] := ''{sven,key18,A}'';
myarray[2] := ''{dave,key18,b}'';
subarray := myarray[1];
RAISE NOTICE ''subarray = %'',subarray;
--snip
running this will return 'sven' in the NOTICE section.
The problem stems from being unable to assign values to an array without 
first initializing the array in plpgsql. I can initialize 
single-dimension arrays as noted, but any attempt to initialize and 
populate 2-dimension arrays results in subscript and/or NULL-entry issues.

This is an offshoot of the moving backward/rewinding a cursor issue 
about which I had inquired earlier and trying to load a table into an 
array rather than reopening and closing a cursor thousands of times. 
Turns out that using the construct above (with 2 arrays) works, but is 
actually slower (??!!) than opening a cursor thousands of times.

Sven
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[GENERAL] Dereferencing a 2-dimensional array in plpgsql

2005-01-31 Thread Sven Willenberger
I am having an issue with trying to dereference a 2-dimensional array in 
plpgsql. The idea is to have a setup like:

DECLARE
myarray varchar[][];
myvar   char;
BEGIN
--stuff
myarray[1] := ''{value1,value2,value3}'';
myarray[2] := ''{valuea,valueb,valuec}'';
--If I then:
myvar := myarray[1][1];
--I get a subscript error generated. I have tried every combination of 
myarray[1:1][1], etc., all to no avail.

I have also tried pre-initializing myarray with
myarray := ''{{}}'';
The docs seem to indicate that a simple myarray[1][2], for example, 
should work, but that does not hold true in plpgsql. Is there 
another type of initialization that needs to be done? Any advice?
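
For what it's worth, assigning a complete two-dimensional literal (rather
than ''{{}}'') does appear to produce a real 2-D array. A small sketch
reusing the values above:

DECLARE
    myarray varchar[][];
    myvar   varchar;
BEGIN
    -- a genuine 2x3 literal, not strings that merely contain braces
    myarray := ''{{value1,value2,value3},{valuea,valueb,valuec}}'';
    myvar := myarray[2][1];
    RAISE NOTICE ''myarray[2][1] = %'', myvar;   -- prints valuea
END;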

Sven
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] Howto determin the number of elemnts of an array

2005-01-31 Thread Sven Willenberger

Együd Csaba wrote:
Hi,
how can I determine the number of elements of an array? I saw the function
array_dims(), which returns a string value representing the dimensions of the
array. Is there a function which returns only the number of elements as an
integer?

-- Csaba
array_upper(arrayname,dimension) will return the number of elements in 
the specified dimension of array arrayname.
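
For example (an illustrative one-liner, not from the original mail):

select array_upper('{a,b,c}'::text[], 1);   -- returns 3

Strictly speaking, array_upper() returns the upper bound of that
dimension, which equals the element count whenever the lower bound is the
default of 1.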

Sven
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] FreeBSD 5.2.1, postgresql 7.4.5 and shared memory settings

2005-01-31 Thread Sven Willenberger

Rick Apichairuk wrote:
On Sun, 30 Jan 2005 21:12:08 -0500, Sven Willenberger <[EMAIL PROTECTED]> wrote:

I assume you added these variables to either the GENERIC or a custom
kernel in /usr/src/sys/i386/conf. If you created a custom kernel called
CUSTOM, then you would:

That's what I did.
 

cd /usr/src
make buildkernel KERNCONF=CUSTOM
make installkernel KERNCONF=CUSTOM

The exact commands I used were:
KERN_CONF=CUSTOM make buildkernel > make.buildkernel.log 2>&1 & tail
-f make.buildkernel.log
KERN_CONF=CUSTOM make installkernel > make.installkernel.log 2>&1 &
tail -f make.installkernel.log
 
KERNCONF not KERN_CONF ... you are simply rebuilding GENERIC which is 
why your settings are not "sticking" (see the /usr/src/UPDATING file).

Sven
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] FreeBSD 5.2.1, postgresql 7.4.5 and shared memory settings

2005-01-30 Thread Sven Willenberger

Rick Apichairuk wrote:
I tried the settings you gave and they worked (with some caveats).
After I recompiled the FreeBSD kernel with the options:
options SYSVSHM
options SYSVMSG
options SYSVSEM
options SHMMAXPGS=131072
options SEMMNI=128
options SEMMNS=512
options SEMUME=100
options SEMMNU=256
I rebooted and attempted to start postgresql and it failed again. I
then ran sysctl -a to verify that the settings were correct. They were
not. Regardless of what's configured in the kernel conf file, the
default values were loaded. 
I assume you added these variables to either the GENERIC or a custom 
kernel in /usr/src/sys/i386/conf. If you created a custom kernel called 
CUSTOM, then you would:

cd /usr/src
make buildkernel KERNCONF=CUSTOM
make installkernel KERNCONF=CUSTOM
Just wanted to verify that you built and installed the kernel using the 
config file that has the settings you listed above (your sysctl output 
would indicate otherwise).
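
As a quick spot-check after booting the new kernel (an illustrative
command; pick whichever OIDs correspond to the options you set):

sysctl kern.ipc.shmmax kern.ipc.shmall kern.ipc.semmns kern.ipc.semmni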

Sven
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] PL/pgSQL functions and RETURN NEXT

2005-01-30 Thread Sven Willenberger

Craig Bryden wrote:
Hi
Firstly, let me say that I am a newbie to PostgreSQL.
I have written a PL/pgSQL function that will return a set of results. I have
included the code below

***
CREATE TYPE pr_SomeFunction_ReturnType as (ID smallint,
TypeID smallint,
Name varchar(50),
Description varchar(500),
TypeName varchar(20));
CREATE OR REPLACE FUNCTION pr_SomeFunction (p_TypeID smallint)
RETURNS setof pr_SomeFunction_ReturnType
AS
$$
DECLARE
r_Return pr_SomeFunction_ReturnType;
BEGIN
SELECT l.ID, l.TypeID, l.Name, l.Description, lt.Name as TypeName
INTO r_Return
FROM tb_Item l
JOIN tb_ItemType lt
ON l.TypeID = lt.TypeID;
RETURN NEXT r_Return;
RETURN;
END;
$$ LANGUAGE 'plpgsql';

***
When I run "select * from pr_SomeFunction(1::smallint);", I only get one
record back, instead of two. 
You need a loop construct here:
FOR r_return IN SELECT l.ID, l.TypeID, l.Name, l.Description, lt.Name as 
TypeName FROM tb_Item l JOIN tb_ItemType lt USING (TypeID) LOOP
	RETURN NEXT r_Return;
END LOOP;
RETURN;
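
Folded back into your original function, it would look something like
this (an untested sketch using the names from your post; the WHERE clause
is a guess, since p_TypeID was never actually used in the original):

CREATE OR REPLACE FUNCTION pr_SomeFunction (p_TypeID smallint)
RETURNS setof pr_SomeFunction_ReturnType
AS
$$
DECLARE
    r_Return pr_SomeFunction_ReturnType;
BEGIN
    FOR r_Return IN
        SELECT l.ID, l.TypeID, l.Name, l.Description, lt.Name as TypeName
        FROM tb_Item l
        JOIN tb_ItemType lt ON l.TypeID = lt.TypeID
        WHERE l.TypeID = p_TypeID   -- presumably the intended filter
    LOOP
        RETURN NEXT r_Return;       -- emit one result row per iteration
    END LOOP;
    RETURN;
END;
$$ LANGUAGE 'plpgsql';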

HTH,
Sven
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Moving from Sybase to Postgres - Stored Procedures

2005-01-29 Thread Sven Willenberger

Andre Schnoor wrote:
"Joshua D. Drake" wrote:
Andre Schnoor wrote:
Hi,
I am moving from Sybase to pgsql but have problems with stored procedures.
The typical procedure uses
a) named parameters,
b) local variable declarations and assignments
c) transactions
d) cursors, views, etc.
I can't seem to find these things in the Postgres function syntax.

[...]

Perhaps if you provided the actual problem? Is there a specific 
procedure that you are trying to port that you do not understand in the 
PgSQL sense?

Thank you for asking, Joshua. I've put an example procedure skeleton here:
CREATE PROCEDURE do_something
@song_id int, 
@user_id int, 
@method int, 
@length int = 0, 
@date_exact datetime,
@default_country int = null
AS  
   -- temporary variables
   DECLARE 
  @artist int, 
  @sample int, 
  @date varchar(32), 
  @country int
BEGIN 
-- assign temporary variables
select @date = convert(varchar(32),@date_exact,101) 
select @artist = user_id, @sample = is_sample from sto_song where song_id = @song_id 
-- perform conditional code
if (@sample = 1) begin
begin transaction
... do something ...
commit transaction
end else begin
... do something else ...
end
-- return results
select 
result1 = ... some expression ...,
result2 = ... another expression ...
END

I could not yet translate this to PgSQL, as I can't find any control structures, variable declaractions, etc. 

I think what you want is plpgsql (which needs to be instantiated on the 
database in question):
createlang -U postgres plpgsql dbname (for example)

The documentation is pretty decent on the language itself: for example:
CREATE FUNCTION somefunction (integer, integer, timestamp) RETURNS 
[setof] datatype AS $$
DECLARE
	-- alias the passed arguments
	thesong_id ALIAS FOR $1;
	theuser_id ALIAS FOR $2;
	datetime ALIAS FOR $3;
	-- temporary variables
	artist int;
	sample int;
	thedate date;
BEGIN
	thedate := datetime::date;
	SELECT INTO artist user_id from sto_song where song_id = thesong_id;
	SELECT INTO sample is_sample from sto_song where song_id = thesong_id;
	IF sample = 1 THEN
		-- do stuff
	ELSE
		-- do other stuff
	END IF;
	RETURN something;
END;
$$ LANGUAGE plpgsql;

See if that helps you ... it really looks as though the languages are 
similar enough that porting the stored procedures should be a fairly 
straightforward proposition.

Sven
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[GENERAL] Moving within a cursor declared in plpgsql function

2005-01-28 Thread Sven Willenberger
I have searched the web/archives for a solution to this and have found a
few entries to my question but nothing definitive. In essence I have a
function that opens a cursor (refcursor). It would appear though that
one can only traverse the cursor forward-only and only one record at a
time; is there any way (or has the ability been added in 8.0) to open
the cursor, fetch into variable until NOT FOUND and then reposition the
pointer to the beginning of the cursor?

In essence what I am doing is a type of best-match algorithm; a key
(firstkey) of length x and owner y and rating (firstrating) represents
the first element. A best fit is considered the key (newkey) of
shortest length belonging to a different owner such that newkey and
substring(firstkey,1,length(newkey)) match and newrating is better than
firstrating.

The initial pre-match table contains all the owners, keys, and ratings
and this is loaded in as a cursor in order of key desc (key is an
integer so naturally sorts from longest to shortest here) and rating.
Right now, I have to reload the cursor for each iteration through the
table, offsetting by the iteration:

DECLARE
curs CURSOR (key integer) FOR SELECT * FROM foo ORDER BY key desc,
rating offset key;

Ideally I could see saving tons of time if I could simply:
FETCH ABSOLUTE iteration FROM curs
for each iteration without having to use offsets, etc.

Any ideas on this or on a better way of trying this?
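
For reference, the SQL-level cursor interface can already reposition like
this (a sketch against the table above; whether plpgsql exposes it in a
given release is exactly the open question):

BEGIN;
DECLARE curs SCROLL CURSOR FOR
    SELECT * FROM foo ORDER BY key DESC, rating;
FETCH ABSOLUTE 5 FROM curs;   -- jump straight to row 5
MOVE ABSOLUTE 0 IN curs;      -- rewind to before the first row
FETCH NEXT FROM curs;         -- start over from row 1
COMMIT;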

Sven


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] querying two connections in one query

2005-01-24 Thread Sven Willenberger
On Mon, 2005-01-24 at 14:48 +, Abdul-Wahid Paterson wrote:
> Hi,
> 
> Is it possible, perhaps through some extension, to query two database
> through one SQL select statement where the two databases have
> different connections? Something similar to how you can link databases
> in Oracle.
> 

I believe contrib/dblink may be what you are seeking ...
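
A minimal sketch of the sort of query dblink enables (connection string,
remote table, and column types here are all invented for illustration):

SELECT *
FROM dblink('dbname=otherdb host=otherhost',
            'SELECT id, name FROM remote_table')
     AS t(id integer, name text);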

Sven


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] handing created and updated fields

2005-01-10 Thread Sven Willenberger
On Mon, 2005-01-10 at 15:45 +0100, Daniel Martini wrote:
> Hi,
> 
> Citing "Jim C. Nasby" <[EMAIL PROTECTED]>:
> > ON INSERT: force created and updated to be current_timestamp
> > ON UPDATE: deny updates to created; force updated to be set to
> > current_timestamp
> [snip]
> > Does anyone have an example of the best way to handle this scenario?
> 
> Something along the lines of the following should work (but test first
> anyways, though I have copied smaller parts of this from the definitions
> in one of my databases here, I have made modifications to fit your 
> specific task, so typos/errors might have sneaked in):
> 
> create function update_trigger() returns trigger as
> 'begin
> new.created := old.created;
> new.updated := CURRENT_TIMESTAMP;
> return new;
> end;'
> language 'plpgsql';
> 
> create trigger update_trigger BEFORE UPDATE ON your_table_name
> FOR EACH ROW EXECUTE PROCEDURE update_trigger();
> 
> create function insert_trigger() returns trigger as
> 'begin
> new.created := CURRENT_TIMESTAMP;
> new.updated := CURRENT_TIMESTAMP;
> return new;
> end;'
> language 'plpgsql';
> 
> create trigger insert_trigger BEFORE INSERT ON your_table_name
> FOR EACH ROW EXECUTE PROCEDURE insert_trigger();
> 
> HTH,
> Regards,
> Daniel

These could also be combined into one trigger since they are nearly
identical anyway:

CREATE FUNCTION combined_trigger() RETURNS TRIGGER AS '
BEGIN
   NEW.updated := CURRENT_TIMESTAMP;
   IF TG_OP = ''INSERT'' THEN
  NEW.created := CURRENT_TIMESTAMP;
   ELSE
  NEW.created := OLD.created;
   END IF;
   RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER combined_trigger BEFORE INSERT OR UPDATE on
your_table_name FOR EACH ROW EXECUTE PROCEDURE combined_trigger();

Sven


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] Overhead of dynamic query in trigger

2004-12-03 Thread Sven Willenberger
I am curious as to how much overhead building a dynamic query in a
trigger adds to the process. The example:

Have a list of subcontractors, each of which gets unique pricing. There
is a total of roughly 100,000 items available and some 100
subcontractors. The 2 design choices would be 100 tables (one for each
sub) at 100,000 rows or 1 table with 10,000,000 rows.

Choice 1:
table has item number (indexed) and price

Choice 2:
table has subcontractor id, item number, and price; index on
(subcontractorid, item number).

Table of orders would have a trigger to insert line item cost:
---
Trigger Choice 1:
Select into thetable lookupprice from subcontractors where
subcontractorid = NEW.subcontractorid;

thequery := ''Select price from '' || thetable.lookupprice || '' where
itemnumber = '' || NEW.itemnumber;

FOR therow IN EXECUTE thequery LOOP
NEW.itemcost := therow.price;
END LOOP;
RETURN NEW;
---
Trigger Choice 2:
Select into thetable lookupprice from subcontractors where
subcontractorid = NEW.subcontractorid;

Select into therow price from mastertable where subcontractorid =
NEW.subcontractorid and itemnumber = NEW.itemnumber;

NEW.itemcost := therow.price;
RETURN NEW;
---

Doing a select from the command line, the mastertable method (with id
and partno index) is faster than looking up a single item in a named
table (with partno index). At what point would Trigger Choice 2 fall
behind Trigger Choice 1 in performance (if ever)? Is there a way to
analyze the performance of dynamic queries? If I had only 10
subcontractors or if I had 1000 subcontractors, at what point is the
overhead of building/executing a dynamic query negated by the amount of
time to look up both the subid and part number in one massive table?
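
One rough empirical check is simply timing a representative batch of
inserts against each trigger variant from psql (a hypothetical session;
the table and column names are invented):

\timing
-- same batch against each design; compare the reported runtimes
INSERT INTO orders_choice1 (subcontractorid, itemnumber)
    SELECT 42, itemnumber FROM master_items LIMIT 1000;
INSERT INTO orders_choice2 (subcontractorid, itemnumber)
    SELECT 42, itemnumber FROM master_items LIMIT 1000;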

Thanks,

Sven


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] pg_restore with Fc fails with [archiver] out of

2004-08-12 Thread Sven Willenberger
On Tue, 2004-08-10 at 19:28 -0400, Sven Willenberger wrote:
> 
> Tom Lane wrote:
> > Sven Willenberger <[EMAIL PROTECTED]> writes:
> > 
> >>pg_restore in malloc(): error: allocation failed
> >>Abort (core dumped)
> > 
> > 
> > A stack trace from that core file would be useful in figuring out what
> > the problem is...  also, what PG version is this?
> > 
> > regards, tom lane
> 
> psql -V
> psql (PostgreSQL) 7.4.3
> contains support for command-line editing
> 


> Frame 6 - 11:
> (gdb) up 6
> #6  0x0804c9a1 in ReadStr (AH=0x805e000) at pg_backup_archiver.c:1530
> 1530pg_backup_archiver.c: No such file or directory.
>  in pg_backup_archiver.c
> (gdb) up
> #7  0x0804d2d8 in ReadToc (AH=0x805e000) at pg_backup_archiver.c:1878
> 1878in pg_backup_archiver.c
> (gdb) up
> #8  0x0804fa94 in InitArchiveFmt_Custom (AH=0x805e000) at 
> pg_backup_custom.c:196
> 196 pg_backup_custom.c: No such file or directory.
>  in pg_backup_custom.c
> (gdb) up
> #9  0x0804cf7b in _allocAH (FileSpec=0xbfbfed19 "./callingdata", 
> fmt=archCustom, compression=0, mode=archModeWrite)
>  at pg_backup_archiver.c:1743
> 1743pg_backup_archiver.c: No such file or directory.
>  in pg_backup_archiver.c
> (gdb) up
> #10 0x0804a628 in OpenArchive (FileSpec=0x0, fmt=archUnknown) at 
> pg_backup_archiver.c:103
> 103 in pg_backup_archiver.c
> (gdb) up
> #11 0x0804a22a in main (argc=5, argv=0xbfbfebe8) at pg_restore.c:333
> 333 pg_restore.c: No such file or directory.
>  in pg_restore.c
> 
> Let me know if you want any more details, a disassembled frame, what 
> have you.

For future searches,etc, the problem was a corrupted dump file due to
ftp issues when transferring the file. Thanks, Tom, for helping diagnose
this.

Sven


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] pg_restore with Fc fails with [archiver] out of memory

2004-08-10 Thread Sven Willenberger

Tom Lane wrote:
Sven Willenberger <[EMAIL PROTECTED]> writes:
pg_restore in malloc(): error: allocation failed
Abort (core dumped)

A stack trace from that core file would be useful in figuring out what
the problem is...  also, what PG version is this?
			regards, tom lane
psql -V
psql (PostgreSQL) 7.4.3
contains support for command-line editing
gdb:
Reading symbols from /usr/local/lib/libpq.so.3...done.
Loaded symbols for /usr/local/lib/libpq.so.3
Reading symbols from /usr/local/lib/libintl.so.6...done.
Loaded symbols for /usr/local/lib/libintl.so.6
Reading symbols from /usr/local/lib/libssl.so.3...done.
Loaded symbols for /usr/local/lib/libssl.so.3
Reading symbols from /usr/local/lib/libcrypto.so.3...done.
Loaded symbols for /usr/local/lib/libcrypto.so.3
Reading symbols from /lib/libz.so.2...done.
Loaded symbols for /lib/libz.so.2
Reading symbols from /lib/libreadline.so.4...done.
Loaded symbols for /lib/libreadline.so.4
Reading symbols from /lib/libcrypt.so.2...done.
Loaded symbols for /lib/libcrypt.so.2
Reading symbols from /lib/libm.so.2...done.
Loaded symbols for /lib/libm.so.2
Reading symbols from /lib/libutil.so.4...done.
Loaded symbols for /lib/libutil.so.4
Reading symbols from /lib/libc.so.5...done.
Loaded symbols for /lib/libc.so.5
Reading symbols from /usr/local/lib/libiconv.so.3...done.
Loaded symbols for /usr/local/lib/libiconv.so.3
Reading symbols from /lib/libncurses.so.5...done.
Loaded symbols for /lib/libncurses.so.5
Reading symbols from /libexec/ld-elf.so.1...done.
Loaded symbols for /libexec/ld-elf.so.1
#0  0x282a3dcf in kill () from /lib/libc.so.5
(gdb) bt
#0  0x282a3dcf in kill () from /lib/libc.so.5
#1  0x28298878 in raise () from /lib/libc.so.5
#2  0x28310f82 in abort () from /lib/libc.so.5
#3  0x2830f6fe in tcflow () from /lib/libc.so.5
#4  0x2830ff9b in tcflow () from /lib/libc.so.5
#5  0x28310356 in malloc () from /lib/libc.so.5
#6  0x0804c9a1 in ReadStr (AH=0x805e000) at pg_backup_archiver.c:1530
#7  0x0804d2d8 in ReadToc (AH=0x805e000) at pg_backup_archiver.c:1878
#8  0x0804fa94 in InitArchiveFmt_Custom (AH=0x805e000) at 
pg_backup_custom.c:196
#9  0x0804cf7b in _allocAH (FileSpec=0xbfbfed19 "./callingdata", 
fmt=archCustom, compression=0, mode=archModeWrite)
at pg_backup_archiver.c:1743
#10 0x0804a628 in OpenArchive (FileSpec=0x0, fmt=archUnknown) at 
pg_backup_archiver.c:103
#11 0x0804a22a in main (argc=5, argv=0xbfbfebe8) at pg_restore.c:333
#12 0x08049b52 in _start ()

Frame 6 - 11:
(gdb) up 6
#6  0x0804c9a1 in ReadStr (AH=0x805e000) at pg_backup_archiver.c:1530
1530pg_backup_archiver.c: No such file or directory.
in pg_backup_archiver.c
(gdb) up
#7  0x0804d2d8 in ReadToc (AH=0x805e000) at pg_backup_archiver.c:1878
1878in pg_backup_archiver.c
(gdb) up
#8  0x0804fa94 in InitArchiveFmt_Custom (AH=0x805e000) at 
pg_backup_custom.c:196
196 pg_backup_custom.c: No such file or directory.
in pg_backup_custom.c
(gdb) up
#9  0x0804cf7b in _allocAH (FileSpec=0xbfbfed19 "./callingdata", 
fmt=archCustom, compression=0, mode=archModeWrite)
at pg_backup_archiver.c:1743
1743pg_backup_archiver.c: No such file or directory.
in pg_backup_archiver.c
(gdb) up
#10 0x0804a628 in OpenArchive (FileSpec=0x0, fmt=archUnknown) at 
pg_backup_archiver.c:103
103 in pg_backup_archiver.c
(gdb) up
#11 0x0804a22a in main (argc=5, argv=0xbfbfebe8) at pg_restore.c:333
333 pg_restore.c: No such file or directory.
in pg_restore.c

Let me know if you want any more details, a disassembled frame, what 
have you.

Thanks,
Sven
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] pg_restore with Fc fails with [archiver] out of memory error

2004-08-10 Thread Sven Willenberger
Created a pg_dump with -Fc (custom format, compressed) that resulted in a
300+MB file. Now trying to pg_restore this thing fails with either an
out of memory error (as in the subject line) on FreeBSD 4.10-STABLE or
a: 
pg_restore in malloc(): error: allocation failed
Abort (core dumped)
error on FreeBSD 5.2.1-P9 (RELEASE)

In both cases I have increased the max data segment size to 1.5GB and
the max stack size to 768M or so.

So obviously we have some error message when malloc'ing for the AR
structure. Any ideas here? Is it trying to stuff the 300MB file into
memory and uncompress and sort it there? Has anyone found a workaround
for this? (I have found similar questions and zero answers doing google
searches and mailing list searches). Is there any way of uncompressing
this file *other* than using pg_restore?
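
One cheap sanity check that stops short of a full restore is asking
pg_restore for the archive's table of contents (illustrative; the
filename is the one from the stack traces earlier in this thread):

pg_restore -l ./callingdata   # parses just the TOC

If even the listing aborts, the archive itself is suspect rather than the
restore step; per the follow-up earlier in this thread, a corrupted
transfer is what this turned out to be.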

Sven

p.s. Although I will check the archives, I would appreciate a CC on any
responses.



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster