[GENERAL] Auto Vacuum - Time based

2007-06-27 Thread Ashish Karalkar
Hello all,
 I want to set autovacuum in such a way that it will start at midnight 
(12:00 A.M.).
Is there any way to do this other than a cron job on Linux?

Thanks In advance

With Regards
ashish ...

Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-27 Thread Bruce McAlister
Alvaro Herrera wrote:
 Bruce McAlister wrote:
 
 I have just checked the pg_stat_all_tables view in the pg_catalog schema and
 I can see the index-scan and other table counters incrementing. The data in
 the tables seems to be updating. Just an FYI, I've enabled manual VACUUM
 ANALYZE runs on the blueface-service database until we've found what's
 going wrong here. The output from the select query you suggested is as
 follows:

  datname | datconfig
 -+---
  postgres|
  blueface-webmail|
 [etc]
 
 Ok so it's not that you manually disabled autovacuum.  And pgstat is
 working on those databases.  And all databases share the postgresql.conf
 autovacuum configuration.
 

Yes, that's correct, all those databases fall under a single
postgresql.conf configuration file.

 I enabled the log_line_prefix option and put a %p in the value, and I
 only get the following output from the logs:

 Jun 26 23:46:07 bfiedb01 postgres[2836]: [ID 748848 local0.debug] [5-1]
2836 DEBUG:  autovacuum: processing database blueface-crm
 Jun 26 23:47:32 bfiedb01 postgres[2863]: [ID 748848 local0.debug] [5-1]
2863 DEBUG:  autovacuum: processing database blueface-crm
 
 I assume that there is no other line for process 2836 before the line
 for process 2863.  Can you recheck that?
 

No, there are no other lines between the two autovacuum runs. This is a
more recent snippet, straight from the log file, no modifications:

Jun 27 08:01:03 bfiedb01 postgres[15801]: [ID 748848 local0.debug] [5-1]
   15801 DEBUG:  autovacuum: processing database blueface-crm
Jun 27 08:02:57 bfiedb01 postgres[15826]: [ID 748848 local0.debug] [5-1]
   15826 DEBUG:  autovacuum: processing database blueface-crm
Jun 27 08:04:55 bfiedb01 postgres[15871]: [ID 748848 local0.debug] [5-1]
   15871 DEBUG:  autovacuum: processing database blueface-crm



 What does
 select datname, age(datfrozenxid) from pg_database;
 show?
 

select datname, age(datfrozenxid) from pg_database;
 datname |age
-+---
 postgres| 103837746
 blueface-webmail| 103851569
 blueface-billingreports | 103943960
 blueface-service| 12166
 blueface-cards  | 103948279
 template1   | 103831712
 template0   | 387945736
 blueface-crmsupport | 103933017
 blueface-qualmon| 103881267
 asterisk-cdrgw  | 103959639
 hylafax | 103847354
 thelab-sipswitch| 103827152
 whitelabel-ibb  | 103813843
 whitelabel-pleasant | 103796261
 whitelabel-rapid| 103791708
 whitelabel-test | 103787680
 whitelabel-worlddest| 103782784
 blueface-crm| 441746613
 blueface-billedcalls| 100127483
 asterisk-cdr| 14575
 mysipswitch | 103842683
 whitelabel-ice  | 103805834


 
 I've also just run a manual 'VACUUM FULL VERBOSE ANALYZE;' just to
 ensure it goes through.
 
 And it does finish successfully?
 

Yes, the full vacuum completed successfully.

 
 What does the DEBUG1 output of a normal autovacuum run look like in the
 log file?
 
 Nothing interesting shows up:
 
 LOG:  autovacuum: processing database test1
 LOG:  autovacuum: processing database test2
 
 If you try with debug2, it looks a bit more interesting:
 
 LOG:  autovacuum: processing database test2
 DEBUG:  autovac: will VACUUM foo
 DEBUG:  vacuuming public.foo
 DEBUG:  foo: removed 1 row versions in 55 pages
 DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
 DEBUG:  foo: found 1 removable, 0 nonremovable row versions in 55 pages
 DETAIL:  0 dead row versions cannot be removed yet.
 There were 0 unused item pointers.
 0 pages are entirely empty.
 CPU 0.00s/0.00u sec elapsed 0.00 sec.
 DEBUG:  foo: truncated 55 to 0 pages
 DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
 

I will run with DEBUG2 for a while and see if my output looks anything
like this :)

 
 The only thing I can think of right now is that pgstats does not have
 entries for the other databases for some reason.  How that could happen
 escapes me.
 

If you need any information to try and get to the bottom of it all, then
please let me know. It would be nice to get back to automatic vacuum runs :)



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] problem importing data with psql

2007-06-27 Thread Richard Huxton
Try and remember to CC: the list - there are plenty of other people who 
can help too.



chandresh rana wrote:

Hi Richard,

Let me explain exactly what I am trying to achieve.

I am trying to export a database from the server (ver 7.4) to my local
server (ver 8.2). The problem with exporting is that certain tables are of
very large size and would take weeks to complete if they are included, so I
need to exclude those tables.


OK. How big are we talking about? What is the size:
1. Of the large table(s)?
2. Of the database as a whole?


After exporting/importing the schema and the included tables, I now
need to get a limited set of rows from the excluded tables to get
the database started locally.


I would copy the rows I want to a new table (still in the 7.4 database) 
and then dump that.


You can have pg_dump dump individual tables (use pg_dump from 8.2, it's 
smarter) and then either:

1. rename the table in the dumped file
2. rename it after importing

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Auto Vacuum - Time based

2007-06-27 Thread Richard Huxton

Ashish Karalkar wrote:

Hello all,
 I want to set autovacuum in such a way that it will start at midnight 
(12:00 A.M.).
Is there any way to do this other than a cron job on Linux?


What's wrong with cron? It's the tool that's designed to run timed jobs.

--
  Richard Huxton
  Archonet Ltd
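For completeness, a typical crontab entry for a nightly manual vacuum. The database name, psql path, and log location below are placeholders, not from the thread:

```
# Run as the postgres user (crontab -e -u postgres); fires at 00:00 daily.
# m h dom mon dow  command
0 0 * * * /usr/bin/psql -d mydb -c 'VACUUM ANALYZE;' >> /var/log/pg_vacuum.log 2>&1
```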

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] LC_CTYPE and matching accented chars

2007-06-27 Thread Michael Fuhr
On Wed, Jun 27, 2007 at 09:28:24AM +1200, Martin Langhoff wrote:
 Alvaro Herrera wrote:
  I think it would be much easier if you did something like
  
  select * from test where lower(to_ascii(value)) = lower(to_ascii('martín'));
  
  When to_ascii doesn't work (for example because it doesn't work in UTF8)
  you may want to use convert() to recode the text to latin1 or latin9.
 
 Well, with the example above to_ascii doesn't work.
 
   select to_ascii(value) from test ;
   ERROR:  encoding conversion from UTF8 to ASCII not supported
 
 And neither does convert
 
   select convert(value using utf8_to_ascii) from test ;
   ERROR:  character 0xc3 of encoding MULE_INTERNAL has no equivalent
in SQL_ASCII

As Alvaro suggested, try converting to latin1 or latin9 and then
calling to_ascii:

select 'martin' = to_ascii(convert('martín', 'latin1'), 'latin1');
 ?column? 
--
 t
(1 row)

For other possibilities search the list archives for examples of
unaccent functions that normalize text to NFD (Unicode Normalization
Form D) and remove nonspacing marks.  Here's a message with a couple
of PL/Perl functions:

http://archives.postgresql.org/pgsql-general/2007-01/msg00702.php
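For reference, the normalize-to-NFD-and-strip-marks approach those functions use can be sketched outside the database as well; a minimal Python illustration (mine, not from the linked message):

```python
import unicodedata


def unaccent(text: str) -> str:
    # Decompose to NFD so accents become separate combining marks,
    # then drop the nonspacing marks (Unicode category "Mn").
    nfd = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in nfd if unicodedata.category(ch) != "Mn")


print(unaccent("martín"))  # -> martin
```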

-- 
Michael Fuhr

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Quota

2007-06-27 Thread Hannes Dorbath
What would be a solution to prevent a single user/schema to fill the 
disk using PostgreSQL? Is it a good idea to use separate table spaces?


--
Regards,
Hannes Dorbath

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] postgres and webmin

2007-06-27 Thread Danyelle Gragsone

Greetings,

I am trying to setup postgres and postfix using webmin.  I am
following this guide:
http://gentoo-wiki.com/HOWTO_Email:_A_Complete_Virtual_System_-_Postfix_to_Postgres

I received the following in the postgres db section of webmin:
select * from pg_database order by datname failed : no pg_hba.conf
entry for host "[local]", user "postgres", database "template1", SSL
off

I sadly have no clue how to go about fixing this.  Please help :)  The
guide talks about this error, but it really does not tell you how to fix
it specifically.  I don't even see a section in the specified area for
the authentication method they are suggesting.

Operating systemGentoo Linux
Webmin version  1.350

I believe the problem is in this section.
# local is for Unix domain socket connections only
host all all 10.2.0.202 255.255.255.255 trust
# IPv4 local connections:
hostall all 127.0.0.1/32  trust
# IPv6 local connections:
hostall all ::1/128   trust

I could be wrong.  Please help the newb.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Auto Vacuum - Time based

2007-06-27 Thread Ashish Karalkar

Thanks Richard for your reply.
Actually, I was looking for some setting in the postgresql.conf file.
Is there one?

With Regards
Ashish
- Original Message - 
From: Richard Huxton [EMAIL PROTECTED]

To: Ashish Karalkar [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Wednesday, June 27, 2007 1:00 PM
Subject: Re: [GENERAL] Auto Vacuum - Time based



Ashish Karalkar wrote:

Hello all,
 I want to set autovacuum in such a way that it will start at midnight 
(12:00 A.M.).

Is there any way to do this other than a cron job on Linux?


What's wrong with cron? It's the tool that's designed to run timed jobs.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly 



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Auto Vacuum - Time based

2007-06-27 Thread Richard Huxton

Ashish Karalkar wrote:

Thanks Richard for your reply.
Actually, I was looking for some setting in the postgresql.conf file.
Is there one?


Not to run at just a specific time.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] passing long array of integers to dblink remote call

2007-06-27 Thread valgog
Hi everybody,

I could not find any information on passing arrays to a dblink
remote call.

By now I am using something like:

select t.r from dblink( connection_name, 'select r from
remote_stored_procedure( ARRAY[' || array_to_string( my_id_array,
',' ) || '] )' ) as t(r text);

The construct works, but I actually want to be sure that it will work
even for arrays as long as 5 values.

And I do not know what the limit on the statement text size is when we
pass it to the dblink call!

Is there any better way to call a remote procedure that needs a large
integer array as a parameter?

If not, is the dblink team considering making dblink aware of remote-call
parameters?

With best regards,

Valentine Gogichashvili
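To get a feel for the sizes involved: the statement text grows linearly with the array, and as far as I know PostgreSQL imposes no small fixed limit on statement length (it is ultimately bounded by memory and the roughly 1 GB protocol message limit). A hypothetical Python sketch of the same string construction, just to measure it:

```python
def remote_call_sql(proc: str, ids) -> str:
    # Mirrors the thread's approach: embed an ARRAY[...] literal in the
    # statement text that dblink will send to the remote server.
    return "select r from %s( ARRAY[%s] )" % (proc, ",".join(str(i) for i in ids))


stmt = remote_call_sql("remote_stored_procedure", range(50000))
print(len(stmt))  # a few hundred kB of SQL text for 50,000 ids
```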


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] postgres and webmin

2007-06-27 Thread William Leite Araújo

2007/6/27, Danyelle Gragsone [EMAIL PROTECTED]:


Greetings,

I am trying to setup postgres and postfix using webmin.  I am
following this guide:

http://gentoo-wiki.com/HOWTO_Email:_A_Complete_Virtual_System_-_Postfix_to_Postgres

I received the following in the postgres db section of webmin:
select * from pg_database order by datname failed : no pg_hba.conf
entry for host [local], user postgres, database template1, SSL
off

I sadly have no clue on how to go by fixing this.  Please help :)  It
talks about this error in the guide.  But it really does not tell you
how to fix it specifically.  I don't even see a section in the area
specified for the authentication they are suggesting.

Operating systemGentoo Linux
Webmin version  1.350

I believe the problem is in this section.
# local is for Unix domain socket connections only



# DO NOT DISABLE!
# If you change this first entry you will need to make sure that the
# database super user can access the database using some other method.
# Noninteractive access to all databases is required during automatic
# maintenance (autovacuum, daily cronjob, replication, and similar tasks).
#
# Database administrative login by UNIX sockets
local   all postgres  ident sameuser


host all all 10.2.0.202 255.255.255.255 trust

# IPv4 local connections:
hostall all 127.0.0.1/32  trust
# IPv6 local connections:
hostall all ::1/128   trust

I could be wrong.  Please help the newb.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq





--
William Leite Araújo
Student of fatherhood - 13th week


Re: [GENERAL] Does the slonik EXECUTE SCRIPT call wait for command termination?

2007-06-27 Thread Dmitry Koterov

The work-around for old Slony versions:

replication_wait() {
    echo "Waiting until all slaves are in sync with the master...";
    (
        slonik_print_preamble
        cat <<EOF
# Hack for old Slony: DROP PATH here is a dummy operation which generates
# a SYNC event and saves its ID for the wait below, nothing more.
DROP PATH (SERVER = $MASTER_NODE_ID, CLIENT = $MASTER_NODE_ID);
WAIT FOR EVENT (
    ORIGIN = ALL,
    CONFIRMED = ALL,
    WAIT ON = $MASTER_NODE_ID
);
EOF
    ) | slonik
    echo "All slaves are in sync.";
}

This script waits until all slaves are in sync with the master.


On 6/1/07, Dmitry Koterov [EMAIL PROTECTED] wrote:


Hello.

It seems that when I use EXECUTE SCRIPT
(http://slony.info/documentation/stmtddlscript.html) and slonik reports
PGRES_TUPLES_OK, updates may NOT be finished yet on all slaves.
I ran a long ALTER TABLE statement (about 3 minutes); the master updated
immediately after I saw PGRES_TUPLES_OK, but the slave only 10 or more
minutes later.

So, the questions are:

1. THE MAIN question: is it possible to ask slonik to wait until all
schema changes have been propagated to all slaves after a slonik call?

2. If slonik updates slaves not immediately, but via event creation, why
does it still need information about ALL database hosts, not only
the master database? I have to enumerate all slave hosts in slonik
calls:

cluster name = my_cluster;
 node 1 admin conninfo='host=host1 dbname=m user=slony port=5432
password=**';
 node 2 admin conninfo='host=host2 dbname=m user=slony port=5432
password=**';
 node 3 admin conninfo='host=host3 dbname=m user=slony port=5432
password=**';
 ...
  execute script (
set id = 1,
filename = '/tmp/e0H7Aa03Fh',
event node = 1
  );

But if schema changes are propagated via events, theoretically we only
need to know the master's address...



Re: [GENERAL] Quota

2007-06-27 Thread Albe Laurenz
Hannes Dorbath wrote:
 
 What would be a solution to prevent a single user/schema to fill the 
 disk using PostgreSQL? Is it a good idea to use separate table spaces?

Tablespaces are the only solution I can think of.
If you create a tablespace on a different file system, then a table in
that tablespace can only fill up that file system.

Yours,
Laurenz Albe
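As a sketch of that tablespace approach (the path, role, and table names below are illustrative, not from the thread):

```sql
-- Assume /mnt/limited_fs is a small, separate file system (e.g. a fixed-size
-- LVM volume) whose directory is owned by the postgres OS user.
CREATE TABLESPACE limited_space LOCATION '/mnt/limited_fs/pgdata';

-- Make the restricted user's objects land there by default.
ALTER ROLE restricted_user SET default_tablespace = 'limited_space';

-- Or place an individual table explicitly:
CREATE TABLE restricted_user.big_data (id int, payload text)
    TABLESPACE limited_space;
```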

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] COPY to/from non-local file

2007-06-27 Thread Jaime Silvela

I've been looking around for this functionality:
Is it possible to use COPY with a file that doesn't reside in the DB's 
filesystem? I know there is sort of a solution in using COPY from stdin/ 
COPY to stdout, however that depends on calling the psql command, which 
breaks the flow of control of the programs I'm trying to write.


Here's the story: I have a Ruby on Rails application which sits on 
server A, and connects to a Postgres Database running on B. Using the 
regular flow, the rails code writes into various tables, and then 
generates a table that needs to be exported into a file which will be 
used as input into a separate program, a simulator, and kept in the 
filesystem of the Rails server. Using the COPY command would entail 
either of
a) Using COPY to generate the file on server B, then transfer to A - but 
how? ftp?  I want to reduce this type of coupling
b) Using COPY TO STDOUT from psql, called in the Rails code with a 
backtick, then gathering the output  and filing it. - but this solution 
depends on having psql in the path of the Rails server, reintroducing 
the server credentials, and from a programming point of view is kind of 
ugly.
c) The Postgres driver for Rails tries to give an interface to the COPY 
command using raw_connection, getline and endcopy, but it's quite 
brittle, so I'm avoiding it altogether.


At the moment I'm avoiding those solutions, and instead get the table 
into Rails space with a CURSOR on a SELECT, then simply write the file 
in Rails, one line at a time. However, the tables I'm writing are pretty 
big, and  the performance is so much worse than with COPY...


Any suggestions?
Thanks
Jaime


***
Bear Stearns is not responsible for any recommendation, solicitation,
offer or agreement or any information about any transaction, customer
account or account activity contained in this communication.

Bear Stearns does not provide tax, legal or accounting advice.  You
should consult your own tax, legal and accounting advisors before
engaging in any transaction. In order for Bear Stearns to comply with
Internal Revenue Service Circular 230 (if applicable), you are notified
that any discussion of U.S. federal tax issues contained or referred to
herein is not intended or written to be used, and cannot be used, for
the purpose of:  (A) avoiding penalties that may be imposed under the
Internal Revenue Code; nor (B) promoting, marketing or recommending to
another party any transaction or matter addressed herein.
***

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Quota

2007-06-27 Thread Dawid Kuroczko

On 6/27/07, Hannes Dorbath [EMAIL PROTECTED] wrote:

What would be a solution to prevent a single user/schema to fill the
disk using PostgreSQL? Is it a good idea to use separate table spaces?


I am afraid that currently you are stuck with tablespaces as a quota tool.

Of course, having a separate filesystem per user for quota purposes is not
feasible in most circumstances.  I am contemplating using the XFS
filesystem's quota to achieve per-directory quotas.  Basically what you
need is the xfs_quota command.  Here's a manual excerpt about enabling it:

  Enabling project quota on an XFS filesystem (restrict files in
log file directories to only using 1 gigabyte of space).

   # mount -o prjquota /dev/xvm/var /var
   # echo 42:/var/log >> /etc/projects
   # echo logfiles:42 >> /etc/projid
   # xfs_quota -x -c 'projects -c logfiles' /var
   # xfs_quota -x -c 'limit -p bhard=1g logfiles' /var

I haven't used it yet, but it does look promising (other than that,
there's ZFS if you are a Sun shop ;-))


Now, while we are at it, XFS has yet another nice feature: the xfs_fsr
command, which does online filesystem-level defragmentation (for
example as a nightly job).  It mixes nicely with PostgreSQL's
1-GB table files.. :)

  Regards,
 Dawid

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] postgres and webmin

2007-06-27 Thread Danyelle Gragsone

I am sorry, William, but you lost me.  I haven't disabled anything, have I?

On 6/27/07, William Leite Araújo [EMAIL PROTECTED] wrote:

2007/6/27, Danyelle Gragsone [EMAIL PROTECTED]:
 Greetings,

 I am trying to setup postgres and postfix using webmin.  I am
 following this guide:

http://gentoo-wiki.com/HOWTO_Email:_A_Complete_Virtual_System_-_Postfix_to_Postgres

 I received the following in the postgres db section of webmin:
 select * from pg_database order by datname failed : no pg_hba.conf
 entry for host [local], user postgres, database template1, SSL
 off

 I sadly have no clue on how to go by fixing this.  Please help :)  It
 talks about this error in the guide.  But it really does not tell you
 how to fix it specifically.  I don't even see a section in the area
 specified for the authentication they are suggesting.

 Operating systemGentoo Linux
 Webmin version  1.350

 I believe the problem is in this section.
 # local is for Unix domain socket connections only

# DO NOT DISABLE!
# If you change this first entry you will need to make sure that the
# database super user can access the database using some other method.
# Noninteractive access to all databases is required during automatic
# maintenance (autovacuum, daily cronjob, replication, and similar tasks).
#
# Database administrative login by UNIX sockets
local   all postgres  ident sameuser


 host all all 10.2.0.202 255.255.255.255 trust
 # IPv4 local connections:
 hostall all 127.0.0.1/32  trust
 # IPv6 local connections:
 hostall all ::1/128   trust

 I could be wrong.  Please help the newb.

 ---(end of
broadcast)---
 TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq




--
William Leite Araújo
Student of fatherhood - 13th week


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Quota

2007-06-27 Thread Hannes Dorbath

On 27.06.2007 16:58, Dawid Kuroczko wrote:

On 6/27/07, Hannes Dorbath [EMAIL PROTECTED] wrote:

What would be a solution to prevent a single user/schema to fill the
disk using PostgreSQL? Is it a good idea to use separate table spaces?


I am contemplating using XFS filesystem's quota to achieve per-directory quota. 
 Basically what you need is use xfs_quota command.


That is exactly what I have now, I was just wondering if that's a bad idea.


Now, while we are at it, XFS has yet another nice feature: the xfs_fsr
command, which does online filesystem-level defragmentation (for
example as a nightly job).  It mixes nicely with PostgreSQL's
1-GB table files.. :)


Yup, already in my crontab ;)

Thanks.


--
Regards,
Hannes Dorbath

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] COPY to/from non-local file

2007-06-27 Thread Ted Byers
I just checked the manual and it says nothing about being able to use SQL 
COPY to access network files.  
   
  But you have an option that almost works for you.  I am sure others here will 
have some suggestions, but if I were in your place, I'd gather more information 
about where the bottlenecks are in the solution that works, albeit slowly.
   
  To check your performance problem, you may want to add benchmarking code to 
your Ruby program to see where it is spending its time: on getting the data 
from PostgreSQL or writing it to a file.  That will tell you where your efforts 
are best spent.
   
  I don't work with Ruby, but if I were using C++, Java, or Perl, I'd use the 
same strategy: get the data en masse, and then write it out as efficiently as 
possible.  In C++, for example, there is an iostream-based idiom that takes 
one line of code in the application program, and that line is as fast as 
you can make your I/O code without delving deeply into I/O processing and 
developing your own iostream classes.  In both C++ and Java, you have stream 
classes and significant control over how the data is written: one character at 
a time (if you want the process to take forever ;-) or in blocks of whatever 
size you want.  But this involves being willing to develop your own stream 
classes to implement your preferred buffering strategy.  In C++, you can save a 
lot of development time by having template policy classes that control how best 
to optimize disk I/O.  In Perl, you can read a file en masse and then iterate 
through it a line at a time, but for this my preference at present is to use 
C++.  Since I don't know Ruby, I don't know how well it supports these and 
related I/O programming idioms.  If its support is poor, it may pay to use a 
more powerful and flexible language.  If it has outstanding power and 
flexibility for supporting I/O optimization idioms, let me know and I'll 
invest the time to add Ruby to the list of languages in my development 
toolbox.  :-)  I would assume that what I have learned using the languages I 
know can be applied in some fashion to programming in Ruby.
   
  HTH
   
  Ted
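In the spirit of the buffered-I/O advice above, here is a minimal sketch (in Python rather than Ruby, purely as an illustration of the idiom) of collecting rows and writing them in large blocks instead of one line at a time:

```python
def write_rows_buffered(rows, path, block=64 * 1024):
    """Write rows tab-separated, flushing in roughly block-sized chunks
    instead of issuing one write() call per line."""
    buf, size = [], 0
    with open(path, "w") as f:
        for row in rows:
            line = "\t".join(map(str, row)) + "\n"
            buf.append(line)
            size += len(line)
            if size >= block:
                f.write("".join(buf))
                buf, size = [], 0
        f.write("".join(buf))  # flush whatever remains
```

The same idea applies in any language: the per-call overhead is amortized over many lines, which is usually where the line-at-a-time approach loses to COPY.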

Jaime Silvela [EMAIL PROTECTED] wrote:
  I've been looking around for this functionality:
Is it possible to use COPY with a file that doesn't reside in the DB's 
filesystem? I know there is sort of a solution in using COPY from stdin/ 
COPY to stdout, however that depends on calling the psql command, which 
breaks the flow of control of the programs I'm trying to write.

Here's the story: I have a Ruby on Rails application which sits on 
server A, and connects to a Postgres Database running on B. Using the 
regular flow, the rails code writes into various tables, and then 
generates a table that needs to be exported into a file which will be 
used as input into a separate program, a simulator, and kept in the 
filesystem of the Rails server. Using the COPY command would entail 
either of
a) Using COPY to generate the file on server B, then transfer to A - but 
how? ftp? I want to reduce this type of coupling
b) Using COPY TO STDOUT from psql, called in the Rails code with a 
backtick, then gathering the output and filing it. - but this solution 
depends on having psql in the path of the Rails server, reintroducing 
the server credentials, and from a programming point of view is kind of 
ugly.
c) The Postgres driver for Rails tries to give an interface to the COPY 
command using raw_connection, getline and endcopy, but it's quite 
brittle, so I'm avoiding it altogether.

At the moment I'm avoiding those solutions, and instead get the table 
into Rails space with a CURSOR on a SELECT, then simply write the file 
in Rails, one line at a time. However, the tables I'm writing are pretty 
big, and the performance is so much worse than with COPY...

Any suggestions?
Thanks
Jaime



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



[GENERAL] autovacuum not working?

2007-06-27 Thread Tomasz Rakowski
Hi everybody,

 I have a problem with a frequently updated table (around 30,000 records and
1000 updates/minute, a few indexes). 
After a while, all queries against that table become much slower than at the
beginning (the number of records in the table is quite stable all the time).

I can see that autovacuum is executed (select * from pg_stat_all_tables), so
it should update statistics and free unused space.

But after a while, when I ran VACUUM VERBOSE ANALYZE, I saw strange results:

---
INFO:  vacuuming ais.t_ais_position
INFO:  scanned index t_ais_position_pkey to remove 552875 row versions
DETAIL:  CPU 0.00s/0.00u sec elapsed 2.81 sec.
INFO:  scanned index ix_t_ais_position_update_time to remove 552875 row 
versions
DETAIL:  CPU 0.00s/0.00u sec elapsed 144.73 sec.
INFO:  scanned index idx_ais_position to remove 552875 row versions
DETAIL:  CPU 0.00s/0.00u sec elapsed 281.09 sec.
INFO:  t_ais_position: removed 552875 row versions in 8611 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 211.54 sec.
INFO:  index t_ais_position_pkey now contains 30445 row versions in 367 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index ix_t_ais_position_update_time now contains 30445 row versions in 
18524 pages
DETAIL:  0 index row versions were removed.
4789 index pages have been deleted, 4789 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index idx_ais_position now contains 30445 row versions in 35981 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 281.12 sec.
INFO:  t_ais_position: found 552875 removable, 30445 nonremovable row 
versions in 24525 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 1229773 unused item pointers.
24478 pages contain useful free space.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 1112.89 sec.
INFO:  analyzing ais.t_ais_position
INFO:  t_ais_position: scanned 3000 of 24525 pages, containing 3721 live rows 
and 0 dead rows; 3000 rows in sample, 30419 estimated total rows

Total query runtime: 1136688 ms.
--

There are 30445 live rows in the table/indexes, but also 552875(!) rows to be 
removed (in the table and indexes).

I would expect autovacuum to free unused rows much more quickly, so that the 
number of rows to be removed would never get so huge.

It is not surprising now that queries are very slow: 30445 rows are spread
over 24525 pages (after I ran VACUUM FULL they occupied just 400 pages!) and
some indexes are spread over 18524 pages (after I ran REINDEX they occupy
just 120 pages).

Of course, after I ran VACUUM and REINDEX manually, all queries are much
quicker.

Is there a bug in autovacuum, or did I do something wrong?
Should I run VACUUM/REINDEX periodically anyway?

Tomasz



# -
# PostgreSQL configuration file
# -
port = 5432# (change requires restart)
max_connections = 50# (change requires restart)
shared_buffers = 32MB# min 128kB or max_connections*16kB
work_mem = 1MB# min 64kB
maintenance_work_mem = 16MB# min 1MB
max_fsm_pages = 204800# min max_fsm_relations*16, 6 bytes each
vacuum_cost_delay = 150# 0-1000 milliseconds
checkpoint_segments = 30# in logfile segments, min 1, 16MB each
effective_cache_size = 128MB
log_destination = 'stderr'# Valid values are combinations of 
redirect_stderr = on# Enable capturing of stderr into log 
log_line_prefix = '%t '# Special values:
stats_start_collector = on# needed for block or row stats
stats_row_level = on
autovacuum = on# enable autovacuum subprocess?
datestyle = 'iso, mdy'
lc_messages = 'C'# locale for system error message 
lc_monetary = 'C'# locale for monetary formatting
lc_numeric = 'C'# locale for number formatting
lc_time = 'C'# locale for time formatting






Re: [GENERAL] COPY to/from non-local file

2007-06-27 Thread Erik Jones


On Jun 27, 2007, at 9:32 AM, Jaime Silvela wrote:


I've been looking around for this functionality:
Is it possible to use COPY with a file that doesn't reside in the  
DB's filesystem? I know there is sort of a solution in using COPY  
from stdin/ COPY to stdout, however that depends on calling the  
psql command, which breaks the flow of control of the programs I'm  
trying to write.


Here's the story: I have a Ruby on Rails application which sits on  
server A, and connects to a Postgres Database running on B. Using  
the regular flow, the rails code writes into various tables, and  
then generates a table that needs to be exported into a file which  
will be used as input into a separate program, a simulator, and  
kept in the filesystem of the Rails server. Using the COPY command  
would entail either of
a) Using COPY to generate the file on server B, then transfer to A  
- but how? ftp?  I want to reduce this type of coupling
b) Using COPY TO STDOUT from psql, called in the Rails code with a  
backtick, then gathering the output  and filing it. - but this  
solution depends on having psql in the path of the Rails server,  
reintroducing the server credentials, and from a programming point  
of view is kind of ugly.
c) The Postgres driver for Rails tries to give an interface to the  
COPY command using raw_connection, getline and endcopy, but  
it's quite brittle, so I'm avoiding it altogether.


At the moment I'm avoiding those solutions, and instead get the  
table into Rails space with a CURSOR on a SELECT, then simply write  
the file in Rails, one line at a time. However, the tables I'm  
writing are pretty big, and  the performance is so much worse than  
with COPY...


Any suggestions?
Thanks
Jaime


The way we usually handle situations similar to this is to use  
network mounts of directories that are visible from both servers  
using, say, NFS.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] How do you handle shared memory corruption issues? (contrib/xml2)

2007-06-27 Thread CG
Postgresql 8.1

I made a simple modification to contrib/xml2 to include the ability to process 
exslt tags... On the production side, the postmaster crashes when I try to 
process my exslt stylesheet. On my development machine, everything runs without 
crashing. There's a number of differences there... There's little or no load. 
It has far less memory, and its PostgreSQL memory configuration reflects that. 
It has a different processor (P4 versus x86_64 on production). There are other 
differences at the compiler and library level I'm sure. Any of these things 
could be contributing to the crashes.

They are both using the same version of PostgreSQL, compiled with the same 
options. They are both using the same version of libxml2 and libxslt.

Incidentally, I installed the Perl bindings for libxml2 and libxslt. I created a 
plperlu function which uses those libraries to do the xml transformations. It 
blows up in the same way.

So, I suspect that there is an issue with libxslt, but I have no idea how to 
nail it down. Please advise...

I'm up against a deadline, so my main goal is to get this working. Timelines 
and paychecks being as they are, I'm not nearly as concerned about fixing the 
larger problem. Is there a different way to process xml and (e)xslt at the 
database not using contrib/xml2 or XML::LibXSLT ?


   


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] COPY to/from non-local file

2007-06-27 Thread Martijn van Oosterhout
On Wed, Jun 27, 2007 at 10:32:32AM -0400, Jaime Silvela wrote:
 I've been looking around for this functionality:
 Is it possible to use COPY with a file that doesn't reside in the DB's 
 filesystem? I know there is sort of a solution in using COPY from stdin/ 
 COPY to stdout, however that depends on calling the psql command, which 
 breaks the flow of control of the programs I'm trying to write.

Sorry? COPY to stdout/from stdin can be done by normal programs also,
not just psql. Execute the command and then use putline/getline to
transfer the data. Perl's DBI works like that, and so does C, so
probably Ruby as well.
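For completeness, psql's client-side \copy meta-command wraps exactly this COPY TO STDOUT / FROM STDIN flow and reads or writes the file on the machine running psql rather than on the server. A minimal sketch (table and path names are hypothetical):

```sql
-- Runs COPY mytable TO STDOUT on the server, but writes the result
-- to a file on the client machine:
\copy mytable to '/local/path/mytable.dat'

-- The reverse direction reads a client-side file and streams it to
-- the server as COPY ... FROM STDIN:
\copy mytable from '/local/path/mytable.dat'
```

This still depends on having psql available, which is exactly what the original poster wanted to avoid, but it shows the stdin/stdout pipe is the same one a driver's putline/getline exposes.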

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] How do you handle shared memory corruption issues? (contrib/xml2)

2007-06-27 Thread Martin Gainty

Hi CG

looks as if your 64bit box needs 64bit libraries instead of default 32 bit

did you check here
http://dries.studentenweb.org/apt/packages/perl-XML-LibXSLT/info.html

Let us know
M--

This email message and any files transmitted with it contain confidential
information intended only for the person(s) to whom this email message is
addressed.  If you have received this email message in error, please notify
the sender immediately by telephone or email and destroy the original
message without making a copy.  Thank you.

- Original Message - 
From: CG [EMAIL PROTECTED]

To: postgresql listserv pgsql-general@postgresql.org
Sent: Wednesday, June 27, 2007 12:02 PM
Subject: [GENERAL] How do you handle shared memory corruption issues? 
(contrib/xml2)




[quoted text of CG's original message trimmed]




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] autovacuum not working?

2007-06-27 Thread Matthew T. O'Connor

Tomasz Rakowski wrote:
 I have a problem with a frequently updated table (around 30,000 records and 
1000 updates/minute, a few indexes).
After a while, all queries to that table become much slower than at the 
beginning

(the number of records in the table is quite stable all the time).

I can see that autovacuum is executed (select * from pg_stat_all_tables) 
so it should update statistics and free unused space.



Sounds like autovacuum is running, but not often enough to keep up with 
this highly active table.  You may be able to get better results by 
setting table-specific autovacuum thresholds for this table so that it 
gets vacuumed more often.  However, if your table is *very* active, 
autovacuum may not be able to keep up even with the more aggressive 
settings; this is a known problem that is hopefully addressed in 8.3. 
Some people work around it by turning off autovacuum for the highly 
active table and using a cron script to vacuum it every minute or so.
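In the 8.1/8.2 era, those table-specific thresholds are set by inserting a row into the pg_autovacuum system catalog (the column list varies slightly by version, so check `\d pg_autovacuum` first). A sketch with a hypothetical table name and illustrative threshold values:

```sql
-- Hypothetical: vacuum busy_table once dead rows exceed
-- 200 + 5% of the table, instead of the much larger defaults.
-- -1 for the cost settings means "inherit the postgresql.conf value".
INSERT INTO pg_autovacuum
  (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
   anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit)
VALUES
  ('busy_table'::regclass, true, 200, 0.05, 100, 0.02, -1, -1);
```

Note this catalog was replaced by per-table storage parameters in later releases, so this only applies to servers of this vintage.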


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] Checking for a number

2007-06-27 Thread Warren
I need to check if the last two characters of a field are a number. I am 
trying something like this, but it does not work.


substring(TRIM(field8) from '..$') SIMILAR TO '\d\d'

How should I do this?

--
Thanks,

Warren Bell

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Checking for a number

2007-06-27 Thread Lee Keel
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Warren
 Sent: Wednesday, June 27, 2007 12:31 PM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Checking for a number
 
 I need to check if the last two characters of a field are a number. I am
 trying something like this but it does not want to work.
 
 substring(TRIM(field8) from '..$') SIMILAR TO '\d\d'
 
 How should I do this?
 
 --
 Thanks,
 
 Warren Bell
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match

Warren,

Try this...

select substring(trim(both ' ' from field8) from E'\\d{2}$') from TABLENAME
where texticregexeq(trim(both ' ' from field8), E'\\d{2}$')

This will limit the query to only the rows that end in 2 digits as well as
return those digits for you.
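A simpler variant that needs no helper function is the POSIX regex match operator (table and column names are the hypothetical ones from the thread):

```sql
-- ~ is a case-sensitive POSIX regex match; \\d{2}$ anchors two
-- digits at the end of the trimmed value.
SELECT field8
FROM tablename
WHERE trim(field8) ~ E'\\d{2}$';
```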

Hope that helps,
Lee Keel

This email and any files transmitted with it are confidential and intended 
solely for the use of the individual or entity to whom they are addressed. If 
you have received this email in error please notify the sender. This message 
contains confidential information and is intended only for the individual 
named. If you are not the named addressee you should not disseminate, 
distribute or copy this e-mail.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Quota

2007-06-27 Thread Hannes Dorbath

On 27.06.2007 15:18, Albe Laurenz wrote:

Hannes Dorbath wrote:
What would be a solution to prevent a single user/schema to fill the 
disk using PostgreSQL? Is it a good idea to use separate table spaces?


Table spaces are the only solution I can think of.
If you create a table space on a different file system, then a table in
that table space can only fill up that file system.


OK, as my file system implements native per-directory quotas, that sounds 
good. Thanks.



--
Regards,
Hannes Dorbath

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] How do you handle shared memory corruption issues? (contrib/xml2)

2007-06-27 Thread CG
I compiled the libraries from the source tarballs, configured using 
--with-crypt --with-debugger --with-python. It is always possible I missed 
something!

Do you think the RPMs will even be applicable considering I'm not using RedHat 
or Fedora?


- Original Message 
From: Martin Gainty [EMAIL PROTECTED]
To: CG [EMAIL PROTECTED]; postgresql listserv pgsql-general@postgresql.org
Sent: Wednesday, June 27, 2007 12:46:59 PM
Subject: Re: [GENERAL] How do you handle shared memory corruption issues? 
(contrib/xml2)


Hi CG

looks as if your 64bit box needs 64bit libraries instead of default 32 bit

did you check here
http://dries.studentenweb.org/apt/packages/perl-XML-LibXSLT/info.html

Let us know
M--


[quoted text of CG's original message trimmed]


 


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] How do you handle shared memory corruption issues? (contrib/xml2)

2007-06-27 Thread Martin Gainty

yes..go with the specific distro for your OS (either RH or Fedora)

keep us apprised!

Thanks/
M--

- Original Message - 
From: CG [EMAIL PROTECTED]
To: Martin Gainty [EMAIL PROTECTED]; postgresql listserv 
pgsql-general@postgresql.org

Sent: Wednesday, June 27, 2007 2:14 PM
Subject: Re: [GENERAL] How do you handle shared memory corruption issues? 
(contrib/xml2)



I compiled the libraries from the source tarballs I configured 
using --with-crypt --with-debugger --with-python. It is always possible I 
missed something!


Do you think the RPMs will even be applicable considering I'm not using 
RedHat or Fedora?



[earlier quoted messages trimmed]




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] COPY to/from non-local file

2007-06-27 Thread Jaime Silvela
The problem is that the getline/readline interface, which does exist for 
Ruby, doesn't seem to work so well, and anyway operates line by line; I 
would have preferred to just hand a file descriptor and be done with it.


Thanks for your suggestions too, Erik and Ted. Performance-wise the best 
solution may be a network mount as you suggest Erik.


Thanks,
Jaime

Martijn van Oosterhout wrote:

On Wed, Jun 27, 2007 at 10:32:32AM -0400, Jaime Silvela wrote:
  

I've been looking around for this functionality:
Is it possible to use COPY with a file that doesn't reside in the DB's 
filesystem? I know there is sort of a solution in using COPY from stdin/ 
COPY to stdout, however that depends on calling the psql command, which 
breaks the flow of control of the programs I'm trying to write.



Sorry? COPY to stdout/from stdin can be done by normal programs also,
not just psql. Execute the command and then use putline/getline to
transfer the data. Perl's DBI works like that, and so does C, so
probably Ruby as well.

Have a nice day,
  



***
Bear Stearns is not responsible for any recommendation, solicitation,
offer or agreement or any information about any transaction, customer
account or account activity contained in this communication.

Bear Stearns does not provide tax, legal or accounting advice.  You
should consult your own tax, legal and accounting advisors before
engaging in any transaction. In order for Bear Stearns to comply with
Internal Revenue Service Circular 230 (if applicable), you are notified
that any discussion of U.S. federal tax issues contained or referred to
herein is not intended or written to be used, and cannot be used, for
the purpose of:  (A) avoiding penalties that may be imposed under the
Internal Revenue Code; nor (B) promoting, marketing or recommending to
another party any transaction or matter addressed herein.
***

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] timestamp with time zone

2007-06-27 Thread Jasbinder Singh Bali

Hi,
i have a column in my table defined like this:

time_stamp timestamp DEFAULT ('now'::text)::timestamp with time zone

Data gets filled in this column from a Unix tools server.

Example values of this field are:

time_stamp

2007-06-27 14:52:14.760133
2007-06-27 15:06:56.90582

I have the following questions on this field.

1. What is the value after the dot (period) at the end, like 760133 and
90582?
2. How does it convey the time zone?

Also, the reason I'm using time zone is that I have to compare different
values in the tables correctly without any error(s)
based on time zones.

Any kind of help would be greatly appreciated.


Thanks,
~Jas


Re: [GENERAL] COPY to/from non-local file

2007-06-27 Thread Enrico Sirola

Hi,

On 27 Jun 2007, at 20:54, Jaime Silvela wrote:

The problem is that the getline/readline interface, which does  
exist for Ruby, doesn't seem to work so well, and anyway operates  
line by line; I would have preferred to just hand a file descriptor  
and be done with it.


Thanks for your suggestions too, Erik and Ted. Performance-wise the  
best solution may be a network mount as you suggest Erik.


I transfer a few million rows (for around 20 tables) every day from one  
server to another.
The two servers are in two separate networks with different  
security levels, so unfortunately network mounts are not available.  
At the beginning I used to transfer record by record from a Python  
application hosted on the source server, but it took ages. In the end,  
some scripting around ssh proved to be enough: just put the public  
key file in .ssh/authorized_keys of the postgresql system user and  
you are done. You can transfer files without the need for a password,  
and it's perfectly scriptable.
I hope it helps,

Enrico Sirola
[EMAIL PROTECTED]





Re: [GENERAL] timestamp with time zone

2007-06-27 Thread Alvaro Herrera
Jasbinder Singh Bali wrote:
 Hi,
 i have a column in my table defined like this:
 
 time_stamp timestamp DEFAULT ('now'::text)::timestamp with time zone

Note that the column is of type timestamp, which _doesn't_ have a time
zone.  You probably want

time_stamp timestamp with time zone DEFAULT ('now'::text)::timestamp with time 
zone

 1. What is the value after the dot (period) at the end. Like 760133 and
 90582

microseconds: it is the fractional-seconds part of the timestamp
(.760133 s is 760133 µs)

 2. How does it talk about the time zone.

It doesn't because the time zone information is not being stored due to
the datatype issue I mentioned above.

Note: the time zone is not actually stored.  What actually happens is
that the value is rotated to GMT and stored as a GMT value, and then
when you extract it from the database it is rotated to the current
TimeZone for display.  If you need to store what time zone a value is
in you need to store that information in a separate column.
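The rotation-not-storage behavior can be seen directly by changing the session TimeZone (a sketch; the table name is hypothetical):

```sql
-- timestamptz values are normalized to GMT on input and rotated to
-- the session's TimeZone on output; the stored instant never changes.
CREATE TABLE ts_demo (ts timestamp with time zone DEFAULT now());
INSERT INTO ts_demo DEFAULT VALUES;

SET TimeZone = 'UTC';
SELECT ts FROM ts_demo;

SET TimeZone = 'America/New_York';
SELECT ts FROM ts_demo;   -- same instant, displayed in the new zone
```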

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] COPY to/from non-local file

2007-06-27 Thread Martijn van Oosterhout
On Wed, Jun 27, 2007 at 02:54:05PM -0400, Jaime Silvela wrote:
 The problem is that the getline/readline interface, which does exist for 
 Ruby, doesn't seem to work so well, and anyway operates line by line; I 
 would have preferred to just hand a file descriptor and be done with it.

Hrm, in what sense doesn't it work well? Line-by-line means
record-by-record. And writing a function to take an fd and do the work
would be straightforward, or do you mean something else?

Do you have any suggestions for improvement?

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] How do you handle shared memory corruption issues? (contrib/xml2)

2007-06-27 Thread CG
...As we continue through this process, I just want to express my sincere 
thanks for your suggestions heretofore.

I use neither RedHat nor Fedora :( I also didn't look carefully at the RPMs you 
were offering me. They are for XML::LibXSLT, not LibXSLT. Sorry about that!

I originally built XML::LibXSLT using CPAN. For argument's sake, I dropped in 
just the binary from the Fedora Core 6 x86_64 RPM (the rest of the RPM's 
contents according to diff were identical). It blew up with a Floating Point 
Error when I ran 

#!/usr/bin/perl
 use XML::LibXSLT;
 use XML::LibXML;
 my $parser = XML::LibXML->new();
 my $xslt = XML::LibXSLT->new();
 my $source = $parser->parse_file('foo.xml');
 my $style_doc = $parser->parse_file('bar.xsl');
 my $stylesheet = $xslt->parse_stylesheet($style_doc);
 my $results = $stylesheet->transform($source);
 print $stylesheet->output_string($results);

The auto/XML/LibXSLT/LibXSLT.so library compiled on the machine which it is 
running should be sufficient, right?



- Original Message 
From: Martin Gainty [EMAIL PROTECTED]
To: CG [EMAIL PROTECTED]; postgresql listserv pgsql-general@postgresql.org
Sent: Wednesday, June 27, 2007 2:23:37 PM
Subject: Re: [GENERAL] How do you handle shared memory corruption issues? 
(contrib/xml2)


yes..go with the specific distro for your OS (either RH or Fedora)

keep us apprised!

Thanks/
M--

- Original Message - 
From: CG [EMAIL PROTECTED]
To: Martin Gainty [EMAIL PROTECTED]; postgresql listserv 
pgsql-general@postgresql.org
Sent: Wednesday, June 27, 2007 2:14 PM
Subject: Re: [GENERAL] How do you handle shared memory corruption issues? 
(contrib/xml2)


I compiled the libraries from the source tarballs I configured 
using --with-crypt --with-debugger --with-python. It is always possible I 
missed something!

 Do you think the RPMs will even be applicable considering I'm not using 
 RedHat or Fedora?


[earlier quoted messages trimmed]

[GENERAL] Status of pgaccess

2007-06-27 Thread Bret Busby


What is happening (or not happening) with pgaccess?

In upgrading a computer from Debian Linux 3.1 (sarge) to Debian Linux 
4.0 (etch), the package pgaccess is no longer available.


My wife has it installed on her Ubuntu system (the version before the 
current one). However, pgaccess is not functional there, and when I 
searched for the package on the Debian web site, pgaccess is only 
available for Debian 3.1 (also now named "oldstable"), with the version 
number 1:0.98.8.20030520-1. By context, I believe that version works 
with PostgreSQL 7.x and not 8.x, which is apparently why pgaccess 
stopped working after she upgraded PostgreSQL on her system from 7.4 
to 8.1.


Also, that version number of pgaccess appears to indicate that it was 
released on 20 May 2003, which I believe is before the release of 
PostgreSQL 8.x.


On searching on the PostgreSQL web site for pgaccess, the page with the 
results includes the statement Based on your search term, we recommend 
the following links:

* http://www.pgaccess.org/


At that web site is the title "PgAccess (Redux)", so I do not know 
whether it has been renamed "Redux", although searching on that name 
in the Debian packages search engine returns not found for all suites, 
sections, and architectures.


Also, on the web site at http://www.pgaccess.org is stated: "Last stable 
version is 0.98.7, released on 27 January 2001. Read what's new. The 
first new release is expected later in June or July 2002."



So, it appears that that web site has not been updated since 2001 or 
2002. Most of the links on that web site home page, to information about 
pgaccess, are broken, and a stable version appears to have been 
released, after the last stable version that is mentioned on that web 
site.


On the web page at http://pgaccess.projects.postgresql.org/ is stated:
"PgAccess at PgFoundry
nothing here yet"

The Debian web site's page for the pgaccess package, at 
http://packages.qa.debian.org/p/pgaccess.html , links to the page at 
http://packages.qa.debian.org/p/pgaccess/news/20060816T210827Z.html , 
which states that, as at 16 August 2006, the package was removed from 
the Debian "testing" distribution, which was then etch and not yet the 
Debian stable distribution.


Has any work been done on pgaccess, since 2003 (the assumed date of the 
latest Debian release of the application package), and, does a version 
exist, that is compatible with PostgreSQL 8.x?


Thank you in anticipation.

--
Bret Busby
Armadale
West Australia
..

So once you do know what the question actually is,
 you'll know what the answer means.
- Deep Thought,
  Chapter 28 of Book 1 of
  The Hitchhiker's Guide to the Galaxy:
  A Trilogy In Four Parts,
  written by Douglas Adams,
  published by Pan Books, 1992



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] COPY to/from non-local file

2007-06-27 Thread Jaime Silvela
Well, for Ruby there are two Postgres drivers: a native compiled one I 
run on Linux, and a pure-Ruby one that I use on Windows. The PGconn 
object in the Windows version doesn't seem to offer the methods putline, 
readline or endcopy. The Linux version does, but since I do development 
on both Linux and Windows, this is not a solution.

It would have been great to have COPY to/from string functionality. 
I'm sure there's a rationale for not having it. Would it be foolish to 
try to implement it? In my opinion, it would be a cleaner design than 
having to mess with getline/putline or file transmission.
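For what it's worth, a driver that exposes COPY through a file-like interface already gets most of the way to COPY to/from a string, since an in-memory buffer can stand in for the file. A minimal Python sketch, assuming psycopg2 (whose copy_from accepts any file-like object); the helper names and table layout are hypothetical:

```python
import io

def rows_to_copy_buffer(rows):
    # Serialize rows into COPY's default text format:
    # tab-delimited columns, one line per row.
    buf = io.StringIO()
    for row in rows:
        buf.write("\t".join(str(col) for col in row) + "\n")
    buf.seek(0)
    return buf

def copy_rows(conn, table, rows):
    # Hypothetical helper: psycopg2's copy_from reads from any
    # file-like object, so the string buffer above is enough.
    with conn.cursor() as cur:
        cur.copy_from(rows_to_copy_buffer(rows), table)
    conn.commit()

print(rows_to_copy_buffer([(1, "alice"), (2, "bob")]).read(), end="")
```

The same buffer trick works in the other direction with copy_to, writing COPY output into a StringIO instead of a file on disk.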


Thanks also Enrico, your idea is good.


Martijn van Oosterhout wrote:

On Wed, Jun 27, 2007 at 02:54:05PM -0400, Jaime Silvela wrote:
  
The problem is that the getline/readline interface, which does exist for 
Ruby, doesn't seem to work so well, and anyway operates line by line; I 
would have preferred to just hand a file descriptor and be done with it.



Hrm, in what sense doesn't it work well? Line-by-line means
record-by-record. And writing a function to take an fd and do the work
would be straightforward, or do you mean something else?

Do you have any suggestions for improvement?

Have a nice day,
  






Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-27 Thread Alvaro Herrera
Bruce McAlister wrote:

Ok now this is interesting:

 select datname, age(datfrozenxid) from pg_database;
    datname    |    age
 --------------+-----------
  blueface-crm | 441746613

Note this value is 440 million, and you said in your original report that

 autovacuum_freeze_max_age = 200000000

200 million.  So this database is being selected each time because of
this.

However, what should happen is that after the vacuum the age of the
database is decreased after the vacuuming.  What's your
vacuum_freeze_min_age setting?

-- 
Alvaro Herrera http://www.flickr.com/photos/alvherre/
Everything that I think about is more fascinating than the crap in your head.
   (Dogbert's interpretation of blogger philosophy)



Re: [GENERAL] Status of pgaccess

2007-06-27 Thread Joshua D. Drake

Bret Busby wrote:


What is happening (or not happening) with pgaccess?


As far as I know pgaccess has been dead for years. I would suggest 
www.pgadmin.org


Sincerely,

Joshua D. Drake

--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [GENERAL] postgres and webmin

2007-06-27 Thread Andrej Ricnik-Bay

On 6/28/07, Danyelle Gragsone [EMAIL PROTECTED] wrote:

I am sorry William but you lost me.  I haven't disabled anything have I?

Hard to say w/o seeing the WHOLE file.  You're looking in the
wrong section.

What you need to make sure is that the line William posted
*isn't* commented out.


Cheers,
Andrej

--
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm



Re: [GENERAL] timestamp wiht time zone

2007-06-27 Thread Jasbinder Singh Bali

Thanks Alvaro,
Your information proved very handy.
~Jas

On 6/27/07, Alvaro Herrera [EMAIL PROTECTED] wrote:


Jasbinder Singh Bali escribió:
 Hi,
 i have a column in my table defined like this:

 time_stamp timestamp DEFAULT ('now'::text)::timestamp with time zone

Note that the column is of type timestamp, which _doesn't_ have a time
zone.  You probably want

time_stamp timestamp with time zone DEFAULT ('now'::text)::timestamp with
time zone

 1. What is the value after the dot (period) at the end. Like 760133 and
 90582

milliseconds

 2. How does it talk about the time zone.

It doesn't because the time zone information is not being stored due to
the datatype issue I mentioned above.

Note: the time zone is not actually stored.  What actually happens is
that the value is rotated to GMT and stored as a GMT value, and then
when you extract it from the database it is rotated to the current
TimeZone for display.  If you need to store what time zone a value is
in you need to store that information in a separate column.
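That rotation can be modelled roughly in Python (illustrative only; the zone offsets here are arbitrary examples, not anything from the thread):

```python
from datetime import datetime, timezone, timedelta

# A value entered in some local zone (UTC+2 picked arbitrarily)...
entered = datetime(2007, 6, 27, 14, 30, tzinfo=timezone(timedelta(hours=2)))

# ...is rotated to GMT/UTC for storage, which is what timestamptz does:
stored = entered.astimezone(timezone.utc)
print(stored.isoformat())  # 2007-06-27T12:30:00+00:00

# On retrieval the value is rotated to the session's TimeZone for
# display; the original +02:00 offset itself was never kept.
displayed = stored.astimezone(timezone(timedelta(hours=-5)))
assert displayed == stored  # same instant, just rendered differently
```

The final assertion is the key point: the stored and displayed values are the same instant; only the rendering zone differs, and the entry zone is unrecoverable unless stored separately.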

--
Alvaro Herrera
http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] postgres and webmin

2007-06-27 Thread Danyelle Gragsone

BEFORE
# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

#local is for Unix domain socket connections only
host template1  postfix, postfixadmin  all 10.2.0.202 255.255.255.0 trust
# IPv4 local connections:
hostall all 127.0.0.1/32  trust
# IPv6 local connections:
#hostall all ::1/128   trust

AFTER

# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

local is for Unix domain socket connections only
host template1  postfix, postfixadmin  all 10.2.0.202 255.255.255.0 trust
# IPv4 local connections:
hostall all 127.0.0.1/32  trust
# IPv6 local connections:
#hostall all ::1/128   trust

You mean like this?

On 6/27/07, Andrej Ricnik-Bay [EMAIL PROTECTED] wrote:

On 6/28/07, Danyelle Gragsone [EMAIL PROTECTED] wrote:
 I am sorry William but you lost me.  I haven't disabled anything have I?
Hard to say w/o seeing the WHOLE file.  You're looking in the
wrong section.

What you need to make sure is that the line William posted
*isn't* commented out.


Cheers,
Andrej

--
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm





Re: [GENERAL] Checking for a number

2007-06-27 Thread Osvaldo Rosario Kussama

Warren escreveu:
I need to check if the last two characters of a field are a number. I am 
trying something like this but it does not want to work.


substring(TRIM(field8) from '..$') SIMILAR TO '\d\d'

How should I do this?



Try:
SELECT your_field ~ '.*[[:digit:]]{2}$';

Osvaldo
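Outside SQL, the same check is the usual "two digits anchored at the end" pattern; a quick Python equivalent of the regex approach (the function name is mine, not from the thread):

```python
import re

TWO_DIGIT_SUFFIX = re.compile(r"\d{2}$")

def ends_with_two_digits(field):
    # Trim the field first (mirroring TRIM(field8)), then check
    # that it ends in two digit characters.
    return TWO_DIGIT_SUFFIX.search(field.strip()) is not None

print(ends_with_two_digits("ab12"))  # True
print(ends_with_two_digits("ab1x"))  # False
```

Because search with a `$` anchor only inspects the tail of the string, there is no need for the leading `..` or `.*` used in the SQL variants.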




[GENERAL] Error Message accessing configuration file

2007-06-27 Thread Brian J. Erickson
Hello,

I get the following message in the pgstartup.log:
postmaster cannot access the server configuration file
/database/postgresql.conf: Permission denied


I have:
1) read the file by doing the following:
a) su postgres
b) less postgresql.conf
2) made sure each directory on the path to the file is readable by postgres
3) made postgresql.conf and each directory readable by
everyone.
4) verified that the owner and group are postgres.
5) run initdb as postgres

Any ideas as to what could be the problem?

Brian Erickson




Re: [GENERAL] postgres and webmin

2007-06-27 Thread Andrej Ricnik-Bay

On 6/28/07, Danyelle Gragsone [EMAIL PROTECTED] wrote:

BEFORE
# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

#local is for Unix domain socket connections only
host template1  postfix, postfixadmin  all 10.2.0.202 255.255.255.0 trust
# IPv4 local connections:
hostall all 127.0.0.1/32  trust
# IPv6 local connections:
#hostall all ::1/128   trust

AFTER

# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

local is for Unix domain socket connections only
host template1  postfix, postfixadmin  all 10.2.0.202 255.255.255.0 trust
# IPv4 local connections:
hostall all 127.0.0.1/32  trust
# IPv6 local connections:
#hostall all ::1/128   trust

You mean like this?


Firstly I fail to see the difference between the before and after.

Secondly you don't have a line that begins with local. Add the
following line to your file (was the stuff you just posted the entire file?)
and restart postgres.
local   all postgres  ident sameuser

Thirdly you're still top-posting :}


Cheers,
Andrej

--
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm



Re: [GENERAL] Checking for a number

2007-06-27 Thread Richard Broersma Jr

--- Osvaldo Rosario Kussama [EMAIL PROTECTED] wrote:
 Try:
 SELECT your_field ~ '.*[[:digit:]]{2}$';

This could be simplified a little. :o)

WHERE your_field ~ '\\d{2}$';



Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-27 Thread Bruce McAlister
Alvaro Herrera wrote:
 Bruce McAlister wrote:
 
 Ok now this is interesting:
 
  select datname, age(datfrozenxid) from pg_database;
     datname    |    age
  --------------+-----------
   blueface-crm | 441746613
 
 Note this value is 440 million, and you said in your original report that
 
  autovacuum_freeze_max_age = 200000000
 
 200 million.  So this database is being selected each time because of
 this.
 

Ahhh okay, I didn't know how to extract the age for a database. Learnt
something new here.

 However, what should happen is that after the vacuum the age of the
 database is decreased after the vacuuming.  What's your
 vacuum_freeze_min_age setting?
 

My *_freeze_* values are:

autovacuum_freeze_max_age = 200000000
#vacuum_freeze_min_age = 100000000

The vacuum_freeze_min_age is commented out, so it's at the default of
100 million (I assume).

How much is the age decremented by on a vacuum run, then?



Re: [GENERAL] autovacumm not working ?

2007-06-27 Thread Tomasz Rakowski
Matthew,

  Thank you for the reply.

  I was trying to configure autovacuum for the given table to be more aggressive 
(min=100, scale factor=0.01).
  Then I waited for autovacuum to be activated for that table 
  (watching the Server Status window; about 1000 updates/min and 30,000 rows in the 
table, so I didn't have to wait too long).

  When autovacuum was done I started VACUUM VERBOSE manually (to be more 
precise, 6 seconds later).

  I expected VACUUM VERBOSE to show just a few records to be removed 
(maybe around a hundred), but in fact it reported 3200 records to be removed. It 
seems that autovacuum was started (though I expected it to be activated a little 
earlier) but hadn't done anything.

 Any suggestions why? 
Do you know what the difference is between an activated autovacuum and a manual 
VACUUM for a given table?


Tomasz







- Original Message 
From: Matthew T. O'Connor [EMAIL PROTECTED]
To: Tomasz Rakowski [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Wednesday, June 27, 2007 9:50:42 AM
Subject: Re: [GENERAL] autovacumm not working ?

Tomasz Rakowski wrote:
 I have a problem with a frequently updated table (around 30,000 records, 
 1000 updates/minute, a few indexes).
 After a while all queries to that table become much slower than at the 
 beginning
 (the number of records in the table is quite stable all the time).
 
 I can see that autovacuum is executed (select * from pg_stat_all_tables) 
 so it should update statistics and free unused space.


Sounds like autovacuum is running, but not often enough to keep up with 
this highly active table.  You may be able to get better results by 
setting table-specific autovacuum thresholds for this table so that it 
gets vacuumed more often.  However, if your table is *very* active then 
autovacuum may not be able to keep up even with the more aggressive 
settings; this is a known problem which is hopefully addressed in 8.3. 
Some people solve it by turning off autovacuum for the highly active 
table and using a cron script to vacuum the table every minute or so.
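The thresholds mentioned above follow a documented formula: autovacuum vacuums a table once its dead tuples exceed base threshold + scale factor × number of tuples. A sketch of the arithmetic, using the settings from this thread:

```python
def autovacuum_vacuum_threshold(base, scale_factor, reltuples):
    # Documented trigger condition: vacuum the table once
    # dead tuples > base + scale_factor * reltuples.
    return base + scale_factor * reltuples

# Tomasz's aggressive per-table settings on his ~30,000-row table:
print(autovacuum_vacuum_threshold(100, 0.01, 30000))  # 400.0

# For comparison, stock settings of (500, 0.2) on the same table
# would wait for 6500 dead tuples:
print(autovacuum_vacuum_threshold(500, 0.2, 30000))   # 6500.0
```

At ~1000 updates/minute, even the aggressive 400-tuple threshold is crossed within half a minute, which is why a highly active table can outrun the autovacuum daemon's polling interval.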








Re: [GENERAL] postgres and webmin

2007-06-27 Thread Danyelle Gragsone

Here is the entire file.


# PostgreSQL Client Authentication Configuration File
# ===
#
# Refer to the PostgreSQL Administrator's Guide, chapter Client
# Authentication for a complete description.  A short synopsis
# follows.
#
# This file controls: which hosts are allowed to connect, how clients
# are authenticated, which PostgreSQL user names they can use, which
# databases they can access.  Records take one of these forms:
#
# local  DATABASE  USER  METHOD  [OPTION]
# host   DATABASE  USER  CIDR-ADDRESS  METHOD  [OPTION]
# hostsslDATABASE  USER  CIDR-ADDRESS  METHOD  [OPTION]
# hostnossl  DATABASE  USER  CIDR-ADDRESS  METHOD  [OPTION]
#
# (The uppercase items must be replaced by actual values.)
#
# The first field is the connection type: local is a Unix-domain socket,
# host is either a plain or SSL-encrypted TCP/IP socket, hostssl is an
# SSL-encrypted TCP/IP socket, and hostnossl is a plain TCP/IP socket.
#
# DATABASE can be all, sameuser, samegroup, a database name, or
# a comma-separated list thereof.
#
# USER can be all, a user name, a group name prefixed with +, or
# a comma-separated list thereof.  In both the DATABASE and USER fields
# you can also write a file name prefixed with @ to include names from
# a separate file.
#
# CIDR-ADDRESS specifies the set of hosts the record matches.
# It is made up of an IP address and a CIDR mask that is an integer
# (between 0 and 32 (IPv4) or 128 (IPv6) inclusive) that specifies
# the number of significant bits in the mask.  Alternatively, you can write
# an IP address and netmask in separate columns to specify the set of hosts.
#
# METHOD can be trust, reject, md5, crypt, password,
# krb4, krb5, ident, or pam.  Note that password sends passwords
# in clear text; md5 is preferred since it sends encrypted passwords.
#
# OPTION is the ident map or the name of the PAM service, depending on METHOD.
#
# Database and user names containing spaces, commas, quotes and other special
# characters must be quoted. Quoting one of the keywords all, sameuser or
# samegroup makes the name lose its special character, and just match a
# database or username with that name.
#
# This file is read on server startup and when the postmaster receives
# a SIGHUP signal.  If you edit the file on a running system, you have
# to SIGHUP the postmaster for the changes to take effect.  You can use
# pg_ctl reload to do that.

# Put your actual configuration here
# --
#
# If you want to allow non-local connections, you need to add more
# host records. In that case you will also need to make PostgreSQL listen
# on a non-local interface via the listen_addresses configuration parameter,
# or via the -i or -h command line switches.
#

# CAUTION: Configuring the system for local trust authentication allows
# any local user to connect as any PostgreSQL user, including the database
# superuser. If you do not trust all your local users, use another
# authentication method.


# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

#local is for Unix domain socket connections only
host template1  postfix, postfixadmin  all 10.2.0.202 255.255.255.0 trust
# IPv4 local connections:
hostall all 127.0.0.1/32  trust
# IPv6 local connections:
#hostall all ::1/128   trust

Could someone please take a look at this and tell me what I am doing
wrong.  Apparently I have something commented that I shouldn't but I
don't know what that is.  Sorry if I sound like a complete moron.. but
I am trying to learn.

Thanks,
Danyelle



Re: [GENERAL] postgres and webmin

2007-06-27 Thread Andrej Ricnik-Bay

On 6/28/07, Danyelle Gragsone [EMAIL PROTECTED] wrote:

Here is the entire file.



Could someone please take a look at this and tell me what I am doing
wrong.  Apparently I have something commented that I shouldn't but I
don't know what that is.  Sorry if I sound like a complete moron.. but
I am trying to learn.

The line you're missing isn't commented out, it's not there at
all.  Just add it to the file and restart postgres.

local   all postgres  ident sameuser

Those lines that have a local in them in your file are using
IP rather than the unix socket.



Thanks,
Danyelle

Cheers,
Andrej


--
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm



[GENERAL] Execution variability

2007-06-27 Thread Vincenzo Romano
Hi all.
I understand this may be a ridiculous question for most of you.

The very same query on the very same db shows very variable timings.
I'm the only client on an unpopulated server, so I'd expect a
rather constant timing.

Instead, for a while the query becomes very slow: the CPU reaches 
60 to 70% and the time needed is about 1.5 minutes.
Again, with just me on it.
Normally the same query raises the usage to a mere 5% to 7%, with timing
within a tenth of a second.

I have disabled both autovacuum and the stats_start_collector in 
an attempt to rule out possible reasons for the slowdown. No luck.

Do you have any hints?


-- 
Vincenzo Romano
--
Maybe Computer will never become as intelligent as Humans.
For sure they won't ever become so stupid. [VR-1988]



Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-27 Thread Alvaro Herrera
Bruce McAlister wrote:
 Alvaro Herrera wrote:
  Bruce McAlister wrote:
  
  Ok now this is interesting:
  
   select datname, age(datfrozenxid) from pg_database;
      datname    |    age
   --------------+-----------
    blueface-crm | 441746613
  
  Note this value is 440 million, and you said in your original report that
  
  autovacuum_freeze_max_age = 200000000
  
  200 million.  So this database is being selected each time because of
  this.
  
 
 Ahhh okay, I didnt know how to extract the age for a database. Learnt
 something new here.
 
  However, what should happen is that after the vacuum the age of the
  database is decreased after the vacuuming.  What's your
  vacuum_freeze_min_age setting?
 
 My *_freeze_* values are:
 
 autovacuum_freeze_max_age = 200000000
 #vacuum_freeze_min_age = 100000000
 
 The vacuum_freeze_min_age is the default at 100 million (I assume).

What do you get from a SHOW vacuum_freeze_min_age?  That would tell you
what's the actual value in use.  Most likely it's those 100 million but
if you change it, reload, then comment it back in the file and reload
again, the value in use will be the one to which you first changed it.

 How much is the age decremented by on a vacuum run then?

It should be decremented to the vacuum_freeze_min_age.  However, I'm
running some experiments with your settings and apparently it's not
working as it should.

-- 
Alvaro Herrera   http://www.PlanetPostgreSQL.org/
Hay que recordar que la existencia en el cosmos, y particularmente la
elaboración de civilizaciones dentro de él no son, por desgracia,
nada idílicas (Ijon Tichy)



Re: [GENERAL] autovacumm not working ?

2007-06-27 Thread Alvaro Herrera
Tomasz Rakowski wrote:
 Matthew,
 
   Thank you for reply.
 
   I was trying to configure autovacuum for given table to be more aggressive 
 (min=100, scale factor=0.01).
   Then waited for autovacuum to be activated for given table 
   (watching  Server status window; about 1000 updates/min, 30.000 rows
   in the table, so didn't have to wait too long).

Did you reload (pg_ctl reload) after changing the postgresql.conf
settings?  Also note that you can alter values for a specific table by
putting them in the pg_autovacuum table.

-- 
Alvaro Herrera  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
I am amazed at [the pgsql-sql] mailing list for the wonderful support, and
lack of hesitasion in answering a lost soul's question, I just wished the rest
of the mailing list could be like this.   (Fotis)
   (http://archives.postgresql.org/pgsql-sql/2006-06/msg00265.php)



Re: [GENERAL] Checking for a number

2007-06-27 Thread Michael Glaesemann


On Jun 27, 2007, at 16:17 , Richard Broersma Jr wrote:



--- Osvaldo Rosario Kussama [EMAIL PROTECTED] wrote:

Try:
SELECT your_field ~ '.*[[:digit:]]{2}$';


This could be simplified a little. :o)

WHERE your_field ~ '\\d{2}$';


Using dollar-quotes means not having to escape your \d (which I  
always find a bit of a hassle):


WHERE your_field ~ $re$\d{2}$$re$;

It's important to remember to use a tagged dollar quote (e.g., $re$)  
if you're using $ as an anchor.


And with standard_conforming_strings on you don't even need to use  
dollar-quotes:


test=# show standard_conforming_strings;
standard_conforming_strings
-
on
(1 row)

test=# select 'foo33' ~ '\d{2}$';
?column?
--
t
(1 row)

Dollar-quoting gets you around having to worry about what  
standard_conforming_strings is set to, though.


Michael Glaesemann
grzm seespotcode net





Re: [GENERAL] postgres and webmin

2007-06-27 Thread Danyelle Gragsone

Amazing how one little line.. makes everything sad.

Now all I have to do is figure out how to login.  My old username and
password no longer works..

thanks for helping!



Re: [GENERAL] postgres and webmin

2007-06-27 Thread Andrej Ricnik-Bay

On 6/28/07, Danyelle Gragsone [EMAIL PROTECTED] wrote:

Now all I have to do is figure out how to login.  My old username and
password no longer works..

So you were able to connect to the database with webmin
before this change?  In that case you may want a similar
line for your own user account (rather than the postgres
one you just added) and make the authorisation whatever
you used in the host based statement.




thanks for helping!

Cheers,
Andrej

--
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm



Re: [GENERAL] autovacumm not working ?

2007-06-27 Thread Tomasz Rakowski
Alvaro,

   I changed the autovacuum parameters for this specific table in pg_autovacuum:

insert into pg_autovacuum(vacrelid,enabled,vac_base_thresh, 
vac_scale_factor, anl_base_thresh, anl_scale_factor,   
vac_cost_delay, vac_cost_limit, 
freeze_min_age, freeze_max_age) 
values ( (select oid from pg_class where relname='t_ais_position'), True, 
100, 0.01 , 100, 0.02, -1, -1, -1, -1 )

Should I somehow let the autovacuum daemon know about the new table 
configuration, or is the insert above enough?

Tomasz







- Original Message 
From: Alvaro Herrera [EMAIL PROTECTED]
To: Tomasz Rakowski [EMAIL PROTECTED]
Cc: Matthew T. O'Connor [EMAIL PROTECTED]; pgsql-general@postgresql.org
Sent: Wednesday, June 27, 2007 2:50:40 PM
Subject: Re: [GENERAL] autovacumm not working ?

Tomasz Rakowski wrote:
 Matthew,
 
   Thank you for reply.
 
   I was trying to configure autovacuum for given table to be more aggressive 
 (min=100, scale factor=0.01).
   Then waited for autovacuum to be activated for given table 
   (watching  Server status window; about 1000 updates/min, 30.000 rows
   in the table, so didn't have to wait too long).

Did you reload (pg_ctl reload) after changing the postgresql.conf
settings?  Also note that you can alter values for a specific table by
putting them in the pg_autovacuum table.

-- 
Alvaro Herrera  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
I am amazed at [the pgsql-sql] mailing list for the wonderful support, and
lack of hesitasion in answering a lost soul's question, I just wished the rest
of the mailing list could be like this.   (Fotis)
   (http://archives.postgresql.org/pgsql-sql/2006-06/msg00265.php)








Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-27 Thread Alvaro Herrera
Alvaro Herrera wrote:

  How much is the age decremented by on a vacuum run then?
 
 It should be decremented to the vacuum_freeze_min_age.  However, I'm
 running some experiments with your settings and apparently it's not
 working as it should.

Nah, false alarm; it's working as expected for me, and I see the age of
databases being correctly decreased to the freeze min age (plus however
many transactions it took to do the vacuuming work).  So I'm still at a
loss as to why it is failing to advance the datfrozenxid of your database.
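The expected behaviour described above can be sketched as simple arithmetic (a rough model, not PostgreSQL's actual code): after a database-wide vacuum, the reported age should fall to roughly vacuum_freeze_min_age plus the transactions consumed during the vacuum itself.

```python
def expected_age_after_vacuum(age_before, freeze_min_age, xids_during_vacuum=0):
    # VACUUM freezes tuples older than freeze_min_age, so datfrozenxid
    # advances and age(datfrozenxid) drops back to about that floor.
    return min(age_before, freeze_min_age + xids_during_vacuum)

# The numbers from this thread: age 441746613, default min age 100 million.
print(expected_age_after_vacuum(441746613, 100000000))  # 100000000
```

On Bruce's system the observed age stayed above autovacuum_freeze_max_age instead of dropping to this floor, which is exactly why the same database kept being selected on every autovacuum cycle.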

Please let me have a look at this query result while connected to that
database:

select relname, relfrozenxid from pg_class where relkind in ('r', 't');

You can change the relname to oid if showing the table names is
problematic for you.

-- 
Alvaro Herrera  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
Al principio era UNIX, y UNIX habló y dijo: Hello world\n.
No dijo Hello New Jersey\n, ni Hello USA\n.



Re: [GENERAL] autovacumm not working ?

2007-06-27 Thread Alvaro Herrera
Tomasz Rakowski wrote:
 Alvaro,
 
   I changed autovacuum parametrs for this specific table in pg_autovacuum
 
 insert into pg_autovacuum(vacrelid,enabled,vac_base_thresh, 
 vac_scale_factor, anl_base_thresh, anl_scale_factor,   
 vac_cost_delay, vac_cost_limit, 
 freeze_min_age, freeze_max_age) 
 values ( (select oid from pg_class where relname='t_ais_position'), True, 
 100, 0.01 , 100, 0.02, -1, -1, -1, -1 )
 
 Should I somehow let autovacuum deamon know about new table
 configuration or above insert is enough ?

The insert should be enough.  You do see the autovacuum process starting
on that database, right?

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] autovacumm not working ?

2007-06-27 Thread Tomasz Rakowski
Alvaro, 

   I see the autovacuum process starting on average every 5 minutes 
(the table contains 30,000 records, and the update rate is about 1000 records/min).
 
 What is strange is that the number of pages allocated to the table is 
constant (at least for this hour: 500), while the number of pages allocated to 
the indexes is constantly growing (109 to 145, 92 to 250!!!, 194 to 256).

But as I stated in my first post, after a while it gets worse and worse (thousands 
of allocated pages for the same number of records).

Tomasz




- Original Message 
From: Alvaro Herrera [EMAIL PROTECTED]
To: Tomasz Rakowski [EMAIL PROTECTED]
Cc: Matthew T. O'Connor [EMAIL PROTECTED]; pgsql-general@postgresql.org
Sent: Wednesday, June 27, 2007 3:54:14 PM
Subject: Re: [GENERAL] autovacumm not working ?

Tomasz Rakowski wrote:
 Alvaro,
 
   I changed autovacuum parametrs for this specific table in pg_autovacuum
 
 insert into pg_autovacuum(vacrelid,enabled,vac_base_thresh, 
 vac_scale_factor, anl_base_thresh, anl_scale_factor,   
 vac_cost_delay, vac_cost_limit, 
 freeze_min_age, freeze_max_age) 
 values ( (select oid from pg_class where relname='t_ais_position'), True, 
 100, 0.01 , 100, 0.02, -1, -1, -1, -1 )
 
 Should I somehow let autovacuum deamon know about new table
 configuration or above insert is enough ?

The insert should be enough.  You do see the autovacuum process starting
on that database, right?

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support








Re: [GENERAL] postgres and webmin

2007-06-27 Thread Danyelle Gragsone

Yes,

That was the odd thing about it.  All I did was add the ip address.  I
will have to change my postgres password cause I can't remember it.

bleh..



[GENERAL] Column Default Clause and User Defined Functions

2007-06-27 Thread Keary Suska
Is it possible to have a user-defined function (a plpgsql function) as the
argument to a default clause that issues SELECTs on other tables?

Thanks,

Keary Suska
Esoteritech, Inc.
Demystifying technology for your home or business





[GENERAL] Image Archiving with postgres

2007-06-27 Thread Eddy D. Sanchez

Hello Everyone.

I want to scan a large quantity of books and documents and store 
them as images. I want to use Postgres. Does anyone have experience with 
this kind of system? Can you suggest an open-source solution?


Thanks.
 




Re: [GENERAL] Column Default Clause and User Defined Functions

2007-06-27 Thread Michael Glaesemann


On Jun 27, 2007, at 18:18 , Keary Suska wrote:

Is it possible to have a user-defined function (a plpgsql function) as the
argument to a default clause that issues SELECTs on other tables?


Not according to the documentation:

http://www.postgresql.org/docs/8.2/interactive/sql-createtable.html


DEFAULT default_expr

The DEFAULT clause assigns a default data value for the column  
whose column definition it appears within. The value is any  
variable-free expression (subqueries and cross-references to other  
columns in the current table are not allowed). The data type of the  
default expression must match the data type of the column.


The default expression will be used in any insert operation  
that does not specify a value for the column. If there is no  
default for a column, then the default is null.



Michael Glaesemann
grzm seespotcode net





Re: [GENERAL] postgres and webmin

2007-06-27 Thread Danyelle Gragsone

Yay, it's fixed.  How do I mark the topic as solved?

Danyelle



Re: [GENERAL] varchar(n) VS text

2007-06-27 Thread Paul Lambert

Tom Lane wrote:

Pierre Thibaudeau [EMAIL PROTECTED] writes:

I am puzzling over this issue:



1) Is there ever ANY reason to prefer varchar(n) to text as a column type?


In words of one syllable: no.

Not unless you have an application requirement for a specific maximum
length limit (eg, your client code will crash if fed a string longer
than 256 bytes, or there's a genuine data-validity constraint that you
can enforce this way).

Or if you want to have schema-level portability to some other DB that
understands varchar(N) but not text.  (varchar(N) is SQL-standard,
while text isn't, so I'm sure there are some such out there.)


From my reading of the datatype documentation, the ONLY reason I can
think of for using varchar(n) would be in order to add an extra
data-type constraint to the column.


That is *exactly* what it does.  No more and no less.  There's no
performance advantage, in fact you can expect to lose a few cycles
to the constraint check.
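To make the constraint-check point concrete, a minimal sketch (table and values are made up for illustration):

```sql
-- varchar(n) is just text plus a length check -- no performance or
-- storage benefit, only the extra validation on insert/update.
CREATE TABLE t_limited (code varchar(3));

INSERT INTO t_limited VALUES ('abc');   -- fits, succeeds
INSERT INTO t_limited VALUES ('abcd');  -- fails the length check:
-- ERROR:  value too long for type character varying(3)
```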

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/




Is there any disk space advantages to using varchar over text? Or will a 
text field only ever use up as much data as it needs.


I have a database where pretty much all text-type fields are created as 
varchars - I inherited this db from an MS SQL server and left them as 
varchar when I converted the database over to PG. My thoughts were text 
being a non-constrained data type may use up more disk space than a 
varchar and if I know there will never be more than 3 characters in the 
field for example, I could save some space by only creating a 3 length 
field.


In my case, any field length restrictions are governed by the 
application so I don't really need the constraint built into the back 
end. If there is a slight performance disadvantage to using varchar and 
no real disk space saving - and I have in some cases 40 or 50 of these 
fields in a table - then would it be better for me to convert these 
fields to text?


Not to mention that I run into a problem occasionally where inputting a 
string that contains an apostrophe - PG behaves differently if it is a 
varchar than if it is a text type and my app occasionally fails.


I.e.
insert into tester (test_varchar) values ('abc''test');
I get the following:
ERROR: array value must start with "{" or dimension information
SQL state: 22P02

If I use the same command but inserting into a text-type field.
insert into tester (test_text) values ('abc''test');
It works fine.

But that's beside the point - my question is should I convert everything 
to text fields and, if so, is there any easy way of writing a script to 
change all varchar fields to text?


--
Paul Lambert
Database Administrator
AutoLedgers


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Error Message accessing configuration file

2007-06-27 Thread Tom Lane
Brian J. Erickson [EMAIL PROTECTED] writes:
 I get the following message in the pgstartup.log:
 postmaster cannot access the server configuration file
 /database/postgresql.conf: Permission denied

Are you trying to change the standard config file location on a
Red Hat or Fedora system?  If so you're likely to run into
SELinux restrictions on where the postmaster daemon can touch
the filesystem.  You'd need to modify the SELinux policy (or
turn off SELinux but I don't really recommend that) to use a
nonstandard config file together with the standard postgres package.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Better way to handle functions doing inserts into dynamically named tables?

2007-06-27 Thread Steve Wormley

So, I wrote myself a trigger function that, when called, will
dynamically create new partition tables and rules so that the first
new record for each partition creates the partition. The only bit
where I wanted to see if there was a better solution is the actual
insert into the new table at the end of the function. I tried just
reinserting into the main table (meta_data_part), but because of the
way functions are handled it doesn't hit the newly created rules. The
interesting bit is labeled 'the interesting bit'.

(I've included the whole function in case it's useful to anyone.)

CREATE OR REPLACE FUNCTION meta_data_pitf() RETURNS trigger AS $pitf$
   DECLARE
  month_data varchar;
  som timestamptz;
  eom timestamptz;
  rowdata text;

   BEGIN
-- determine the month
  month_data := to_char(NEW.data_time,'_MM');
  som := date_trunc('month',NEW.data_time);
  eom := date_trunc('month',NEW.data_time + '1 month'::interval);

  BEGIN --exception block
-- create the table
  EXECUTE $tc$CREATE TABLE p_md.md_$tc$||month_data||$tc$
  ( CHECK ( data_time >= '$tc$||som||$tc$' AND data_time < '$tc$||eom||$tc$' )
  ) INHERITS (meta_data_part) ;
  $tc$;

-- create the insert rule
  EXECUTE $rc$CREATE OR REPLACE RULE meta_data_pir_$rc$||month_data||$rc$ AS
  ON INSERT TO meta_data_part WHERE
  ( data_time >= '$rc$||som||$rc$' AND data_time < '$rc$||eom||$rc$' )
  DO INSTEAD INSERT INTO p_md.md_$rc$||month_data||$rc$
 VALUES (NEW.*);
 $rc$;
 EXCEPTION WHEN duplicate_table THEN
 -- dont care
 END;

 SELECT NEW INTO rowdata;

-- now the interesting bit
  EXECUTE $ins$INSERT INTO p_md.md_$ins$||month_data||$ins$
  SELECT ($ins$||quote_literal(rowdata)||$ins$::meta_data_part).* ; $ins$;

-- skip the next insert... maybe
RETURN NULL;

   END;
$pitf$ LANGUAGE plpgsql;


Thanks,
-Steve

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] varchar(n) VS text

2007-06-27 Thread Michael Glaesemann


On Jun 27, 2007, at 19:38 , Paul Lambert wrote:


Is there any disk space advantages to using varchar over text?


No.


Or will a text field only ever use up as much data as it needs.


Yes.

From http://www.postgresql.org/docs/8.2/interactive/datatype-character.html


The storage requirement for data of these types is 4 bytes plus the  
actual string, and in case of character plus the padding. Long  
strings are compressed by the system automatically, so the physical  
requirement on disk may be less. Long values are also stored in  
background tables so they do not interfere with rapid access to the  
shorter column values. In any case, the longest possible character  
string that can be stored is about 1 GB. (The maximum value that  
will be allowed for n in the data type declaration is less than  
that. It wouldn't be very useful to change this because with  
multibyte character encodings the number of characters and bytes  
can be quite different anyway. If you desire to store long strings  
with no specific upper limit, use text or character varying without  
a length specifier, rather than making up an arbitrary length limit.)


Tip: There are no performance differences between these three  
types, apart from the increased storage size when using the blank- 
padded type. While character(n) has performance advantages in some  
other database systems, it has no such advantages in PostgreSQL. In  
most situations text or character varying should be used instead.
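One way to convince yourself of the no-storage-difference claim (a hedged sketch; pg_column_size() is available in 8.1 and later):

```sql
-- Both expressions should report the same size: the same header
-- plus the same string bytes, regardless of text vs. varchar.
SELECT pg_column_size('abc'::text)    AS text_size,
       pg_column_size('abc'::varchar) AS varchar_size;
```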




 then would it be better for me to convert these fields to text?


Probably not. See above.

Not to mention that I run into a problem occasionally where  
inputting a string that contains an apostrophe - PG behaves  
differently if it is a varchar than if it is a text type and my app  
occasionally fails.


I.e.
insert into tester (test_varchar) values ('abc''test');
I get the following:
ERROR: array value must start with "{" or dimension information
SQL state: 22P02


Works for me:

test=# select version();

version
 
--
PostgreSQL 8.2.4 on powerpc-apple-darwin8.9.0, compiled by GCC  
powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc.  
build 5367)

(1 row)

test=# create table tester (test_varchar varchar primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index  
tester_pkey for table tester

CREATE TABLE
test=# insert into tester (test_varchar) values ('abc''test');
INSERT 0 1
test=# select * from tester;
test_varchar
--
abc'test
(1 row)

But that's beside the point - my question is should I convert  
everything to text fields and, if so, is there any easy way of  
writing a script to change all varchar fields to text?


It's probably not worth the effort, but if you're interested you  
could query the system catalogs for varchar columns and write a  
script that would update them for you.
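A possible starting point for such a script (an untested sketch; it only generates the ALTER statements from information_schema for review, it does not run them):

```sql
-- List every varchar column outside the system schemas and emit the
-- corresponding ALTER ... TYPE text statement.
SELECT 'ALTER TABLE ' || table_schema || '.' || table_name
       || ' ALTER COLUMN ' || column_name || ' TYPE text;'
FROM information_schema.columns
WHERE data_type = 'character varying'
  AND table_schema NOT IN ('pg_catalog', 'information_schema');
```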


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] varchar(n) VS text

2007-06-27 Thread Alvaro Herrera
Paul Lambert wrote:

 Is there any disk space advantages to using varchar over text?  Or will a 
 text field only ever use up as much data as it needs.

1. no 2. yes.

-- 
Alvaro Herrera   Valdivia, Chile   ICBM: S 39º 49' 18.1, W 73º 13' 56.4
Hackers share the surgeon's secret pleasure in poking about in gross innards,
the teenager's secret pleasure in popping zits. (Paul Graham)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] varchar(n) VS text

2007-06-27 Thread Paul Lambert

Michael Glaesemann wrote:

Works for me:

test=# select version();
   version
-- 

PostgreSQL 8.2.4 on powerpc-apple-darwin8.9.0, compiled by GCC 
powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 
5367)

(1 row)

test=# create table tester (test_varchar varchar primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 
tester_pkey for table tester

CREATE TABLE
test=# insert into tester (test_varchar) values ('abc''test');
INSERT 0 1
test=# select * from tester;
test_varchar
--
abc'test
(1 row)

Michael Glaesemann
grzm seespotcode net


Looks like my bad - I created the table initially through pgAdminIII and 
 it appears I selected the wrong character varying from the dropdown list.


CREATE TABLE tester
(
  test_varchar character varying[],
  test_text text
)

If I change it to character varying(20) it works fine.

Apologies for that.

Thanks for the other info though.

--
Paul Lambert
Database Administrator
AutoLedgers

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] varchar(n) VS text

2007-06-27 Thread Alvaro Herrera
Paul Lambert wrote:

 Looks like my bad - I created the table initially through pgAdminIII and 
  it appears I selected the wrong character varying from the dropdown list.
 
 CREATE TABLE tester
 (
   test_varchar character varying[],
   test_text text
 )
 
 If I change it to character varying(20) it works fine.

Yeah, what you chose is an array of varchar.
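For anyone else hitting this, the mix-up is easy to reproduce (a small sketch): a varchar[] column expects an array literal, so a plain string fails to parse:

```sql
CREATE TABLE tester_arr (test_varchar character varying[]);

-- A plain string is not a valid array literal:
INSERT INTO tester_arr VALUES ('abc''test');
-- ERROR:  array value must start with "{" or dimension information

-- Wrapping it in braces makes it a one-element array instead:
INSERT INTO tester_arr VALUES ('{abc''test}');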

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] autovacumm not working ?

2007-06-27 Thread Matthew O'Connor

Tomasz Rakowski wrote:

   I see the autovacuum process starting on average every 5 minutes
(the table contains 30,000 records, and the update rate is about 1000 records/min).
 
 But what is strange is that the number of pages allocated to the table 
is constant (at least for this hour: 500)
while the number of pages allocated to the indexes is constantly growing 
(109 -> 145, 92 -> 250!!!, 194 -> 256)


But as I stated in my first post, after a while it gets worse and worse  
(thousands of allocated pages for the same number of records...)


You might want to reduce the autovacuum naptime.  Every 5 min may not be 
frequent enough for you.  Also, you might want to make sure your FSM 
settings are high enough.
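For reference, the knobs mentioned above live in postgresql.conf; the values here are illustrative, not recommendations:

```
# Check for vacuum work more often than the 8.x default of 60s:
autovacuum_naptime = 30          # seconds between autovacuum cycles

# Make sure the free space map can track all reusable pages,
# otherwise freed table/index space cannot be recycled:
max_fsm_pages = 200000
max_fsm_relations = 1000
```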




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] Pg7.4.x plpgsql trigger execution order bug

2007-06-27 Thread Luki Rustianto

Hi All,

I've found that on pg7.4.x a plpgsql trigger will not wait for one
command to finish before executing the next line, especially if
that command fires another trigger on another table. This seems very
wrong to me. This only happens on pg7. To check:
(create master and detail table, trigger in master table is firing
trigger on detail table)


CREATE OR REPLACE FUNCTION public.master_tr_func () RETURNS trigger AS'
BEGIN
 RAISE WARNING ''First in Master'';
 INSERT INTO detail values (NEW.mid + 1);
 RAISE WARNING ''Last in Master'';
 RETURN NEW;
END;
'LANGUAGE 'plpgsql';


CREATE OR REPLACE FUNCTION public.detail_tr_func () RETURNS trigger AS'
DECLARE
  testah numeric;
BEGIN

 RAISE warning ''BEGIN: INSIDE detail'';
 RAISE warning ''END: INSIDE detail'';
 RETURN NEW;
END;
'LANGUAGE 'plpgsql';

CREATE TABLE public.master (
 mid INTEGER
) WITH OIDS;

CREATE TRIGGER master_tr AFTER INSERT OR UPDATE
ON public.master FOR EACH ROW
EXECUTE PROCEDURE public.master_tr_func();


CREATE TABLE public.detail (
 did INTEGER
) WITH OIDS;

CREATE TRIGGER detail_tr AFTER INSERT OR UPDATE
ON public.detail FOR EACH ROW
EXECUTE PROCEDURE public.detail_tr_func();


now we test:

lukitest=# select version();
   version

PostgreSQL 7.4.17 on i486-pc-linux-gnu, compiled by GCC cc (GCC)
4.1.2 20061115 (prerelease) (Debian 4.1.1-21)
(1 row)

lukitest=# insert into master values (1);
WARNING:  First in Master
WARNING:  Last in Master
WARNING:  BEGIN: INSIDE detail
WARNING:  END: INSIDE detail
INSERT 410367 1


See? it should be (on pg8):


luki=# select version();
 version
---
PostgreSQL 8.1.8 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)
3.3.4 (Debian 1:3.3.4-3)
(1 row)
luki=# insert into master values (1);
WARNING:  First in Master
WARNING:  BEGIN: INSIDE detail
CONTEXT:  SQL statement INSERT INTO detail values ( $1  + 1)
PL/pgSQL function master_tr_func line 3 at SQL statement
WARNING:  END: INSIDE detail
CONTEXT:  SQL statement INSERT INTO detail values ( $1  + 1)
PL/pgSQL function master_tr_func line 3 at SQL statement
WARNING:  Last in Master
INSERT 426015 1

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Column Default Clause and User Defined Functions

2007-06-27 Thread Tom Lane
Michael Glaesemann [EMAIL PROTECTED] writes:
 On Jun 27, 2007, at 18:18 , Keary Suska wrote:
 Is it possible to have a user-defined function (a plpqsql function)  
 as the
 argument to a default clause that issues SELECTs on other tables?

 Not according to the documentation:
 http://www.postgresql.org/docs/8.2/interactive/sql-createtable.html

What those docs say is that you can't have a naked sub-SELECT in the
DEFAULT expression.  This is to some extent an implementation limitation
--- we don't do planning on DEFAULT expressions.  You can definitely get
around it by hiding the sub-SELECT in a function.

Whether that is a good idea is another question entirely ... it seems
a bit questionable, but on the other hand time-varying defaults like
default now() have time-honored usefulness, so I'm not quite sure
why I feel uncomfortable with it.
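A sketch of the workaround described above (table, column, and function names are made up for illustration): hide the sub-SELECT inside a function and use that function in the DEFAULT clause:

```sql
-- The naked sub-SELECT would be rejected in DEFAULT, but a function
-- wrapping it is accepted -- the DEFAULT expression itself stays
-- variable-free.
CREATE FUNCTION current_rate() RETURNS numeric AS $$
  SELECT rate FROM exchange_rates ORDER BY effective_at DESC LIMIT 1;
$$ LANGUAGE sql;

CREATE TABLE invoices (
  id   serial PRIMARY KEY,
  rate numeric DEFAULT current_rate()
);
```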

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Pg7.4.x plpgsql trigger execution order bug

2007-06-27 Thread Tom Lane
Luki Rustianto [EMAIL PROTECTED] writes:
 I've found on pg7.4.x that plpgsql trigger will not wait for finishing
 one line of command before executing to the next line, specially if
 that command is firing another trigger on another table. This is very
 wrong on my point of view. This only happen on pg7.

To quote the 8.0 release notes:

: Observe the following incompatibilities:
: ...
: Nondeferred AFTER triggers are now fired immediately after completion of
: the triggering query, rather than upon finishing the current interactive
: command. This makes a difference when the triggering query occurred
: within a function: the trigger is invoked before the function proceeds
: to its next operation.

This change is not going to be back-patched into 7.x, because it would
break applications that depended on the old behavior.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match