Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Amitabh Kant
On Sun, Nov 20, 2011 at 1:12 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:

 snip

 Another idea.

 I use CSF/LFD firewall.

 For TCP_IN, I have enabled port 6432.

 Do I also need to enable it elsewhere, such as TCP_OUT or UDP_IN etc?


Could you just try disabling the firewall for once?

Amitabh


Re: [GENERAL] Installed. Now what?

2011-11-20 Thread John R Pierce

On 11/19/11 11:42 PM, Phoenix Kiula wrote:

I use CSF/LFD firewall.

For TCP_IN, I have enabled port 6432.

Do I also need to enable it elsewhere, such as TCP_OUT or UDP_IN etc?


does this firewall block localhost at all?  many don't.  (I'm not at all 
familiar with this CSF/LFD thing)


if you enable a port for TCP_IN, does it automatically allow replies back?

postgres uses no UDP.



--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




Re: [GENERAL] How to install latest stable postgresql on Debian

2011-11-20 Thread Andrus

Scott,

Thank you.


Just add them to the bottom then run
sudo sysctl -p
to make them take effect.


It seems that if this value is greater than RAM, the Linux kernel silently
uses the maximum possible value.
These Linux machines are used mostly to run PostgreSQL only, but RAM may vary
depending on the virtual machine configuration at runtime.
Is it reasonable to always use a large value, e.g. 8GB, as SHMMAX in the
sysctl.conf file?
In that case root and SSH access to the server is not required if the RAM
amount changes. This simplifies server administration. Only postgresql.conf
needs to be changed, which can be done over port 5432 using pgAdmin.
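
A minimal sysctl.conf sketch of what such a setting could look like (the 8GB
figure and the page-size arithmetic are assumptions; adjust to your machines):

  # /etc/sysctl.conf -- shared memory limits for PostgreSQL (example values)
  kernel.shmmax = 8589934592    # 8 GB in bytes: max size of one segment
  kernel.shmall = 2097152       # total shared memory, in 4 kB pages (8 GB / 4096)

After editing, "sudo sysctl -p" (as noted above) applies the values.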



btw, the standard way to control rc stuff is update-rc.d Odd name but
it's pretty easy, just look up the man page.


Debian seems to require update-rc.d and CentOS chkconfig.
How can I use a single command for every distro?

/etc/init.d/postgresql start works in all distros. Does adding postgresql to
startup really require different commands in different distros?!
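
For reference, a sketch of the two distro-specific variants (assuming the init
script is installed as /etc/init.d/postgresql):

  # Debian/Ubuntu
  sudo update-rc.d postgresql defaults

  # CentOS/RHEL
  sudo chkconfig --add postgresql
  sudo chkconfig postgresql on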


Andrus 





Re: [GENERAL] invalid byte sequence for encoding UTF8: 0x00

2011-11-20 Thread hubert depesz lubaczewski
On Sat, Nov 19, 2011 at 09:32:12AM -0800, pawel_kukawski wrote:
 Is there any way I can store the NULL character (\u0000) in a string?
 
 Or is the only option I have to change every text field to bytea?

The correct question is: why do you want to store \u0000 in a text field?
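
If the answer is "the data is really binary", a minimal bytea sketch (table
and values are hypothetical):

  -- text columns cannot contain 0x00; raw bytes can go in bytea instead
  CREATE TABLE blobs (id serial PRIMARY KEY, data bytea);
  INSERT INTO blobs (data) VALUES (decode('48006921', 'hex'));  -- includes a NUL byte
  SELECT encode(data, 'hex') FROM blobs;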

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/



Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Phoenix Kiula
On Sun, Nov 20, 2011 at 4:49 PM, John R Pierce pie...@hogranch.com wrote:
 On 11/19/11 11:42 PM, Phoenix Kiula wrote:

 does this firewall block localhost at all?  many don't.  (I'm not at all
 familiar with this CSF/LFD thing)

 if you enable a port for TCP_IN, does it automatically allow replies back?

 postgres uses no UDP.



The firewall is set to:

1. Ignore the process pgbouncer (in fact the entire directory in which
pgbouncer sits)

2. Allow 127.0.0.1 for everything, no limitations

3. Yes, it can allow replies back (the same settings work with
postgresql, should pgbouncer be any different?)

I tried disabling the firewall completely. Same thing -- pgbouncer
still does not work.

It's not the firewall. It isn't blocking anything. Nothing in the logs
related to pgbouncer. I merely mentioned it as a step.



[GENERAL] Adding 1 week to a timestamp, which can be NULL or expired

2011-11-20 Thread Alexander Farber
Hello,

I'm trying to add 1 week of VIP status to all users in a table:

   update pref_users set vip = max(vip, now()) + interval '1 week';

but max() doesn't work with timestamps.

Is there maybe still a way to solve it with a one-liner?

Thank you
Alex



Re: [GENERAL] Adding 1 week to a timestamp, which can be NULL or expired

2011-11-20 Thread John R Pierce

On 11/20/11 2:32 AM, Alexander Farber wrote:

update pref_users set vip = max(vip, now()) + interval '1 week';

but max() doesn't work with timestamps.


max works fine with timestamps... however, it's a one-argument aggregate
function that operates over a set of rows, not a two-argument scalar function.


you perhaps want GREATEST(val1,val2)

update pref_users set vip = greatest(vip, now()) + interval '1 week';
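
Note that this also covers the NULL case from the subject line: in PostgreSQL,
GREATEST ignores NULL arguments and returns NULL only if every argument is
NULL, so a never-set vip simply becomes now() + 1 week. A quick check:

  SELECT greatest(NULL::timestamp, now()) + interval '1 week';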



--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Marko Kreen
On Sun, Nov 20, 2011 at 8:32 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
 The password I am entering in the terminal is right for sure. I've
 tried it a few times, checked the caps lock, etc. Also, if the log
 carries this "FATAL password authentication failed", why does the
 terminal give the vague error "no working server connection"?

"no working server connection" means that the client logged into pgbouncer
successfully, but pgbouncer cannot log into the server.

Please look into the Postgres log file for details.

If you see no failures there, you have a wrong connect string in pgbouncer.ini.
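
A minimal pgbouncer.ini sketch of the relevant pieces (database name, user and
paths are assumptions, not the poster's actual config):

  [databases]
  mydb = host=127.0.0.1 port=5432 dbname=mydb user=myuser password=mypass

  [pgbouncer]
  listen_addr = 127.0.0.1
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt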

-- 
marko



Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Scott Mead
On Sun, Nov 20, 2011 at 6:21 AM, Marko Kreen mark...@gmail.com wrote:

 On Sun, Nov 20, 2011 at 8:32 AM, Phoenix Kiula phoenix.ki...@gmail.com
 wrote:
  The password I am entering in the terminal is right for sure. I've
  tried it a few times, checked the caps lock, etc. Also, if the log
  carries this FATAL password authentication failed, why does the
  terminal give the vague error no working server connection?


ISTM that either your connect string to the database is bad or you already
have too many clients connected to the db.  Have you tried:

  show max_connections;

  select count(1) from pg_stat_activity;

  In postgres?  Is it possible that there are just too many clients already
connected?  If not, then it's probably just your connect string (in
pgbouncer.ini) not being quite right.  You are using 127.0.0.1 for
connecting; is postgres even listening?

  netstat -lntp | grep 5432

Good luck.

--Scott


 "no working server connection" means that the client logged into pgbouncer
 successfully, but pgbouncer cannot log into the server.

 Please look into the Postgres log file for details.

 If you see no failures there, you have a wrong connect string in
 pgbouncer.ini.

 --
 marko




Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Phoenix Kiula
On Sun, Nov 20, 2011 at 7:43 PM, Scott Mead sco...@openscg.com wrote:


 On Sun, Nov 20, 2011 at 6:21 AM, Marko Kreen mark...@gmail.com wrote:

 On Sun, Nov 20, 2011 at 8:32 AM, Phoenix Kiula phoenix.ki...@gmail.com
 wrote:
  The password I am entering in the terminal is right for sure. I've
  tried it a few times, checked the caps lock, etc. Also, if the log
  carries this FATAL password authentication failed, why does the
  terminal give the vague error no working server connection?

 ISTM that either your connect string to the database is bad or you already
 have too many clients connected to the db.  Have you tried:
   show max_connections;
   select count(1) from pg_stat_activity;
   In postgres?  Is it possible that there are just too many clients already
 connected?


You may be on to something. The query results are below. (5
connections are reserved for superusers, so you may be right.)


MYDB=# show max_connections;
 max_connections
-----------------
 150
(1 row)

Time: 0.517 ms


MYDB=#   select count(1) from pg_stat_activity;
 count
-------
   144
(1 row)

Time: 1.541 ms



But isn't the point to connect to pgbouncer (instead of PG directly)
and have it manage connections? Even when I restart PG so that its
connection count is fresh and low, and immediately try to connect to
pgbouncer, it still shows me an error.

How can I verify that the number of connections is the problem?

The error message in the pgbouncer log points to a "FATAL password
authentication failed" error.



 If not, then it's probably just your connect string (in pgbouncer.ini) not
 being quite right.  You are using 127.0.0.1 for connecting; is postgres
 even listening?
   netstat -lntp | grep 5432



Yes. It is.


 netstat -lntp | grep 5432
tcp    0    0 127.0.0.1:5432    0.0.0.0:*    LISTEN    26220/postmaster
tcp    0    0 :::5432           :::*         LISTEN    26220/postmaster


 netstat -lntp | grep 6432
tcp    0    0 127.0.0.1:6432    0.0.0.0:*    LISTEN    10854/pgbouncer


Any ideas?



Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Phoenix Kiula
On Sun, Nov 20, 2011 at 7:52 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
 On Sun, Nov 20, 2011 at 7:43 PM, Scott Mead sco...@openscg.com wrote:


 On Sun, Nov 20, 2011 at 6:21 AM, Marko Kreen mark...@gmail.com wrote:

 On Sun, Nov 20, 2011 at 8:32 AM, Phoenix Kiula phoenix.ki...@gmail.com
 wrote:
  The password I am entering in the terminal is right for sure. I've
  tried it a few times, checked the caps lock, etc. Also, if the log
  carries this FATAL password authentication failed, why does the
  terminal give the vague error no working server connection?

  ISTM that either your connect string to the database is bad or you already
  have too many clients connected to the db.  Have you tried:
    show max_connections;
   select count(1) from pg_stat_activity;
   In postgres?  Is it possible that there are just too many clients already
 connected?


 You may be on to something. The query results are below. (5
 connections are reserved for superusers, so you may be right.)


 MYDB=# show max_connections;
  max_connections
 -----------------
  150
 (1 row)

 Time: 0.517 ms


 MYDB=#   select count(1) from pg_stat_activity;
  count
 -------
    144
 (1 row)

 Time: 1.541 ms



 But isn't the point to connect to pgbouncer (instead of PG directly)
 and have it manage connections? Even when I restart PG so that its
 connection count is fresh and low, and immediately try to connect to
 pgbouncer, it still shows me an error.

 How can I verify that the number of connections is the problem?

 The error message in the pgbouncer log points to a "FATAL password
 authentication failed" error.



  If not, then it's probably just your connect string (in pgbouncer.ini) not
  being quite right.  You are using 127.0.0.1 for connecting; is postgres
  even listening?
    netstat -lntp | grep 5432



 Yes. It is.


 netstat -lntp | grep 5432
 tcp    0    0 127.0.0.1:5432    0.0.0.0:*    LISTEN    26220/postmaster
 tcp    0    0 :::5432           :::*         LISTEN    26220/postmaster


 netstat -lntp | grep 6432
 tcp    0    0 127.0.0.1:6432    0.0.0.0:*    LISTEN    10854/pgbouncer


 Any ideas?



Just to add, the connection string I try for pgbouncer is EXACTLY the
same as the one I use to connect directly to PG, but I add the port
number.

For Direct PG  (works) --
pg_connect("host=localhost dbname=$db user=myuser password=mypass");

For Pgbouncer  (does NOT work) --
pg_connect("host=localhost dbname=$db port=6432 user=myuser password=mypass");

Given that both PG and pgbouncer are alive and kicking on ports 5432 and
6432 respectively, as shown in the netstat output above, I wonder if the
connection string is the problem.



Re: [GENERAL] 0.0.0.0 addresses in postgresql.conf on Windows

2011-11-20 Thread deepak
Hi!

This seems to be the problem on Windows 2008 server, though starting with
'0.0.0.0' on Windows 2003 server worked fine.

I tried turning off IPv6 on the 2008 server to see if that has an effect,
but it still behaved the same way after that change.

Please let me know if you have any thoughts on this one?


--
Deepak


On Fri, Nov 18, 2011 at 6:35 PM, deepak deepak...@gmail.com wrote:

 Hi!

 It appears that when I try to configure listen_addresses in
 postgresql.conf (on Windows)
 with '0.0.0.0' , pg_ctl doesn't properly detect that server has started
 and blocks forever.

 C:\pg\pgsqlbin\pg_ctl.exe -D data -w start
 waiting for server to start........................... stopped waiting
 pg_ctl: could not start server
 Examine the log output.

 (although, the postgres.exe processes were started)

 If I change listen_addresses to '*', then pg_ctl exits fine after starting
 up.

 The documentation at the following link still mentions that one could use
 '0.0.0.0' to listen on
 all IPv4 addresses:

 http://www.postgresql.org/docs/9.1/static/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS

 Did something change in 9.1.1 in this area or am I missing something?




 --
 Deepak



Re: [GENERAL] Huge number of INSERTs

2011-11-20 Thread Phoenix Kiula
On Sun, Nov 20, 2011 at 2:11 PM, Tomas Vondra t...@fuzzy.cz wrote:
 On 18.11.2011 13:30, Phoenix Kiula wrote:
 Full DB:   32GB
 The big table referenced above:  28 GB

 It's inserts into this one that are taking time.

 Hm, in that case the shared_buffers is probably too low. It'd be nice to
 have at least the indexes on the table in the buffers, and I guess
 they're significantly over 256MB (your shared_buffers).

 But regarding the vmstat 5 10 output you've posted, you probably don't
 have an issue with I/O, as the iowait is 0 most of the time.

 You do have a serious problem with CPU, though - most of the time, the
 CPU is almost 100% utilized. Not sure which process is responsible for
 this, but this might be the main problem.

 I'm not saying adding a row to the table (and indexes) is extremely
 expensive, but you do have an insane number of processes (350
 connections, a lot of apache workers) and a lot of them are asking for
 CPU time.

 So once again: set the number of connections and workers to sane values,
 considering your current hardware. Those numbers are actually a handy
 throttle - you may increase the numbers until the CPU is reasonably
 utilized (don't use 100%, leave a reasonable margin - I wouldn't go
 higher than 90%).



Thanks Tomas. And others.

Some observations and questions from my ongoing saga.

I have disabled all ADDing of data (INSERT + UPDATE) and just allowed
SELECTs so far. Site is under maintenance.

For a moment there, I unleashed the valve and allowed the INSERT
functionality. The log was immediately flooded with this:


LOG:  duration: 6851.054 ms  statement: select nextval('maintable_id_seq')
LOG:  duration: 6848.266 ms  statement: select nextval('maintable_id_seq')
LOG:  duration: 6846.672 ms  statement: select nextval('maintable_id_seq')
LOG:  duration: 6853.451 ms  statement: select nextval('maintable_id_seq')
LOG:  duration: 6991.966 ms  statement: select nextval('maintable_id_seq')
LOG:  duration: 8244.315 ms  statement: select nextval('maintable_id_seq')
LOG:  duration: 6991.071 ms  statement: select nextval('maintable_id_seq')
LOG:  duration: 6990.043 ms  statement: select nextval('maintable_id_seq')
LOG:  duration: 6988.483 ms  statement: select nextval('maintable_id_seq')
LOG:  duration: 6986.793 ms  statement: select nextval('maintable_id_seq')
LOG:  duration: 6985.802 ms  statement: select nextval('maintable_id_seq')
...

I hope it's just because of too much load that even a simple query
such as this was taking so much time?

Other queries taking too much time are also indexed queries!

Anyway, right now, with that valve closed, and only SELECTs allowed,
here's the stats:



 vmstat 5 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff   cache   si   so    bi    bo   in    cs us sy id wa st
 7  1   1352  47596  26412 6189960    3    3  5228   243   17    10 51 19 26  4  0
16  1   1352  45520  26440 6191656    0    0  1230  3819 1368 65722 68 31  1  0  0
 9  0   1352  61048  26464 6174688    0    0  1000  4290 1370 65545 67 32  1  0  0
27  1   1352  51908  26508 6183852    0    0  1332  3916 1381 65684 68 32  1  0  0
29  0   1352  48380  26536 6185764    0    0   977  3983 1368 65684 67 32  1  0  0
24  1   1352  46436  26576 6189080    0    0   220  4135 1373 65743 66 33  1  0  0
25  1   1352  46204  26616 6191452    0    0     0  3963 1348 66867 67 32  1  0  0
13  1   1352  57444  26692 6193220    0    0    24  4038 1436 66891 66 32  2  0  0
22  1   1352  51300  26832 6196736    0    0   439  5088 1418 66995 66 31  2  0  0
26  1   1352  51940  26872 6198384    0    0     0  3354 1385 67122 67 31  2  0  0




 iostat -d -x 5 3
Linux 2.6.18-238.9.1.el5 (host.MYDB.com)        11/20/2011

Device:  rrqm/s   wrqm/s    r/s    w/s   rsec/s    wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda       86.34   151.41 392.90  92.36 41796.00   1949.66    90.15     1.59    3.27   0.40  19.65
sda1       0.00     0.00   0.00   0.00     0.01      0.00    22.38     0.00    3.04   3.01   0.00
sda2       0.27     8.20   0.06   0.22     3.35     67.05   255.22     0.01   34.36   3.02   0.08
sda3       1.02    13.83   3.29   3.65   165.35    139.75    43.96     0.16   22.52   7.32   5.08
sda4       0.00     0.00   0.00   0.00     0.00      0.00     2.00     0.00    0.00   0.00   0.00
sda5       0.57     3.63   0.64   0.72    26.52     34.72    45.16     0.02   11.26   4.67   0.63
sda6       0.21     0.57   0.41   0.27    13.79      6.76    30.24     0.02   24.31  16.51   1.12
sda7       0.24     5.36   0.11   0.44     1.92     46.32    86.94     0.02   44.21   7.99   0.44
sda8       2.24     2.25   1.22   0.98    27.62     25.83    24.33     0.06   27.61  18.20   4.00
sda9      81.79   117.57 387.18  86.08 41557.45   1629.24    91.25     1.30    2.75   0.39  18.30

Device:  rrqm/s   wrqm/s    r/s    w/s   rsec/s    wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda

[GENERAL] Table Design question for gurus (without going to NoSQL)...

2011-11-20 Thread Phoenix Kiula
Hi.

Want to start another thread, loosely related to the performance
problems thread I have going.

Need some DB design guidance from the gurus here.

My big table now has about 70 million rows, with the following columns:


 alias   | character varying(35)
 url | text
 modify_date | timestamp without time zone
 ip  | bigint


For each IP address (user of my application) I want to have a unique
URL. So I used to have a UNIQUE constraint on (IP, URL). But the index
based on this became huge, as some URLs are gigantic. So I introduced
an md5 of the URL:


 url_md5 | varchar(32)


I now have two scenarios:

1. To have an index (unique?) on (ip, url_md5)

2. To have an index on just the ip. This way a query that tries
to match   ...WHERE ip = 999 AND url_md5 = 'md5 here'... will
still look only at the ip bit of the index, then refine it with the
url_md5.

The good thing about #2 is the size of index remains very small with
only a bigint field (ip) being indexed.

The bad thing about #2 is that each query of ...WHERE ip = 999 AND
url_md5 = 'md5 here'...  will have to refine the indexed IP. If one
IP address has put in a lot of URLs, then this becomes a bit slow. As
is now happening, where I have users who have over 1 million URLs
each!

Questions:

1. Instead of md5, is there any integer hashing algorithm that will
allow me to have a bigint column and save a lot hopefully in both
storage space and speed?  (Some very useful points mentioned here:
http://stackoverflow.com/questions/1422725/represent-md5-hash-as-an-integer
)

2. If I do go with the above scenario #1 of a joint index, is there
any way I can save space and maintain speed? Partitioning etc are out
of the question.

With a growing web database, I am sure many people face this
situation. Are nosql type databases the only sane solution to such
massive volumes and throughput expectations (e.g., CouchDb's MemBase)?

Many thanks for any ideas or pointers!
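
A minimal sketch of one option for question 1, folding the md5 into a bigint
by taking its first 16 hex digits (table name is hypothetical; note that a
collision between two URLs from the same IP would falsely trip the unique
constraint, though that is astronomically unlikely):

  ALTER TABLE mytable ADD COLUMN url_hash bigint;
  UPDATE mytable SET url_hash = ('x' || substr(md5(url), 1, 16))::bit(64)::bigint;
  CREATE UNIQUE INDEX mytable_ip_hash_idx ON mytable (ip, url_hash);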



[GENERAL] 9.1.1 build failure : postgres link fails

2011-11-20 Thread Rob Sargentg
I just got round to updating my laptop to ubuntu-10.4 (32bit), in part 
because I kept hitting snags while trying to configure postgres 9.1.1.


The upgrade did in fact solve the dependency issues (though I was 
surprised UUID came along without the ossp-specific impl??) but the 
build is having trouble linking the server executable.


Here's my configure line:
./configure --with-pgport=5439 --with-perl --with-python --with-openssl 
--with-ldap --with-ossp-uuid --with-libxml --with-libxslt


and the tail of the configure output:

configure: creating ./config.status
config.status: creating GNUmakefile
config.status: creating src/Makefile.global
config.status: creating src/include/pg_config.h
config.status: creating src/interfaces/ecpg/include/ecpg_config.h
config.status: linking src/backend/port/tas/dummy.s to 
src/backend/port/tas.s
config.status: linking src/backend/port/dynloader/linux.c to 
src/backend/port/dynloader.c
config.status: linking src/backend/port/sysv_sema.c to 
src/backend/port/pg_sema.c
config.status: linking src/backend/port/sysv_shmem.c to 
src/backend/port/pg_shmem.c
config.status: linking src/backend/port/unix_latch.c to 
src/backend/port/pg_latch.c
config.status: linking src/backend/port/dynloader/linux.h to 
src/include/dynloader.h
config.status: linking src/include/port/linux.h to 
src/include/pg_config_os.h

config.status: linking src/makefiles/Makefile.linux to src/Makefile.port

ubuntu-10.4 is not the latest of course and comes with GNU make 3.81, 
but it seems to compile everything, then fails to link the postgres executable:


gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -Wformat-security 
-fno-strict-aliasing -fwrapv -L../../src/port -Wl,--as-needed 
-Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags  -Wl,-E TONS OF 
DOT OHs ../../src/port/libpgport_srv.a -lxslt -lxml2 -lssl -lcrypto 
-lcrypt -ldl -lm -lldap -o postgres

postmaster/postmaster.o: In function `PostmasterMain':
postmaster.c:(.text+0x48d7): undefined reference to `optreset'
tcop/postgres.o: In function `process_postgres_switches':
postgres.c:(.text+0x1312): undefined reference to `optreset'
utils/misc/ps_status.o: In function `set_ps_display':
ps_status.c:(.text+0xd4): undefined reference to `setproctitle'
collect2: ld returned 1 exit status
make[2]: *** [postgres] Error 1
make[2]: Leaving directory `/home/rob/tools/postgresql-9.1.1/src/backend'
make[1]: *** [all-backend-recurse] Error 2
make[1]: Leaving directory `/home/rob/tools/postgresql-9.1.1/src'
make: *** [all-src-recurse] Error 2




Re: [GENERAL] Table Design question for gurus (without going to NoSQL)...

2011-11-20 Thread Phoenix Kiula
On Sun, Nov 20, 2011 at 9:33 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
 Hi.

 Want to start another thread, loosely related to the performance
 problems thread I have going.

 Need some DB design guidance from the gurus here.

 My big table now has about 70 million rows, with the following columns:


  alias           | character varying(35)
  url             | text
  modify_date     | timestamp without time zone
  ip              | bigint


 For each IP address (user of my application) I want to have a unique
 URL. So I used to have a UNIQUE constraint on IP, URL. But the index
 based on this became huge, as some URLs are gigantic. so I introduced
 an md5 of the URL:


  url_md5             | varchar(32)


 I now have two scenarios:

 1. To have an index (unique?) on (ip, url_md5)

 2. To have an index on just the ip. This way a query that tries
 to match   ...WHERE ip = 999 AND url_md5 = 'md5 here'... will
 still look only at the ip bit of the index, then refine it with the
 url_md5.

 The good thing about #2 is the size of index remains very small with
 only a bigint field (ip) being indexed.

 The bad thing about #2 is that each query of ...WHERE ip = 999 AND
 url_md5 = 'md5 here'...  will have to refine the indexed IP. If one
 IP address has put in a lot of URLs, then this becomes a bit slow. As
 is now happening, where I have users who have over 1 million URLs
 each!

 Questions:

 1. Instead of md5, is there any integer hashing algorithm that will
 allow me to have a bigint column and save a lot hopefully in both
 storage space and speed?  (Some very useful points mentioned here:
 http://stackoverflow.com/questions/1422725/represent-md5-hash-as-an-integer
 )

 2. If I do go with the above scenario #1 of a joint index, is there
 any way I can save space and maintain speed? Partitioning etc are out
 of the question.

 With a growing web database, I am sure many people face this
 situation. Are nosql type databases the only sane solution to such
 massive volumes and throughput expectations (e.g., CouchDb's MemBase)?

 Many thanks for any ideas or pointers!




I thought of adding a bigserial (serial8) column instead of
varchar(32) for the md5. But postgresql tells me that:

--
ERROR:  type bigserial does not exist
--

Why is this? Why can't I create a column with this type? What's the
correct syntax?

Thanks.



Re: [GENERAL] Table Design question for gurus (without going to NoSQL)...

2011-11-20 Thread Gregg Jaskiewicz
partition your table if it is too big.



Re: [GENERAL] Table Design question for gurus (without going to NoSQL)...

2011-11-20 Thread David Johnston
On Nov 20, 2011, at 8:33, Phoenix Kiula phoenix.ki...@gmail.com wrote:

 Hi.
 
 Want to start another thread, loosely related to the performance
 problems thread I have going.
 
 Need some DB design guidance from the gurus here.
 
 My big table now has about 70 million rows, with the following columns:
 
 
 alias   | character varying(35)
 url | text
 modify_date | timestamp without time zone
 ip  | bigint
 
 

While the schema is useful, you need to describe HOW the data is being used if 
you want help finding ways to improve performance.

 For each IP address (user of my application) I want to have a unique
 URL. So I used to have a UNIQUE constraint on IP, URL.

Given a base URL, can you encode an algorithm to generate the user-specific URL 
on-demand, then maybe cache that result in the application?

 But the index
 based on this became huge, as some URLs are gigantic. so I introduced

What does this mean?  Are there any patterns to the URLs that you can leverage 
(like, say, grouping them by domain name)?  Is there a lot of overlap between 
users, so that having a URL table with a bigint PK would make a difference?


 an md5 of the URL:
 
 
 url_md5 | varchar(32)
 
 
 I now have two scenarios:
 
 1. To have an index (unique?) on (ip, url_md5)
 
 2. To have an index on just the ip. This way a query that tries
 to match   ...WHERE ip = 999 AND url_md5 = 'md5 here'... will
 still look only at the ip bit of the index, then refine it with the
 url_md5.
 
 The good thing about #2 is the size of index remains very small with
 only a bigint field (ip) being indexed.
 
 The bad thing about #2 is that each query of ...WHERE ip = 999 AND
 url_md5 = 'md5 here'...  will have to refine the indexed IP. If one
 IP address has put in a lot of URLs, then this becomes a bit slow. As
 is now happening, where I have users who have over 1 million URLs
 each!

Create an additional partial index on the URL for any IP address with more than 
X records.  Your smallish users then only need to use the IP index, while the 
big ones use that PLUS their personal URL index.
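
A minimal sketch of that idea (table name and the heavy IP are hypothetical):

  -- small, general index that serves most users
  CREATE INDEX mytable_ip_idx ON mytable (ip);
  -- partial index covering only the rows of one known heavy user
  CREATE INDEX mytable_heavy_999_idx ON mytable (url_md5) WHERE ip = 999;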

 
 Questions:
 
 1. Instead of md5, is there any integer hashing algorithm that will
 allow me to have a bigint column and save a lot hopefully in both
 storage space and speed?  (Some very useful points mentioned here:
 http://stackoverflow.com/questions/1422725/represent-md5-hash-as-an-integer
 )
 
 2. If I do go with the above scenario #1 of a joint index, is there
 any way I can save space and maintain speed? Partitioning etc are out
 of the question.
 

If you are going to discount the feature whose implementation solves this 
specific problem then you are basically asking the list to solve your specific 
problem and, from my comment above, to do so without providing sufficient 
details as to how your application works.

Also, WTF do you mean by "etc."?  If you are going to discount something from 
consideration you should be able to specify exactly what it is.

Furthermore, if you ask the question and exclude possible solutions you should 
explain why you cannot use them so that people will not propose other solutions 
that would have the same faults.

 With a growing web database, I am sure many people face this
 situation. Are nosql type databases the only sane solution to such
 massive volumes and throughput expectations (e.g., CouchDb's MemBase)?

You would implement these before you would partition?

There are likely multiple solutions to your problem but, again, simply giving a 
table schema doesn't help it determining which ones are feasible.

 
 Many thanks for any ideas or pointers!
 

The only data-ignorant, and thus generally useful, PostgreSQL solution is table 
partitioning.

Use It.

My other questions, while an interesting thought exercise, need intimate 
knowledge of the data to even evaluate if they make sense.

So, in short, use partitions.  If you cannot, provide reasons why and then 
include more details about the application and data so that meaningful 
solutions have a chance to be suggested.

David J.






Re: [GENERAL] 9.1.1 build failure : postgres link fails

2011-11-20 Thread Tom Lane
Rob Sargentg robjsarg...@gmail.com writes:
 I just got round to updating my laptop to ubuntu-10.4 (32bit), in part 
 because I kept hitting snags while trying to configure postgres 9.1.1.
 ubuntu-10.4 is not the latest of course and comes with GNU make 3.81, 
 but it seems to compile everything, then fails to link the postgres executable:

 gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
 -Wdeclaration-after-statement -Wendif-labels -Wformat-security 
 -fno-strict-aliasing -fwrapv -L../../src/port -Wl,--as-needed 
 -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags  -Wl,-E TONS OF 
 DOT OHs ../../src/port/libpgport_srv.a -lxslt -lxml2 -lssl -lcrypto 
 -lcrypt -ldl -lm -lldap -o postgres
 postmaster/postmaster.o: In function `PostmasterMain':
 postmaster.c:(.text+0x48d7): undefined reference to `optreset'
 tcop/postgres.o: In function `process_postgres_switches':
 postgres.c:(.text+0x1312): undefined reference to `optreset'
 utils/misc/ps_status.o: In function `set_ps_display':
 ps_status.c:(.text+0xd4): undefined reference to `setproctitle'
 collect2: ld returned 1 exit status

There's a similar report in the archives:
http://archives.postgresql.org/pgsql-hackers/2011-02/msg01474.php

It appears that on Ubuntu, libbsd defines those symbols, which confuses
configure into supposing that they're provided by libc, and then the
link fails because libbsd isn't actually linked into the postmaster.
The question is what's pulling in libbsd though.  In the previous report
it came via libedit, which you're not using.  I'd try looking in the
config.log file to see what it was linking in the test that decided
setproctitle was available, and then using ldd on each of those
libraries to see which one(s) require libbsd.
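
A sketch of that investigation (the library names here are assumptions; follow
whatever config.log actually shows):

  # find the configure test that probed for setproctitle and the libs it linked
  grep -n -B2 -A5 setproctitle config.log

  # then check each library that test linked for a libbsd dependency, e.g.:
  ldd /usr/lib/libedit.so.2 | grep libbsd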

regards, tom lane



Re: [GENERAL] 9.1.1 build failure : postgres link fails

2011-11-20 Thread Rob Sargent


On 11/20/2011 09:24 AM, Tom Lane wrote:
 Rob Sargentg robjsarg...@gmail.com writes:
 I just got round to updating my laptop to ubuntu-10.4 (32bit), in part 
 because I kept hitting snags while trying to configure postgres 9.1.1.
 ubuntu-10.4 is not the latest of course and comes with GNU make 3.81, 
 but it seems to compile everything, then fails to link the postgres executable:
 gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
 -Wdeclaration-after-statement -Wendif-labels -Wformat-security 
 -fno-strict-aliasing -fwrapv -L../../src/port -Wl,--as-needed 
 -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags  -Wl,-E TONS OF 
 DOT OHs ../../src/port/libpgport_srv.a -lxslt -lxml2 -lssl -lcrypto 
 -lcrypt -ldl -lm -lldap -o postgres
 postmaster/postmaster.o: In function `PostmasterMain':
 postmaster.c:(.text+0x48d7): undefined reference to `optreset'
 tcop/postgres.o: In function `process_postgres_switches':
 postgres.c:(.text+0x1312): undefined reference to `optreset'
 utils/misc/ps_status.o: In function `set_ps_display':
 ps_status.c:(.text+0xd4): undefined reference to `setproctitle'
 collect2: ld returned 1 exit status
 There's a similar report in the archives:
 http://archives.postgresql.org/pgsql-hackers/2011-02/msg01474.php

 It appears that on Ubuntu, libbsd defines those symbols, which confuses
 configure into supposing that they're provided by libc, and then the
 link fails because libbsd isn't actually linked into the postmaster.
 The question is what's pulling in libbsd though.  In the previous report
 it came via libedit, which you're not using.  I'd try looking in the
 config.log file to see what it was linking in the test that decided
 setproctitle was available, and then using ldd on each of those
 libraries to see which one(s) require libbsd.

   regards, tom lane


Will do.  Then there's always trying Ubuntu-11?



Re: [GENERAL] 9.1.1 build failure : postgres link fails

2011-11-20 Thread Tom Lane
Rob Sargent robjsarg...@gmail.com writes:
 On 11/20/2011 09:24 AM, Tom Lane wrote:
 It appears that on Ubuntu, libbsd defines those symbols, which confuses
 configure into supposing that they're provided by libc, and then the
 link fails because libbsd isn't actually linked into the postmaster.
 The question is what's pulling in libbsd though.  In the previous report
 it came via libedit, which you're not using.  I'd try looking in the
 config.log file to see what it was linking in the test that decided
 setproctitle was available, and then using ldd on each of those
 libraries to see which one(s) require libbsd.

 Will do.  Then there's always trying Ubuntu-11?

Couldn't say.  But re-reading this, I wonder if maybe you *are* using
libedit.  Have you got readline installed?  If not, configure will try
libedit as second choice ... so maybe the best fix is to install
readline (and don't forget readline-devel or local equivalent).
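
On Ubuntu that would likely be something along these lines (the exact package
names are an assumption for 10.04):

  sudo apt-get install libreadline6 libreadline6-dev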

regards, tom lane



[GENERAL] How to install pgfincore with PG 9.1

2011-11-20 Thread Raghavendra
Respected,

I have tried installing pgfincore with PG 9.1 (one-click installer), but am
facing the error below.

[root@localhost pgfincore-v1.1]# export PATH=/opt/PostgreSQL/9.1/bin:$PATH
[root@localhost pgfincore-v1.1]# echo $PATH
/opt/PostgreSQL/9.1/bin:/usr/lib/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
[root@localhost pgfincore-v1.1]# which pg_config
/opt/PostgreSQL/9.1/bin/pg_config
*[root@localhost pgfincore-v1.1]# make*
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
cp pgfincore.sql pgfincore--.sql
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wformat-security
-fno-strict-aliasing -fwrapv -fpic -I. -I.
-I/opt/PostgreSQL/9.1/include/postgresql/server
-I/opt/PostgreSQL/9.1/include/postgresql/internal -D_GNU_SOURCE
-I/usr/local/include/libxml2  -I/usr/local/include  -c -o pgfincore.o
pgfincore.c
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wformat-security
-fno-strict-aliasing -fwrapv -fpic -L/opt/PostgreSQL/9.1/lib
-L/usr/local/lib  -L/usr/local/lib
-Wl,-rpath,'/opt/PostgreSQL/9.1/lib',--enable-new-dtags  -shared -o
pgfincore.so pgfincore.o
rm pgfincore.o
*[root@localhost pgfincore-v1.1]# make install*
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
/bin/mkdir -p '/opt/PostgreSQL/9.1/share/postgresql/extension'
/bin/mkdir -p '/opt/PostgreSQL/9.1/lib/postgresql'
/bin/mkdir -p '/opt/PostgreSQL/9.1/doc/postgresql/extension'
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
/bin/sh
/opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh
-c -m 644 ./pgfincore.control
'/opt/PostgreSQL/9.1/share/postgresql/extension/'
/bin/sh
/opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh
-c -m 644 ./pgfincore--unpackaged--.sql ./pgfincore--.sql
 '/opt/PostgreSQL/9.1/share/postgresql/extension/'
/opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh:
./pgfincore--unpackaged--.sql does not exist.
*make: *** [install] Error 1*

Please assist me, thanks in advance.

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/


[GENERAL] Significant Digits in Floating Point Datatype

2011-11-20 Thread Lee Hachadoorian
I would like to store in a single array some data that is 
conceptually related, but some of the data is floating point, and some 
of it is integer. Obviously the integer data *can* be stored as double 
precision, but I need to know about potential loss of precision. Double 
precision has a precision of at least 15 digits. I would assume that 
would mean that for double precision, 15 digits of data would be 
faithfully preserved. But I found a question on the list where a value 
stored as 955.60 sometimes returns as 955.5998. 
(http://archives.postgresql.org/pgsql-general/2011-08/msg00144.php) If 
this is the case, what does "a precision of at least [x] digits" 
actually mean? And can I reliably retrieve the original integer by 
casting to int (or bigint) if the number of digits in the original 
integer is less than 15?


Regards,
--Lee

--
Lee Hachadoorian
PhD, Earth  Environmental Sciences (Geography)
Research Associate, CUNY Center for Urban Research
http://freecity.commons.gc.cuny.edu




Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Scott Marlowe
On Sun, Nov 20, 2011 at 4:52 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:

 How can I debug that the connections are the problem?

Take a look at pg_stat_activity, specifically the fields client_addr,
client_port, and client_hostname.
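
A minimal sketch of such a query (column names as in 9.1's pg_stat_activity;
adjust for older releases):

  SELECT client_addr, count(*)
    FROM pg_stat_activity
   GROUP BY client_addr
   ORDER BY count(*) DESC;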



Re: [GENERAL] Significant Digits in Floating Point Datatype

2011-11-20 Thread Tom Lane
Lee Hachadoorian lee.hachadoor...@gmail.com writes:
 And can I reliably retrieve the original integer by 
 casting to int (or bigint) if the number of digits in the original 
 integer is less than 15?

On IEEE-floating-point machines, I'd expect float8 to store integers
up to 2^52 (or maybe it's 2^53) exactly.  With other floating-point
formats the limit might be different, but it should still be exact for
reasonable-size integers.  This has nothing whatever to do with whether
decimal fractions are reproduced exactly (in general, they aren't, no
matter how many or few digits are involved).  So integers are fine,
bigints not so much.
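
A quick demonstration of that boundary (2^53 = 9007199254740992):

  SELECT (2::float8 ^ 53)::bigint;            -- 9007199254740992, exact
  SELECT 9007199254740993::float8::bigint;    -- rounds back to ...992: precision lost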

regards, tom lane



Re: [GENERAL] How to install pgfincore with PG 9.1

2011-11-20 Thread Cédric Villemain
2011/11/20 Raghavendra raghavendra@enterprisedb.com:
 Respected,
 I have tried installing pgfincore with PG 9.1(one-click installer). But
 facing below error.
 [root@localhost pgfincore-v1.1]# export PATH=/opt/PostgreSQL/9.1/bin:$PATH
 [root@localhost pgfincore-v1.1]# echo $PATH
 /opt/PostgreSQL/9.1/bin:/usr/lib/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
 [root@localhost pgfincore-v1.1]# which pg_config
 /opt/PostgreSQL/9.1/bin/pg_config
 [root@localhost pgfincore-v1.1]# make
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 cp pgfincore.sql pgfincore--.sql
 gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
 -Wdeclaration-after-statement -Wendif-labels -Wformat-security
 -fno-strict-aliasing -fwrapv -fpic -I. -I.
 -I/opt/PostgreSQL/9.1/include/postgresql/server
 -I/opt/PostgreSQL/9.1/include/postgresql/internal -D_GNU_SOURCE
 -I/usr/local/include/libxml2  -I/usr/local/include  -c -o pgfincore.o
 pgfincore.c
 gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
 -Wdeclaration-after-statement -Wendif-labels -Wformat-security
 -fno-strict-aliasing -fwrapv -fpic -L/opt/PostgreSQL/9.1/lib
 -L/usr/local/lib  -L/usr/local/lib
 -Wl,-rpath,'/opt/PostgreSQL/9.1/lib',--enable-new-dtags  -shared -o
 pgfincore.so pgfincore.o
 rm pgfincore.o
 [root@localhost pgfincore-v1.1]# make install
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 /bin/mkdir -p '/opt/PostgreSQL/9.1/share/postgresql/extension'
 /bin/mkdir -p '/opt/PostgreSQL/9.1/lib/postgresql'
 /bin/mkdir -p '/opt/PostgreSQL/9.1/doc/postgresql/extension'
 grep: /pgfincore.control: No such file or directory
 grep: /pgfincore.control: No such file or directory
 /bin/sh
 /opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh
 -c -m 644 ./pgfincore.control
 '/opt/PostgreSQL/9.1/share/postgresql/extension/'
 /bin/sh
 /opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh
 -c -m 644 ./pgfincore--unpackaged--.sql ./pgfincore--.sql
  '/opt/PostgreSQL/9.1/share/postgresql/extension/'
 /opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh:
 ./pgfincore--unpackaged--.sql does not exist.
 make: *** [install] Error 1
 Please assist me, thanks in advance.

buggy Makefile. Thanks for the report.
I'll fix that soon and keep you informed.
-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation



Re: [GENERAL] Table Design question for gurus (without going to NoSQL)...

2011-11-20 Thread Adrian Klaver
On Sunday, November 20, 2011 7:12:59 am Phoenix Kiula wrote:
 On Sun, Nov 20, 2011 at 9:33 PM, Phoenix Kiula phoenix.ki...@gmail.com 
wrote:

 
 I thought of adding a bigserial (serial8) column instead of
 varchar(32) for the md5. But postgresql tells me that:
 
 --
 ERROR:  type bigserial does not exist
 --
 
 Why is this? Why can't I create a column with this type? Whats the
 current syntax?

bigserial is not a type so much as a macro that creates a bigint column with an 
attached sequence.

Example:

test(5432)aklaver=> \d pk_test
    Table "public.pk_test"
 Column |  Type   | Modifiers 
--------+---------+-----------
 id     | integer | not null
 fld_1  | text    | 
Indexes:
    "pk" PRIMARY KEY, btree (id)

test(5432)aklaver=> ALTER TABLE pk_test ADD column bg bigserial;
NOTICE:  ALTER TABLE will create implicit sequence "pk_test_bg_seq" for serial 
column "pk_test.bg"
ALTER TABLE
test(5432)aklaver=> \d pk_test
                           Table "public.pk_test"
 Column |  Type   |                      Modifiers                       
--------+---------+------------------------------------------------------
 id     | integer | not null
 fld_1  | text    | 
 bg     | bigint  | not null default nextval('pk_test_bg_seq'::regclass)
Indexes:
    "pk" PRIMARY KEY, btree (id)

 
 Thanks.

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Significant Digits in Floating Point Datatype

2011-11-20 Thread Bill Moran

On 11/20/11 1:29:37 PM, Lee Hachadoorian wrote:

I would like to store some in a single array some data that is
conceptually related, but some of the data is floating point, and some
of it is integer. Obviously the integer data *can* be stored as double
precision, but I need to know about potential loss of precision. Double
precision has a precision of at least 15 digits. I would assume that
that would mean that for double precision, 15 digits of data would be
faithfully preserved. But I found a question on the list where a value
stored as 955.60 sometimes returns as 955.5998.
(http://archives.postgresql.org/pgsql-general/2011-08/msg00144.php) If
this is the case, what does a precision of at least [x] digits
actually mean? And can I reliably retrieve the original integer by
casting to int (or bigint) if the number of digits in the original
integer is less than 15?


Given Tom's answer, you may want to consider whether the DECIMAL
data type is a better fit for your needs.
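
A minimal illustration of the difference (extra_float_digits exposes the
stored binary value):

  SET extra_float_digits = 3;
  SELECT 955.60::real;       -- 955.599976 (approx): the nearest binary float
  SELECT 955.60::numeric;    -- 955.60: exact decimal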

--
Bill Moran



Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Adrian Klaver
On Sunday, November 20, 2011 3:56:18 am Phoenix Kiula wrote:

  Any ideas?
 
 Just to add, the connection string I try for pgbouncer is EXACTLY the
 same as the one I use to connect directly to PG, but I add the port
 number.


That may be the problem. The Postgres server and pgbouncer are not the same 
thing. Visual aids:

Client            -->    pgbouncer        -->    Postgres server

Client credentials       pgbouncer auth          Postgres auth
                         (auth file)             (pg_shadow)


The authentication chain is:

Client sends credentials to pgbouncer.
pgbouncer checks them against its list of approved users and its method of 
authentication.
If the client passes that, then pgbouncer tries to open a connection to the 
database specified by the client, using the credentials listed for that 
database in the [databases] section of the ini file. 
If those credentials match those in the Postgres server, then a connection is 
allowed.

There is nothing that says the users admitted by pgbouncer have to be the same 
as those admitted by Postgres.

From what you are reporting, you are authenticating to pgbouncer but not to 
Postgres. 
This was pointed out upstream by Marko.  I would do a careful review of which 
user you are connecting as to each program.  Also, when posting log results, 
please specify which program they are coming from; it takes out the guesswork :)
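
To make the chain concrete, a sketch of the two places credentials live (paths
and the user are assumptions):

  ; pgbouncer.ini -- what pgbouncer presents to Postgres, per database
  [databases]
  mydb = host=127.0.0.1 port=5432 dbname=mydb user=myuser password=mypass

  ; userlist.txt (the auth_file) -- what pgbouncer checks *clients* against
  "myuser" "mypass"
  ; (with auth_type = md5 the second field is instead "md5" || md5(password || username))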



 
 For Direct PG  (works) --
 pg_connect(host=localhost dbname=$db user=myuser password=mypass);
 
 For Pgbouncer  (does NOT work) --
 pg_connect(host=localhost dbname=$db port=6432 user=myuser
 password=mypass);
 
 Given that both PG and postgresql are alive and kicking on 5432 and
 6432 ports respectively, as shown in the netstat output above, I
 wonder if the connection string is the problem.

-- 
Adrian Klaver
adrian.kla...@gmail.com



[GENERAL] FOSDEM 2012 - PostgreSQL Devroom: Call for Speakers

2011-11-20 Thread Andreas 'ads' Scherbaum


Hi all,

FOSDEM 2012 - PostgreSQL Devroom: Call for Speakers


The PostgreSQL project will have a Devroom at FOSDEM 2012, which takes 
place on February 4-5 in Brussels, Belgium. The Devroom will mainly 
cover topics for PostgreSQL users, developers and contributors. For more 
information about the event itself, please see the website at 
http://www.fosdem.org/2012/ .



We are now accepting proposals for talks. Please note that we only 
accept talks in English.


Each session will last 45 minutes (including discussion), and may be on 
any topic related to PostgreSQL. Suggested topic areas include:


* Developing applications for PostgreSQL
* Administering large scale PostgreSQL installations
* Case studies and/or success stories of PostgreSQL deployments
* PostgreSQL tools and utilities
* PostgreSQL hacking
* Community & user groups
* Tuning the server
* Migrating from other systems
* Scaling/replication
* Benchmarking & hardware
* PostgreSQL related products

Of course, we're happy to receive proposals for talks on other 
PostgreSQL related topics as well.



Please use our conference website to submit your proposal:

https://www.postgresql.eu/events/callforpapers/fosdem2012/


The deadline for submissions is December 20th, 2011. The schedule will 
be published and speakers will be informed by the end of the year.



Please also note my email about hotel reservation:

http://archives.postgresql.org/pgeu-general/2011-11/msg0.php


--
Andreas 'ads' Scherbaum
German PostgreSQL User Group
European PostgreSQL User Group - Board of Directors
Volunteer Regional Contact, Germany - PostgreSQL Project



Re: [GENERAL] Table Design question for gurus (without going to NoSQL)...

2011-11-20 Thread Gavin Flower

On 21/11/11 02:33, Phoenix Kiula wrote:

Hi.

Want to start another thread, loosely related to the performance
problems thread I have going.

Need some DB design guidance from the gurus here.

My big table now has about 70 million rows, with the following columns:


  alias   | character varying(35)
  url | text
  modify_date | timestamp without time zone
  ip  | bigint


For each IP address (user of my application) I want to have a unique
URL. So I used to have a UNIQUE constraint on IP, URL. But the index
based on this became huge, as some URLs are gigantic. so I introduced
an md5 of the URL:


  url_md5 | varchar(32)


I now have two scenarios:

1. To have an index (unique?) on (ip, url_md5)

2. To have an index on just the ip. This way a query that tries
to match   ...WHERE ip = 999 AND url_md5 = 'md5 here'... will
still look only at the ip bit of the index, then refine it with the
url_md5.

The good thing about #2 is the size of index remains very small with
only a bigint field (ip) being indexed.

The bad thing about #2 is that each query of ...WHERE ip = 999 AND
url_md5 = 'md5 here'...  will have to refine the indexed IP. If one
IP address has put in a lot of URLs, then this becomes a bit slow. As
is now happening, where I have users who have over 1 million URLs
each!

Questions:

1. Instead of md5, is there any integer hashing algorithm that will
allow me to have a bigint column and save a lot hopefully in both
storage space and speed?  (Some very useful points mentioned here:
http://stackoverflow.com/questions/1422725/represent-md5-hash-as-an-integer
)

2. If I do go with the above scenario #1 of a joint index, is there
any way I can save space and maintain speed? Partitioning etc are out
of the question.

With a growing web database, I am sure many people face this
situation. Are nosql type databases the only sane solution to such
massive volumes and throughput expectations (e.g., CouchDb's MemBase)?

Many thanks for any ideas or pointers!

How about having 2 indexes: one on each of ip & url_md5? Pg will combine 
the indexes as required, or will just use one if that is best.
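
A minimal sketch of that layout (table name is hypothetical); for a query
constraining both columns the planner can bitmap-AND the two indexes:

  CREATE INDEX mytable_ip_idx  ON mytable (ip);
  CREATE INDEX mytable_md5_idx ON mytable (url_md5);

  -- eligible for a BitmapAnd of both indexes:
  SELECT * FROM mytable WHERE ip = 999 AND url_md5 = 'md5 here';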


Why don't you have a time zone on your timestamp???


Regards,
Gavin



Re: [GENERAL] Is it ever necessary to vacuum a table that only gets inserts/updates?

2011-11-20 Thread Gavin Flower

On 20/11/11 11:57, Scott Marlowe wrote:

On Sat, Nov 19, 2011 at 12:53 PM, Gavin Flower
gavinflo...@archidevsys.co.nz  wrote:

On 19/11/11 11:32, Adam Cornett wrote:

On Fri, Nov 18, 2011 at 2:56 PM, Gavin Flower
gavinflo...@archidevsys.co.nz  wrote:

On 18/11/11 04:59, Tom Lane wrote:

Craig Ringerring...@ringerc.id.auwrites:

On Nov 17, 2011 1:32 PM, Tom Lanet...@sss.pgh.pa.uswrote:

If it's purely an insert-only table, such as a logging table, then in
principle you only need periodic ANALYZEs and not any VACUUMs.

Won't a VACUUM FREEZE (or autovac equivalent) be necessary eventually,
to
handle xid wraparound?

Sure, but if he's continually adding new rows, I don't see much point in
launching extra freeze operations.

regards, tom lane


Just curious...

Will the pattern of inserts be at all relevant?

For example, random inserts compared to appending records.  I thought that
random inserts would lead to bloat, as there would be lots of blocks far
from the optimum fill factor.


Regards,
Gavin


I might be wrong (I'm sure Tom will correct me if so), but Postgres does not
store tuples in an ordered format on disk; they are on disk in the order
they are inserted, unless the table is re-ordered by CLUSTER, which only
does a one-time sort.
Table bloat (and the table fill factor) are usually associated with deletes
and updates.  If you delete a row, or update it so that it takes up less
room (by, say, removing a large text value), then postgres could use the
now-free space on that page to store a new tuple.
-Adam

Hi Adam,

I suspect that you are right - now I come to think of it, I think I got
caught out by the ghost of VSAM creeping up on me (you seriously do NOT want
to know about IBM's VSAM!).

Careful, on a list with as many old timers as this one, you may be
sending that message to the guy who wrote the original implementation.
  :)  I only go as far back as Rexx and JCL and RBase 5000, but never
used VSAM. ISAM yes.

Brings back memories... RRRHHH!

Many years ago (when I was not so old as I am now) I had a junior 
analyst/programmer whom I had asked to insert about 20k ordered records 
from a tape file into a VSAM file whose primary key was the sort key of 
the records on tape.  He wrote a COBOL program that took 7 hours to do 
so.  Now, he and a Systems Programmer had each been on a 5-day VSAM 
course, and they managed to optimise the download to take only half the 
time.  I went and looked at a COBOL manual for the first time in a few 
years (I was a Systems Analyst, and above 'mere' programming), and 
suggested they change the file type from 'RANDOM' to 'INDEX-SEQUENTIAL' 
- the download now took a mere 70 seconds!


At that time I was adept at designing index sequential files on ICL 
mainframes, then VSAM on FACOM mainframes. So I avoided databases, 
especially as it involved another (rather snooty) team to do anything 
with a database, and program code had to be changed when migrating from 
development to UAT and then to production. Once they insisted I create a 
data model of a system I had designed with 5 files -- after 4 hours 
overtime, 2 members of that team and myself came up with a data model 
that exactly matched the 5 files and fields I had used...


I left the mainframe world many years ago, and did not fall into the 
temptation to get back into COBOL programming for Y2K.


Now my favourite software stack is Linux/PostgreSQL/JBoss -- and now I 
not only design systems, I am expected to code them too!





Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Phoenix Kiula
On Mon, Nov 21, 2011 at 4:13 AM, Adrian Klaver adrian.kla...@gmail.com wrote:
 On Sunday, November 20, 2011 3:56:18 am Phoenix Kiula wrote:

  Any ideas?

 Just to add, the connection string I try for pgbouncer is EXACTLY the
 same as the one I use to connect directly to PG, but I add the port
 number.


 That may be the problem. The Postgres server and pgbouncer are not the same
 thing. Visual aids:

 Client          <-->          pgbouncer          <-->       Postgres server

 Client credentials        pgbouncer auth               Postgres auth

                            auth file                   Pg pg_shadow



Thanks for this.

(1) Do I need to create a new user for Pgbouncer then?

(2) What info goes in the auth_file -- the Pgbouncer user/password
or the Postgres user/password?

In any case, I have kept both the user name and passwords the same for
now. But I have not created anything for Pgbouncer specifically other
than to put the info in auth_file. Have I missed a step?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Huge number of INSERTs

2011-11-20 Thread Tomas Vondra
Dne 20.11.2011 13:46, Phoenix Kiula napsal(a):
 On Sun, Nov 20, 2011 at 2:11 PM, Tomas Vondra t...@fuzzy.cz wrote:

 For a moment there, I unleashed the valve and allowed the INSERT
 functionality. The log was immediately flooded with this:
 
 
 LOG:  duration: 6851.054 ms  statement: select nextval('maintable_id_seq')
 LOG:  duration: 6848.266 ms  statement: select nextval('maintable_id_seq')
 LOG:  duration: 6846.672 ms  statement: select nextval('maintable_id_seq')
 LOG:  duration: 6853.451 ms  statement: select nextval('maintable_id_seq')
 LOG:  duration: 6991.966 ms  statement: select nextval('maintable_id_seq')
 LOG:  duration: 8244.315 ms  statement: select nextval('maintable_id_seq')
 LOG:  duration: 6991.071 ms  statement: select nextval('maintable_id_seq')
 LOG:  duration: 6990.043 ms  statement: select nextval('maintable_id_seq')
 LOG:  duration: 6988.483 ms  statement: select nextval('maintable_id_seq')
 LOG:  duration: 6986.793 ms  statement: select nextval('maintable_id_seq')
 LOG:  duration: 6985.802 ms  statement: select nextval('maintable_id_seq')
 ...
 
 I hope it's just because of too much load that even a simple query
 such as this was taking so much time?

Probably, unless you have a system with an infinite amount of CPU time.

According to the vmstat output you've posted, the CPU is 99% utilized
all the time.

I'm not sure about the I/O, because you haven't posted iostat output
with the INSERTs enabled. But from the information you've provided so
far I guess the main issue is the load and overall system overload.

Another sign of this is the rather high number of processes waiting in the
queue. So once again - decrease the number of connections and apache
clients to a reasonable number.

 Other queries taking too much time are also indexed queries!

And? Indexes are not magical fairy dust - when the system is as
overloaded as yours, even the least expensive operations are going to
take an insane amount of time.

And it's rather difficult to help you with queries unless you provide
us EXPLAIN ANALYZE output - I've already asked you for this twice.
Without that piece of information, we can't tell whether the queries are
slow because of a bad query plan or because of the load.
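For the record, capturing that output is as simple as prefixing the
statement in psql - the query below is only a placeholder, substitute one
of your actual slow statements:

    EXPLAIN ANALYZE SELECT * FROM maintable WHERE id = 12345;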

Tomas

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Tomas Vondra
Dne 20.11.2011 12:52, Phoenix Kiula napsal(a):
 You may be on to something. And the queries results are below. (5
 connections are reserved for superusers so you may be right.)
 
 
 MYDB=# show max_connections;
  max_connections
 -
  150
 (1 row)
 
 Time: 0.517 ms
 
 
 MYDB=#   select count(1) from pg_stat_activity;
  count
 ---
144
 (1 row)
 
 Time: 1.541 ms

The limit actually is not max_connections, as a certain number of
connections is reserved for superusers (maintenance etc.). It's
specified by superuser_reserved_connections - by default it's set to 3,
so there are only 147 connections available.
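You can check both values from psql, e.g.:

    SHOW superuser_reserved_connections;
    -- effective limit for ordinary users:
    SELECT current_setting('max_connections')::int
         - current_setting('superuser_reserved_connections')::int;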

 But isn't the point to connect to pgbouncer (instead of PG directly)
 and have it manage connections? Even when I restart PG so that its
 connection count is fresh and low, and immediately try to connect to
 pgbouncer, it still shows me an error.

Sure, but pgbouncer has to actually open regular connections to the
database - and when such a connection fails because max_connections has
been reached, pgbouncer can't do anything about it.

The point of connection pooler is that there'll be limited number of
pre-created connections, handed to clients. I see you have set

max_client_conn = 100
default_pool_size = 20

which means there will be at most 20 database connections, and 100
clients can connect to the pooler. Once all those 20 connections are
used, the other clients have to wait.

BTW max_client_conn = 100 means that at most 100 clients can connect
to the pooler; if there are more clients, the connection will fail with
the same error as when reaching max_connections. As you were getting
too many clients with max_connections=350, you should probably
significantly increase max_client_conn - e.g. to 400.
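I.e. something like this in pgbouncer.ini (400 is just a suggestion -
size it to match your apache limits):

    [pgbouncer]
    max_client_conn = 400
    default_pool_size = 20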

 How can I debug that the connections are the problem?

Check the postgresql log file?

 The error message in the pgbouncer log points to some FATAL password
 authentication.

Then it probably is not caused by reaching max_connections. But I'm not
sure about this - maybe pgbouncer returns this when the database reaches
max_connections.

Tomas

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Tomas Vondra
Dne 21.11.2011 01:39, Phoenix Kiula napsal(a):
 On Mon, Nov 21, 2011 at 4:13 AM, Adrian Klaver adrian.kla...@gmail.com 
 wrote:
 On Sunday, November 20, 2011 3:56:18 am Phoenix Kiula wrote:

 Any ideas?

 Just to add, the connection string I try for pgbouncer is EXACTLY the
 same as the one I use to connect directly to PG, but I add the port
 number.


 That may be the problem. The Postgres server and pgbouncer are not the same
 thing. Visual aids:

 Client  <-->  pgbouncer  <-->  Postgres server

 Client credentials    pgbouncer auth    Postgres auth

                       auth file         Pg pg_shadow
 
 
 
 Thanks for this.
 
 (1) Do I need to create a new user for Pgbouncer then?
 
 (2) What info goes in the auth_file -- the Pgbouncer user/password
 or the Postgres user/password?

Those users are completely different.

1) There's a user/password used to connect to the pgbouncer. This is the
user specified in the auth_file - how exactly it is interpreted depends
on the auth_type value. With trust, just the existence of the user name
is verified. With other auth types, the password is verified too.

So this works perfectly fine with auth_type=trust

"tomas" ""

and this works with auth_type=plain (with actual value of my password)

"tomas" "mypassword"

I could set auth_type=md5 and put the MD5 hash of mypassword there

"tomas" "34819d7beeabb9260a5c854bc85b3e44"


2) Once you're connected to the pgbouncer, it has to hand you a
database connection. This has nothing to do with the auth_file; the
username and password are encoded into the connection string (in the
[databases] section of the ini file).

[databases]
MYDB  = host=127.0.0.1 dbname=MYDB user=MYUSER password=MYPASSWORD
client_encoding=utf8 port=5432

 In any case, I have kept both the user name and passwords the same for
 now. But I have not created anything for Pgbouncer specifically other
 than to put the info in auth_file. Have I missed a step?

I'm really confused about what the current config is. Do you have password=
in the connection string (in the 'databases' section of the ini file)?

In a previous post I recommended using double quotes to enclose the
password - that does not work, sorry. You may use single quotes or
no quotes (if the password does not contain spaces etc.).

Tomas

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Tomas Vondra
Dne 20.11.2011 03:33, Amitabh Kant napsal(a):
 On Sun, Nov 20, 2011 at 4:14 AM, Phoenix Kiula phoenix.ki...@gmail.com

 
 Does the auth file (/var/lib/pgsql/pgbouncer.txt) contain valid login
 credentials to your database? If I remember correctly, it should have
 the username and password to your database.

No, it shouldn't. It should contain credentials for connecting to the
pgbouncer. The database credentials should go to the connection string
in '[databases]' section of your ini file.

Tomas

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] upgrading from 8.3 to 9.0

2011-11-20 Thread Robert Treat
You could also look into upgrading via pg_upgrade, if you don't want
to go through the dump/restore cycle. Even in that case, if you can, do
a test of pg_dump (one for schema, one for data) and make sure it
loads into the new db without any issues. 8.3 -> 9.0 is pretty
harmless (you might want to think about 9.1 instead, btw). Oh, it's
worth mentioning, you should really skim through the release notes and
make sure nothing in the incompatibilities applies to you; here's the
links to the docs:
http://www.postgresql.org/docs/9.1/interactive/release-9-0.html#AEN108545
http://www.postgresql.org/docs/9.1/interactive/release-8-4.html#AEN111313
http://www.postgresql.org/docs/9.1/interactive/release-8-3.html#AEN114593
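For reference, a typical pg_upgrade invocation looks something like this,
run as the postgres user with both clusters shut down (the paths are
placeholders - they depend on your packaging):

    pg_upgrade -b /usr/pgsql-8.3/bin -B /usr/pgsql-9.0/bin \
               -d /var/lib/pgsql/8.3/data -D /var/lib/pgsql/9.0/data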

Robert Treat
conjecture: xzilla.net
consulting: omniti.com

On Thu, Nov 17, 2011 at 8:14 PM, David Morton davidmor...@xtra.co.nz wrote:
 I've performed a very similar upgrade, including a postgis upgrade at the
 same time. We used the following command examples ... also put some simple
 scripting together to dump multiple databases in parallel as downtime was
 critical:
 Dump database data: pg_dump -Fc database --compress=1 > /mnt/dumps/database.dump
 Dump global data: pg_dumpall -g > /mnt/dumps/globals.sql
 Parse the global file and create a script to create new directory structure
 for table spaces etc (also changed paths to new mount points here)
 Run the global sql script: psql -f /mnt/dumps/globals.sql postgres
 Restore databases without GIS functionality: pg_restore -j 2 -C -d postgres
 /mnt/dumps/database.dump
 Restore databases with GIS functionality (upgrade of postgis version
 requires this): sh /tmp/postgis_restore.pl
 /usr/share/postgresql/contrib/postgis-1.5/postgis.sql database_user
 /mnt/dumps/gisdatabase.dump -E=UTF8
 Those were the basic essential steps ... there are other supporting things
 we did around the outside to streamline the transition; it all worked
 perfectly on the day.
 Best advice is that if it's more than a scratch environment, test test test!!
 
 From: Pedro Doria Meunier pdo...@netmadeira.com
 To: pgsql-general@postgresql.org
 Sent: Friday, 18 November 2011 12:40 AM
 Subject: [GENERAL] upgrading from 8.3 to 9.0

 Hi,

 I'm on the verge of upgrading a server (Fedora 8 ehehe) running postgresql
 8.3

 It also has postgis 1.3 installed.

 Thinking of using pgadmin3 to perform the backup and then restore it after
 I've upgraded the server to fedora 15/16 and thus upgrading postgresql to
 9.0.

 I seem to remember problems with restoring from a pgadmin's .backup file in
 the
 past... :S

 Any pitfalls I should be aware of?

 Btw: it's a reasonably large DB with 30mil+ rows...

 Already thankful for any insight,

 --
 Pedro Doria Meunier
 GSM: +351 91 581 88 23
 Skype: pdoriam




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Tomas Vondra
Dne 20.11.2011 04:15, Phoenix Kiula napsal(a):
 I just did some testing.
 
 If the password is wrong, then it shows me the authentication failed
 message right in the terminal window, immediately.
 
 If the password is correct (plain text or md5 of that plain text --
 both have similar requests), it shows me the second error no working
 connection below.

Because it's failing at different stages.

The first command fails because the pgbouncer verifies the password
against the auth_file, finds out it's incorrect and kicks you out.

The second command actually connects to pgbouncer (the password is
correct), attempts to open the connection to the database using the
connection string - AFAIK it's

MYDB  = host=127.0.0.1 dbname=MYDB user=MYUSER client_encoding=utf8
port=5432

and fails because there's no password or incorrect password.


You've used the same username and password both for the connection
pooler and for the database, so it's rather confusing.

 [host] ~  psql -h 127.0.0.1 MYDB -E MYDB_MYDB -p 6543
 psql: ERROR:  password authentication failed for user MYDB_MYDB
 [coco] ~  pico /var/lib/pgsql/pgbouncer.txt
 
 [host] ~   psql -h 127.0.0.1 MYDB -E MYDB_MYDB -p 6543
 psql: ERROR:  no working server connection
 
 
 But in the second case, the error in the pgbouncer log is the same --
 authentication is failing.

No, it's not. When authentication fails while connecting to pgbouncer,
the message is

   Pooler Error: Auth failed

but when the database authentication fails, the message is

   Pooler Error: password authentication failed for user ...

In the first case you have to check the auth_file, in the second you
need to check the connection string in pgbouncer.ini.

 Why this inconsistent and utterly inane behavior from pgbouncer? Why
 can't we see transparently what the error is?

It's telling you exactly what's going on. You're confused because
connection pooling is new to you and because you've decided to use the
same credentials both for the DB and for pgbouncer.

 Nowhere in the docs does it clearly specify with an example how the
 auth_file format should be.

Not sure which docs you are talking about, but the quick start in
doc/usage.txt shows an example of the file, and doc/config.txt (and the
man pages) state that the format is the same as pg_auth/pg_pwd.

Anyway, it's quite trivial - two strings: the first is the username, the
second is the password. The password is either plain or hashed (depending
on the auth_type).

Tomas

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Incremental backup with RSYNC or something?

2011-11-20 Thread Robert Treat
On Mon, Nov 14, 2011 at 12:45 AM, Venkat Balaji venkat.bal...@verse.in wrote:
 Question: what can I do to rsync only the new additions in every table
 starting 00:00:01 until 23:59:59 for each day?

 A table level replication (like Slony) should help here.
 Or
 A trigger based approach with dblink would be an-other (but, a bit complex)
 option.

If you don't actually care about the rows of data specifically, and
just want incremental data diff, you might look at what options your
filesystem gives you. We often use incremental snapshots on ZFS to
give use smaller copies that can be shipped off to the backup server
and used to reconstruct the server if needed.
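As a rough sketch (the pool, dataset and host names here are made up, and
you still need the usual care to make the snapshot consistent - e.g.
snapshot the whole data directory atomically, or wrap it in
pg_start_backup()/pg_stop_backup()):

    zfs snapshot tank/pgdata@20111120
    zfs send -i tank/pgdata@20111119 tank/pgdata@20111120 | \
        ssh backuphost zfs recv backup/pgdata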

Robert Treat
conjecture: xzilla.net
consulting: omniti.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Phoenix Kiula
On Mon, Nov 21, 2011 at 10:18 AM, Tomas Vondra t...@fuzzy.cz wrote:
 Dne 20.11.2011 03:33, Amitabh Kant napsal(a):
 On Sun, Nov 20, 2011 at 4:14 AM, Phoenix Kiula phoenix.ki...@gmail.com


 Does the auth file (/var/lib/pgsql/pgbouncer.txt) contain valid login
 credentials to your database? If I remember correctly, it should have
 the username and password to your database.

 No, it shouldn't. It should contain credentials for connecting to the
 pgbouncer. The database credentials should go to the connection string
 in '[databases]' section of your ini file.



Thanks Tomas and everyone.

I have the following passwords:

1. Pgbouncer.ini file

[databases]
MYDB  = host=127.0.0.1 dbname=MYDB user=MYDB_MYDB client_encoding=utf8
port=5432 password=='bypass'


2. In the auth_file (with auth_type set to md5)

auth_type = md5
auth_file = /var/lib/pgsql/pgbouncer.txt

Inside the auth_file:
"me" "an md5 string"


3. In the PHP file where I need to call with  pg_connect() function.
This is the postgresql database user as usual.

pg_connect("host=127.0.0.1 dbname=$db port=6432 user=$user password=$pass");



Questions:

a. For #2, the pgbouncer password, do I need to create this me user
somewhere, or just writing here in the auth_file is fine? I have not
created this user anywhere else yet. Just written the user name and
md5 of the password in the auth_file.

b. In the connection string in #3 above, I need to be mentioning the
pgbouncer user name, right? Will the password then be md5 as in
auth_file? Or nothing?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table Design question for gurus (without going to NoSQL)...

2011-11-20 Thread Phoenix Kiula
On Mon, Nov 21, 2011 at 7:26 AM, Gavin Flower
gavinflo...@archidevsys.co.nz wrote:

 How about having 2 indexes: one on each of ip & url_md5? Pg will combine the
 indexes as required, or will just use one if that is best.



Thanks Gavin. Question: what if I have a joined index? If from a
joined index I only use the first column (say, ip) will a joined
index still be used?

It is cleaner to create two indexes for the two columns. Which is recommended?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Tomas Vondra
Dne 21.11.2011 02:44, Phoenix Kiula napsal(a):
 Thanks Tomas and everyone.
 
 I have the following passwords:
 
 1. Pgbouncer.ini file
 
 [databases]
 MYDB  = host=127.0.0.1 dbname=MYDB user=MYDB_MYDB client_encoding=utf8
 port=5432 password=='bypass'
 
 
 2. In the auth_file (with auth_type set to md5)
 
 auth_type = md5
 auth_file = /var/lib/pgsql/pgbouncer.txt
 
 Inside the auth_file:
 "me" "an md5 string"
 
 
 3. In the PHP file where I need to call with  pg_connect() function.
 This is the postgresql database user as usual.
 
 pg_connect("host=127.0.0.1 dbname=$db port=6432 user=$user password=$pass");

I guess the $user is 'me' (as stated in pgbouncer.txt) and the password
corresponds to pgbouncer.txt (i.e. when MD5-hashed the result is equal
to the value in the file).

 Questions:
 
 a. For #2, the pgbouncer password, do I need to create this me user
 somewhere, or just writing here in the auth_file is fine? I have not
 created this user anywhere else yet. Just written the user name and
 md5 of the password in the auth_file.

No. The user is created by listing the username/password in the auth_file.

 b. In the connection string in #3 above, I need to be mentioning the
 pgbouncer user name, right? Will the password then be md5 as in
 auth_file? Or nothing?

You need to put the pgbouncer user name (as listed in the auth_file).
The password has to be the actual value, not the hash. Otherwise it'd be
equal to auth_type=plain.

Tomáš

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table Design question for gurus (without going to NoSQL)...

2011-11-20 Thread Ondrej Ivanič
Hi,

On 21 November 2011 00:33, Phoenix Kiula phoenix.ki...@gmail.com wrote:
 Hi.

 Want to start another thread, loosely related to the performance
 problems thread I have going.

 Need some DB design guidance from the gurus here.

 My big table now has about 70 million rows, with the following columns:

You can put different tables/indexes on different disks using
tablespaces. For example, one tablespace for all tables and another
one for all indexes (and change random_page_cost and seq_page_cost
appropriately, i.e. lower random_page_cost for the index tablespace).
It is a good idea to put pg_xlog on a separate drive too.
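A sketch of the idea (the paths and names are hypothetical):

    CREATE TABLESPACE fastidx LOCATION '/mnt/disk2/pg_idx';
    CREATE INDEX maintable_ip_idx ON maintable (ip) TABLESPACE fastidx;
    -- per-tablespace costs can be set since 9.0:
    ALTER TABLESPACE fastidx SET (random_page_cost = 2.0);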

Sometimes Postgres just can't utilise all available resources properly
and you can get results faster by running a query over multiple
connections. It could be worth investigating pgpool-II's parallel
query mode. You don't need multiple servers - just set up multiple PG
instances on the same physical machine (up to one PG instance per
core, with lower shared_mem, ...). An alternative could be a parallel
DB like Postgres-XC (http://wiki.postgresql.org/wiki/Postgres-XC), ...

-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Significant Digits in Floating Point Datatype

2011-11-20 Thread Lee Hachadoorian

On 11/20/2011 02:06 PM, Tom Lane wrote:

Lee Hachadoorian lee.hachadoor...@gmail.com writes:

And can I reliably retrieve the original integer by
casting to int (or bigint) if the number of digits in the original
integer is less than 15?

On IEEE-floating-point machines, I'd expect float8 to store integers
up to 2^52 (or maybe it's 2^53) exactly.  With other floating-point
formats the limit might be different, but it should still be exact for
reasonable-size integers.  This has nothing whatever to do with whether
decimal fractions are reproduced exactly (in general, they aren't, no
matter how many or few digits are involved).  So integers are fine,
bigints not so much.

regards, tom lane


Thank you, that clarification is extremely useful.  --Lee
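(For anyone curious, this is easy to check in psql - a quick sketch:

    SELECT 9007199254740992::float8::bigint;  -- 2^53, round-trips exactly
    SELECT 9007199254740993::float8::bigint;  -- 2^53 + 1, comes back as 2^53
)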

--
Lee Hachadoorian
PhD, Earth & Environmental Sciences (Geography)
Research Associate, CUNY Center for Urban Research
http://freecity.commons.gc.cuny.edu


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installed. Now what?

2011-11-20 Thread Phoenix Kiula
On Mon, Nov 21, 2011 at 10:54 AM, Tomas Vondra t...@fuzzy.cz wrote:
 Dne 21.11.2011 02:44, Phoenix Kiula napsal(a):
 Thanks Tomas and everyone.

 I have the following passwords:

 1. Pgbouncer.ini file

 [databases]
 MYDB  = host=127.0.0.1 dbname=MYDB user=MYDB_MYDB client_encoding=utf8
 port=5432 password=='bypass'


 2. In the auth_file (with auth_type set to md5)

 auth_type = md5
 auth_file = /var/lib/pgsql/pgbouncer.txt

 Inside the auth_file:
 me an md5 string


 3. In the PHP file where I need to call with  pg_connect() function.
 This is the postgresql database user as usual.

 pg_connect(host=127.0.0.1 dbname=$db port=6432 user=$user password=$pass);

 I guess the $user is 'me' (as stated in pgbouncer.txt) and the password
 corresponds to pgbouncer.txt (i.e. when MD5-hashed the result is equal
 to the value in the file).

 Questions:

 a. For #2, the pgbouncer password, do I need to create this me user
 somewhere, or just writing here in the auth_file is fine? I have not
 created this user anywhere else yet. Just written the user name and
 md5 of the password in the auth_file.

 No. The user is created by listing the username/password in the auth_file.

 b. In the connection string in #3 above, I need to be mentioning the
 pgbouncer user name, right? Will the password then be md5 as in
 auth_file? Or nothing?

 You need to put the pgbouncer user name (as listed in the auth_file).
 The password has to be the actual value, not the hash. Otherwise it'd be
 equal to auth_type=plain.





Very clear. So all the passwords are now correct.

Now, when I do service pgbouncer restart, it shows me FAILED.

I'm on CentOS 5, 64 bit. PG is 9.0.5.

The PG log has nothing about this.

The pgbouncer log has nothing either, just a huge list of:


2011-11-20 09:03:46.855 10854 LOG Stats: 0 req/s, in 0 b/s, out 0 b/s,query 0 us
2011-11-20 09:04:46.856 10854 LOG Stats: 0 req/s, in 0 b/s, out 0 b/s,query 0 us
2011-11-20 09:05:46.856 10854 LOG Stats: 0 req/s, in 0 b/s, out 0 b/s,query 0 us
2011-11-20 09:06:46.857 10854 LOG Stats: 0 req/s, in 0 b/s, out 0 b/s,query 0 us


Any ideas on how I can determine what's going wrong?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replacement for tgisconstraint? (Upgrade from 8.4 to 9.0.5)

2011-11-20 Thread Ken Tanzer
Just to follow up on this, I went with istgconstraint which seems to work
OK.
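(For reference, the two 9.0 spellings Tom suggested look like this - just a
sketch against pg_trigger, not necessarily what your view needs:

    SELECT tgname FROM pg_catalog.pg_trigger t
     WHERE t.tgconstraint = 0;     -- exact translation of NOT tgisconstraint

    SELECT tgname FROM pg_catalog.pg_trigger t
     WHERE NOT t.tgisinternal;     -- closer to "user-created triggers only"
)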

I didn't see any way to edit the generated restore files and then continue
the process.  It would be nice if there were an option to start (or stop)
at the time the files are written.

Although it ended up being quite simple, it took me a while to figure out
how to restore the old databases.  I couldn't tell if editing the files and
feeding them into psql would be a bad idea, since they were geared for
binary copy and had frozen xids and such.

So now I know it's just starting up the old 8.4 (included in
postgresql-upgrade), dumping the database, editing accordingly, shutting
down the old 8.4, starting the new one and restoring.  That info _can_ be
found elsewhere, but it would be really nice if it were documented
somewhere where half-frantic people are likely to find it.  Like in the
postgresql-upgrade package, perhaps under the heading "if you carelessly
screw up your system".

I ended up doing roughly this:

/usr/lib64/pgsql/postgresql-8.4/bin/pg_ctl -D /var/lib/pgsql.old start
pg_dumpall > old_dump.sql
(edit old_dump.sql)
/usr/lib64/pgsql/postgresql-8.4/bin/pg_ctl -D /var/lib/pgsql.old stop
service postgresql start
psql -f old_dump.sql postgres(?)

Hope this didn't belabor the obvious too badly!

Cheers,
Ken


On Sat, Nov 19, 2011 at 8:01 AM, Ken Tanzer ken.tan...@gmail.com wrote:

  Not being the author of that view, I confess some ignorance of pg
 internals, and just what the intended nuance was.

 As a little more explanation, the view is meant to list all the tables
 that have a trigger ending in _alert_notify, as created per this function:

 CREATE OR REPLACE FUNCTION alert_notify_enable(varchar,varchar) RETURNS
 boolean AS $$
 if {[info exists 1]} {
 set TABLE $1
 } else {
 elog ERROR "no table passed to alert_notify()"
 return false
 }
 if {[info exists 2]} {
 set CUSTOM_COLUMN $2
 } else {
 set CUSTOM_COLUMN ""
 }
 set cre_exec "CREATE TRIGGER ${TABLE}_alert_notify
 AFTER INSERT OR UPDATE OR DELETE ON ${TABLE}
 FOR EACH ROW EXECUTE PROCEDURE
 table_alert_notify(${CUSTOM_COLUMN})"
 spi_exec $cre_exec
 return true
 $$ LANGUAGE pltcl;

 (The second view, about table_logs, is conceptually similar).

 Here's the slightly more readable source for the view:


 CREATE OR REPLACE VIEW alert_notify_enabled_objects AS

 SELECT REPLACE(cc.relname,'tbl_','') AS alert_object_code,
 INITCAP(REPLACE(REPLACE(cc.relname,'tbl_',''),'_',' ')) AS
 description
 FROM pg_catalog.pg_trigger t
  LEFT JOIN pg_catalog.pg_class cc ON ( t.tgrelid = cc.oid )
 WHERE t.tgname ~ '_alert_notify$'
 AND (NOT tgisconstraint  OR NOT EXISTS
(SELECT 1 FROM pg_catalog.pg_depend d
  JOIN pg_catalog.pg_constraint c ON
 (d.refclassid = c.tableoid AND d.refobjid = c.oid)
 WHERE d.classid = t.tableoid AND d.objid = t.oid
 AND d.deptype = 'i' AND c.contype = 'f')
 );

 If that clarifies the intention, please let me know!  Also, what about
 question #2--is there an easy/built-in way to edit the
 pg_upgrade_dump_db.sql and continue the postgresql-upgrade process?

 Thanks!

 Ken


 On Sat, Nov 19, 2011 at 7:44 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Ken Tanzer ken.tan...@gmail.com writes:
  1)  Can anyone suggest equivalent PG9 replacement for those statements,
 or
  at least give me some hints?

 Per

 http://git.postgresql.org/gitweb/?p=postgresql.gita=commitdiffh=9a915e596

I also replaced the tgisconstraint column
with tgisinternal; the old meaning of tgisconstraint can now be
 had by
testing for nonzero tgconstraint, while there is no other way to
 get
the old meaning of nonzero tgconstraint, namely that the trigger
 was
internally generated rather than being user-created.

 It's not real clear to me whether your views actually want tgconstraint
 = 0, which would be the exact translation, or NOT tgisinternal, which
 might be a closer approximation to their intention.

regards, tom lane





Re: [GENERAL] How to install pgfincore with PG 9.1

2011-11-20 Thread Raghavendra
On Mon, Nov 21, 2011 at 1:06 AM, Cédric Villemain 
cedric.villemain.deb...@gmail.com wrote:

 2011/11/20 Raghavendra raghavendra@enterprisedb.com:
  Respected,
  I have tried installing pgfincore with PG 9.1(one-click installer). But
  facing below error.
  [root@localhost pgfincore-v1.1]# export
 PATH=/opt/PostgreSQL/9.1/bin:$PATH
  [root@localhost pgfincore-v1.1]# echo $PATH
 
 /opt/PostgreSQL/9.1/bin:/usr/lib/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
  [root@localhost pgfincore-v1.1]# which pg_config
  /opt/PostgreSQL/9.1/bin/pg_config
  [root@localhost pgfincore-v1.1]# make
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  cp pgfincore.sql pgfincore--.sql
  gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
  -Wdeclaration-after-statement -Wendif-labels -Wformat-security
  -fno-strict-aliasing -fwrapv -fpic -I. -I.
  -I/opt/PostgreSQL/9.1/include/postgresql/server
  -I/opt/PostgreSQL/9.1/include/postgresql/internal -D_GNU_SOURCE
  -I/usr/local/include/libxml2  -I/usr/local/include  -c -o pgfincore.o
  pgfincore.c
  gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
  -Wdeclaration-after-statement -Wendif-labels -Wformat-security
  -fno-strict-aliasing -fwrapv -fpic -L/opt/PostgreSQL/9.1/lib
  -L/usr/local/lib  -L/usr/local/lib
  -Wl,-rpath,'/opt/PostgreSQL/9.1/lib',--enable-new-dtags  -shared -o
  pgfincore.so pgfincore.o
  rm pgfincore.o
  [root@localhost pgfincore-v1.1]# make install
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  /bin/mkdir -p '/opt/PostgreSQL/9.1/share/postgresql/extension'
  /bin/mkdir -p '/opt/PostgreSQL/9.1/lib/postgresql'
  /bin/mkdir -p '/opt/PostgreSQL/9.1/doc/postgresql/extension'
  grep: /pgfincore.control: No such file or directory
  grep: /pgfincore.control: No such file or directory
  /bin/sh
 
 /opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh
  -c -m 644 ./pgfincore.control
  '/opt/PostgreSQL/9.1/share/postgresql/extension/'
  /bin/sh
 
 /opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh
  -c -m 644 ./pgfincore--unpackaged--.sql ./pgfincore--.sql
   '/opt/PostgreSQL/9.1/share/postgresql/extension/'
 
 /opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh:
  ./pgfincore--unpackaged--.sql does not exist.
  make: *** [install] Error 1
  Please assist me, thanks in advance.

 buggy Makefile. Thanks for the report.
 I'll fix that soon and keep you informed.
 --



Thank you Cédric Villemain.

grep: /pgfincore.control: No such file or directory
 cp pgfincore.sql pgfincore--.sql

 /opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh:
 ./pgfincore--unpackaged--.sql does not exist.
 make: *** [install] Error 1


On the above two errors, I tried one more time with a small change, which
worked well: I copied the existing files to the names that make was
looking for.

[root@localhost pgfincore-v1.1]# cp pgfincore--unpackaged--1.1.sql
pgfincore--unpackaged--.sql
[root@localhost pgfincore-v1.1]# make install
/bin/mkdir -p '/opt/PostgreSQL/9.1/share/postgresql/extension'
/bin/mkdir -p '/opt/PostgreSQL/9.1/lib/postgresql'
/bin/mkdir -p '/opt/PostgreSQL/9.1/doc/postgresql/extension'
grep: /pgfincore.control: No such file or directory
grep: /pgfincore.control: No such file or directory
/bin/sh
/opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh
-c -m 644 ./pgfincore.control
'/opt/PostgreSQL/9.1/share/postgresql/extension/'
/bin/sh
/opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh
-c -m 644 ./pgfincore--unpackaged--.sql ./pgfincore--.sql
 '/opt/PostgreSQL/9.1/share/postgresql/extension/'
/bin/sh
/opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh
-c -m 755  pgfincore.so '/opt/PostgreSQL/9.1/lib/postgresql/'
/bin/sh
/opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh
-c -m 644 ./README.rst '/opt/PostgreSQL/9.1/doc/postgresql/extension/'



-bash-4.1$ ./psql -p 5433
psql.bin (9.1.1)
Type "help" for help.

postgres=# create extension pgfincore;
ERROR:  could not stat file
"/opt/PostgreSQL/9.1/share/postgresql/extension/pgfincore--1.1.sql": No
such file or directory

I 

Re: [GENERAL] Table Design question for gurus (without going to NoSQL)...

2011-11-20 Thread David Johnston
On Nov 20, 2011, at 20:50, Phoenix Kiula phoenix.ki...@gmail.com wrote:

 On Mon, Nov 21, 2011 at 7:26 AM, Gavin Flower
 gavinflo...@archidevsys.co.nz wrote:
 
 How about having 2 indexes: one on each of ip & url_md5? Pg will combine the
 indexes as required, or will just use one if that is best.
 
 
 
 Thanks Gavin. Question: what if I have a joined index? If from a
 joined index I only use the first column (say, ip) will a joined
 index still be used?
 
 It is cleaner to create two indexes for the two columns. Which is recommended?
 

An index on (a, b) can be used for queries involving only a but not for those 
involving only b.
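To illustrate with throwaway names:

    CREATE INDEX t_a_b_idx ON t (a, b);

    SELECT * FROM t WHERE a = 1;            -- can use t_a_b_idx
    SELECT * FROM t WHERE b = 2;            -- cannot use it (no leading column)
    SELECT * FROM t WHERE a = 1 AND b = 2;  -- best case for the composite index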

Neither is recommended - both have their benefits and limitations.

David J.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.1.1 build failure : postgres link fails

2011-11-20 Thread Rob Sargentg

On 11/20/2011 11:05 AM, Tom Lane wrote:

Rob Sargent robjsarg...@gmail.com writes:

On 11/20/2011 09:24 AM, Tom Lane wrote:

It appears that on Ubuntu, libbsd defines those symbols, which confuses
configure into supposing that they're provided by libc, and then the
link fails because libbsd isn't actually linked into the postmaster.
The question is what's pulling in libbsd though.  In the previous report
it came via libedit, which you're not using.  I'd try looking in the
config.log file to see what it was linking in the test that decided
setproctitle was available, and then using ldd on each of those
libraries to see which one(s) require libbsd.

Will do.  Then there's always trying Ubuntu-11?

Couldn't say.  But re-reading this, I wonder if maybe you *are* using
libedit.  Have you got readline installed?  If not, configure will try
libedit as second choice ... so maybe the best fix is to install
readline (and don't forget readline-devel or local equivalent).

regards, tom lane
I built readline-6.2 from source and got past linking the 'postgres' 
executable.  Progress!  Now chasing down libperl


gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -Wformat-security 
-fno-strict-aliasing -fwrapv -fpic -shared -o plperl.so plperl.o SPI.o 
Util.o -L../../../src/port -Wl,--as-needed 
-Wl,-rpath,'/usr/lib/perl/5.10/CORE',--enable-new-dtags  
-fstack-protector -L/usr/local/lib  -L/usr/lib/perl/5.10/CORE -lperl 
-ldl -lm -lpthread -lc -lcrypt

/usr/bin/ld: cannot find -lperl
collect2: ld returned 1 exit status
make[3]: *** [plperl.so] Error 1
make[3]: Leaving directory `/home/rob/tools/postgresql-9.1.1/src/pl/plperl'
make[2]: *** [all-plperl-recurse] Error 2
make[2]: Leaving directory `/home/rob/tools/postgresql-9.1.1/src/pl'
make[1]: *** [all-pl-recurse] Error 2
make[1]: Leaving directory `/home/rob/tools/postgresql-9.1.1/src'
make: *** [all-src-recurse] Error 2

I had to make a symlink 'ln -s /usr/lib/libperl.so.5.10 
/usr/lib/libperl.so', which I thought was supposed to happen automagically...


BUT
make[1]: Leaving directory `/home/rob/tools/postgresql-9.1.1/config'
All of PostgreSQL successfully made. Ready to install.
PostgreSQL installation complete.

Thank you Tom for the readline tip.  Sort of thing that would take me 
forever to track down.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table Design question for gurus (without going to NoSQL)...

2011-11-20 Thread Gavin Flower

On 21/11/11 14:50, Phoenix Kiula wrote:

On Mon, Nov 21, 2011 at 7:26 AM, Gavin Flower
gavinflo...@archidevsys.co.nz  wrote:


How about having 2 indexes: one on each of ip & url_md5? Pg will combine the
indexes as required, or will just use one if that is best.



Thanks Gavin. Question: what if I have a joined index? If from a
joined index I only use the first column (say, ip) will a joined
index still be used?
It will use the joined index if the planner thinks it is worth it.  
However, it is harder for the planner to justify using the index for the 
second field when the query does not restrict on the first field (I am 
assuming it can, if required).




It is cleaner to create two indexes for the two columns. Which is recommended?
If you are frequently just using one or the other of the indexes, and/or 
could effectively use a joined index in both directions - then separate 
indexes will probably be better.


If your predominant query mode can just use the one joined index 
effectively, then that would be better.


Consider the amount of RAM the indexes and table data will take up.

The advantages of indexing 2 fields separately, compared to one joined 
index, are: if you only need either single field index, it will take 
up less RAM and also be quicker to read from disk.  Plus the 2 single 
field indexes can be used together for queries that use both fields.  
The costs are that when both indexes need to be used, there is a little 
bit more processing involved, and 2 single field indexes take up more 
RAM than a single joined index.


So the answer is 'it depends...'!




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general