Re: [GENERAL] FreeBSD 7 needing to allocate lots of shared memory

2008-11-17 Thread Christiaan Willemsen
I would still like to allocate a bit more than 1 GB of shared memory on 
FreeBSD. So if anyone has any pointers on what settings I need to make, 
please let me know.


Kind regards,

Christiaan Willemsen

Christiaan Willemsen wrote:

Yes, it is a 64 bit machine, I'm sure of that:

 file ../bin/postgres
../bin/postgres: ELF 64-bit LSB executable, x86-64, version 1 
(FreeBSD), for FreeBSD 7.0 (700055), dynamically linked (uses shared 
libs), FreeBSD-style, not stripped


I just set kern.ipc.shmall equal to kern.ipc.shmmax, still won't start 
with more than 1GB of shared memory.


And yes, you can let the OS do the caching, but as far as I 
understood, you either let the OS cache a lot, or you set 
shared_buffers large enough that large portions of tables can fit 
inside it, so it also acts as a cache. Since the latter has far less 
overhead, it is argued that this solution would be faster. But other 
people tend to say that this is not the case, and you should use a 
smaller shared_buffers size... I don't know yet, but I want to find 
out. Anyway, I should be able to set it to more than 1 GB...


Kind regards,

Christiaan Willemsen


Martijn van Oosterhout wrote:

On Fri, Nov 14, 2008 at 01:56:31PM +0100, Christiaan Willemsen wrote:
  
I'm trying to get PostgreSQL running on FreeBSD with 64 GB of memory. 
Postgres will be the only thing running, so I want it to use as much 
memory as needed.


So I increased shared_memory to 50GB, and also set:



Umm, do you have a specific reason for not allowing the OS to cache
your DB? Is it actually a 64-bit machine you're running on?

  

kern.ipc.shm_use_phys=1
kern.ipc.semmap=512
kern.ipc.shmall=65536
kern.ipc.shmmax=68719476736



That shmall limits you to at most 256MB, maybe you want to change that
also?
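
The 256 MB figure follows from kern.ipc.shmall being counted in pages rather than bytes; with the usual 4 kB page size, a quick check:

```shell
# kern.ipc.shmall is measured in pages, not bytes.
# 65536 pages x 4096 bytes/page = 256 MB.
shmall=65536
pagesize=4096
echo $(( shmall * pagesize / 1024 / 1024 ))   # prints 256 (MB)
```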

I'd suggest making shared buffers at most a few GB and use the rest for
caching.

Have a nice day,
  


[GENERAL] Problem with pg_restore into new database

2008-11-17 Thread Mr. John
Hi.
I use the latest version of PostgreSQL on Windows XP.
I try to restore a database from a .tar backup into a new database. Using EMS 
Manager it works fine: my database is created and restored successfully.
My application creates a .bat file and runs it, but the restore is unsuccessful.

The file is:

 @echo off
 pg_restore.exe -h 192.168.1.1 -p 5432 -U postgres -C -v stru.tar 2>log_resto.txt
 echo %errorlevel% >err_resto.txt

log file is :
pg_restore: creating DATABASE _struct_fis_
pg_restore: connecting to new database _struct_fis_
pg_restore: creating SCHEMA pos
pg_restore: creating SCHEMA public
pg_restore: creating COMMENT SCHEMA public
pg_restore: creating PROCEDURAL LANGUAGE plpgsql
pg_restore: creating TABLE SALI
pg_restore: creating TABLE comenzi
pg_restore: creating TABLE comenzi_desf
pg_restore: creating TABLE mese
pg_restore: creating TABLE operatori
pg_restore: creating TABLE societati
pg_restore: creating SEQUENCE comenzi_desf_idcomanda_seq
pg_restore: executing SEQUENCE SET comenzi_desf_idcomanda_seq
pg_restore: creating SEQUENCE comenzi_desf_poz_seq
pg_restore: creating SEQUENCE OWNED BY comenzi_desf_poz_seq
pg_restore: executing SEQUENCE SET comenzi_desf_poz_seq
pg_restore: creating SEQUENCE comenzi_id_masa_seq
pg_restore: executing SEQUENCE SET comenzi_id_masa_seq
pg_restore: creating SEQUENCE comenzi_id_sala_seq
pg_restore: executing SEQUENCE SET comenzi_id_sala_seq
pg_restore: creating SEQUENCE comenzi_id_seq
pg_restore: creating SEQUENCE OWNED BY comenzi_id_seq
pg_restore: executing SEQUENCE SET comenzi_id_seq
pg_restore: creating SEQUENCE mese_id_sali_seq
pg_restore: executing SEQUENCE SET mese_id_sali_seq
pg_restore: creating SEQUENCE mese_id_seq
pg_restore: executing SEQUENCE SET mese_id_seq
pg_restore: creating SEQUENCE operatori_id_seq
pg_restore: executing SEQUENCE SET operatori_id_seq
pg_restore: creating SEQUENCE sali_id_seq
pg_restore: executing SEQUENCE SET sali_id_seq
pg_restore: creating DEFAULT id
pg_restore: creating DEFAULT poz
pg_restore: restoring data for table SALI
pg_restore: restoring data for table comenzi
pg_restore: restoring data for table comenzi_desf
pg_restore: restoring data for table mese
pg_restore: restoring data for table operatori
pg_restore: restoring data for table societati
pg_restore: creating CONSTRAINT SALI_pkey
pg_restore: creating CONSTRAINT comenzi_desf_IdComanda_key
pg_restore: creating CONSTRAINT comenzi_desf_pkey
pg_restore: creating CONSTRAINT comenzi_id_masa_key
pg_restore: creating CONSTRAINT comenzi_id_sala_key
pg_restore: creating CONSTRAINT comenzi_pkey
pg_restore: creating CONSTRAINT mese_id_Sali_key
pg_restore: creating CONSTRAINT mese_pkey
pg_restore: creating CONSTRAINT operatori_pkey
pg_restore: creating CONSTRAINT societati_pkey
pg_restore: creating FK CONSTRAINT comenzi_desf_fk
pg_restore: creating FK CONSTRAINT comenzi_fk
pg_restore: creating FK CONSTRAINT comenzi_fk1
pg_restore: creating FK CONSTRAINT comenzi_fk2
pg_restore: creating FK CONSTRAINT mese_fk
pg_restore: setting owner and privileges for DATABASE _struct_fis_
pg_restore: setting owner and privileges for SCHEMA pos
pg_restore: setting owner and privileges for SCHEMA public
pg_restore: setting owner and privileges for COMMENT SCHEMA public
pg_restore: setting owner and privileges for ACL public
pg_restore: setting owner and privileges for PROCEDURAL LANGUAGE plpgsql
pg_restore: setting owner and privileges for TABLE SALI
pg_restore: setting owner and privileges for TABLE comenzi
pg_restore: setting owner and privileges for TABLE comenzi_desf
pg_restore: setting owner and privileges for TABLE mese
pg_restore: setting owner and privileges for TABLE operatori
pg_restore: setting owner and privileges for TABLE societati
pg_restore: setting owner and privileges for SEQUENCE comenzi_desf_idcomanda_seq
pg_restore: setting owner and privileges for SEQUENCE comenzi_desf_poz_seq
pg_restore: setting owner and privileges for SEQUENCE OWNED BY 
comenzi_desf_poz_seq
pg_restore: setting owner and privileges for SEQUENCE comenzi_id_masa_seq
pg_restore: setting owner and privileges for SEQUENCE comenzi_id_sala_seq
pg_restore: setting owner and privileges for SEQUENCE comenzi_id_seq
pg_restore: setting owner and privileges for SEQUENCE OWNED BY comenzi_id_seq
pg_restore: setting owner and privileges for SEQUENCE mese_id_sali_seq
pg_restore: setting owner and privileges for SEQUENCE mese_id_seq
pg_restore: setting owner and privileges for SEQUENCE operatori_id_seq
pg_restore: setting owner and privileges for SEQUENCE sali_id_seq
pg_restore: setting owner and privileges for DEFAULT id
pg_restore: setting owner and privileges for DEFAULT poz
pg_restore: setting owner and privileges for CONSTRAINT SALI_pkey
pg_restore: setting owner and privileges for CONSTRAINT 
comenzi_desf_IdComanda_key
pg_restore: setting owner and privileges for CONSTRAINT comenzi_desf_pkey
pg_restore: setting owner and privileges for CONSTRAINT comenzi_id_masa_key
pg_restore: setting owner and privileges for CONSTRAINT 

Re: [GENERAL] Problem with pg_restore into new database

2008-11-17 Thread Richard Huxton
Mr. John wrote:
 but on the server is no database called _struct_fis_
 err_resto.txt file contents only 0.
 
 If I add -d
 @echo off
  pg_restore.exe -h 192.168.1.1 -p 5432 -U postgres -d _struct_fis_ -C -v stru.tar 2>log_resto.txt
  echo %errorlevel% >err_resto.txt
 
 error is 1 and log file  :
 
pg_restore: connecting to database for restore
 pg_restore: [archiver (db)] connection to database _struct_fis_ failed: 
 FATAL:  database _struct_fis_ does not exist
 pg_restore: *** aborted because of error

I'm not sure I understand your problem fully, but this is because you
are trying to connect (-d ...) to database _struct_fis_ before it's
created. Connect to a database you know exists (template1 or postgres
for example) and then let the restore create the database.


-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] aggregates and case statements

2008-11-17 Thread Richard Huxton
Garry Saddington wrote:
 Is there a way to make the following statement return the total of all 
 effort. At the moment it gives a list of sum vs case.
 
 
 select sum(effort), 
 CASE WHEN effortandattainment.effort=5 THEN -3
 WHEN effortandattainment.effort=4 THEN -2
 WHEN effortandattainment.effort=3 THEN 1
 WHEN effortandattainment.effort=2 THEN 2
 WHEN effortandattainment.effort=1 THEN 3
 
END
 from effortandattainment

SELECT sum(effort) FROM effortandattainment;

But I'm guessing that's not what you mean. Can you show what output
you'd like to have?
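
One possibility (a guess at the intent, not confirmed by the poster): if the goal is a single total of the mapped values, the CASE can be moved inside the aggregate:

```sql
SELECT sum(CASE effortandattainment.effort
             WHEN 5 THEN -3
             WHEN 4 THEN -2
             WHEN 3 THEN 1
             WHEN 2 THEN 2
             WHEN 1 THEN 3
           END) AS total_effort
FROM effortandattainment;
```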

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Problem with pg_restore into new database

2008-11-17 Thread Mr. John
Thanks for the reply.

Why should I connect to an existing database to restore another one?
From the documentation, pg_restore seems to be able to do this:
http://www.postgresql.org/docs/8.0/interactive/app-pgrestore.html

-C
--create
Create the database before restoring into it. (When this option is used,
the database named with -d is used only to issue the initial CREATE
DATABASE command. All data is restored into the database name that
appears in the archive.)

In my first attempt there was no -d, no error, and ... NO database.

@echo off
 pg_restore.exe -h 192.168.1.1 -p 5432 -U postgres -C -v stru.tar 2>log_resto.txt
 echo %errorlevel% >err_resto.txt

log file is :
pg_restore: creating DATABASE _struct_fis_
pg_restore: connecting to new database _struct_fis_
pg_restore: creating SCHEMA pos
pg_restore: creating SCHEMA public
pg_restore: creating COMMENT SCHEMA public
pg_restore: creating PROCEDURAL LANGUAGE plpgsql
.







From: Richard Huxton [EMAIL PROTECTED]
To: Mr. John [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Monday, November 17, 2008 11:31:28 AM
Subject: Re: [GENERAL] Problem with pg_restore into new database

Mr. John wrote:
 but on the server is no database called _struct_fis_
 err_resto.txt file contents only 0.
 
 If I add -d
 @echo off
  pg_restore.exe -h 192.168.1.1 -p 5432 -U postgres -d _struct_fis_ -C -v stru.tar 2>log_resto.txt
  echo %errorlevel% >err_resto.txt
 
 error is 1 and log file  :
 
pg_restore: connecting to database for restore
 pg_restore: [archiver (db)] connection to database _struct_fis_ failed: 
 FATAL:  database _struct_fis_ does not exist
 pg_restore: *** aborted because of error

I'm not sure I understand your problem fully, but this is because you
are trying to connect (-d ...) to database _struct_fis_ before it's
created. Connect to a database you know exists (template1 or postgres
for example) and then let the restore create the database.


-- 
  Richard Huxton
  Archonet Ltd




  

Re: [GENERAL] disable/enable trigger and transaction

2008-11-17 Thread Grzegorz Jaśkiewicz
On Sun, Nov 16, 2008 at 12:39 PM, Peter Billen [EMAIL PROTECTED] wrote:

 Hi all,

 Is the instruction 'ALTER TABLE x ENABLE/DISABLE TRIGGER y' part of the
 current transaction? That is, say that there are two ongoing transactions,
 T1 and T2. Is the trigger y still enabled in transaction T2 if it's disabled
 in transaction T1?

yes, but this will lock table operations in other transactions that would
otherwise use that trigger. So be careful with it.
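
Concretely, the disable is only visible to other sessions after commit, and the ALTER holds an exclusive lock on the table until then. A sketch using the hypothetical names from the question:

```sql
BEGIN;
ALTER TABLE x DISABLE TRIGGER y;   -- takes an exclusive lock on x
-- ... bulk operations in T1 run without trigger y firing ...
ALTER TABLE x ENABLE TRIGGER y;
COMMIT;                            -- lock released; T2 and others proceed
```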


Re: [GENERAL] Problem with pg_restore into new database

2008-11-17 Thread Richard Huxton
Mr. John wrote:
 tanks for reply.
 
 Why should I connect to a existing database to restore another one ?

How do you issue a CREATE DATABASE command without being connected to
something?

 from documentation,pg_restore seems to can do this :
 http://www.postgresql.org/docs/8.0/interactive/app-pgrestore.html

Ideally, you don't want to be running 8.0 on Windows - it was the first
release on that platform and you'll find a lot of improvements if you
can set aside the time to upgrade to a more recent version. If you are
running 8.0, make sure it's 8.0.19 to get all the latest bug fixes.

 -C
 --create
 Create the database before restoring into it. (When this option is
 used, the database named with -d is used only to issue the initial
 CREATE DATABASE command. All data is restored into the database name
 that appears in the archive.) 

 in my first attempt was no -d,no error and
 ... NO database

If you're not connecting to a database, where do you think the output
goes? Hint - it can't be the database server since you're not connected
to a database.

 @echo off
  pg_restore.exe -h 192.168.1.1 -p 5432 -U postgres -C -v stru.tar 2>log_resto.txt
  echo %errorlevel% >err_resto.txt
 
 log file is :
 pg_restore: creating DATABASE _struct_fis_
 pg_restore: connecting to new database _struct_fis_
 pg_restore: creating SCHEMA pos
 pg_restore: creating SCHEMA public
 pg_restore: creating COMMENT SCHEMA public
 pg_restore: creating PROCEDURAL LANGUAGE plpgsql

At a guess, you can't see STDOUT while you're doing this, can you?

Your options are:

1. Create the database and restore directly into it with -d
2. Get pg_restore to create the database for you then switch to it,
while connecting with -d to an existing database.
3. Have pg_restore output to STDOUT and pipe that to psql with whatever
connection settings you want (which is just duplicating option 2).
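
Option 3 can be sketched as follows (host and archive name carried over from the earlier messages): without -d, pg_restore writes the restore script to stdout, and with -C that script begins with CREATE DATABASE and a reconnect.

```shell
pg_restore -C stru.tar | psql -h 192.168.1.1 -U postgres -d postgres
```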

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] FreeBSD 7 needing to allocate lots of shared memory

2008-11-17 Thread Isak Hansen
On Mon, Nov 17, 2008 at 12:03 PM, Isak Hansen [EMAIL PROTECTED] wrote:
 On Mon, Nov 17, 2008 at 9:51 AM, Christiaan Willemsen
 [EMAIL PROTECTED] wrote:
 I still would like to allocate a bit more than 1 Gb of shared_memory on
 FreeBSD. So if anyone has any pointers what settings I do need to make,
 please let me know.


 Did you already try changing shmall as Martijn pointed out, or did you
 miss that part of his reply?


Nm, my reading comprehension blows..



 Regards,
 Isak




Re: [GENERAL] FreeBSD 7 needing to allocate lots of shared memory

2008-11-17 Thread Christiaan Willemsen

Yes, I did ;)

Isak Hansen wrote:

On Mon, Nov 17, 2008 at 12:03 PM, Isak Hansen [EMAIL PROTECTED] wrote:
  

On Mon, Nov 17, 2008 at 9:51 AM, Christiaan Willemsen
[EMAIL PROTECTED] wrote:


I still would like to allocate a bit more than 1 Gb of shared_memory on
FreeBSD. So if anyone has any pointers what settings I do need to make,
please let me know.

  

Did you already try changing shmall as Martijn pointed out, or did you
miss that part of his reply?




Nm, my reading comprehension blows..


  

Regards,
Isak




  


Re: [GENERAL] Problem with pg_restore into new database

2008-11-17 Thread Mr. John
Thanks for the answer, Richard.
I'm new to PostgreSQL; my version is 8.3.
I read the pg_restore documentation, and what I understood is that using 
pg_restore.exe with -C will create the database named in the backup archive, or 
the one specified with -d.
How exactly do I have to run pg_restore to create my database and then restore 
into it, as in your suggestion no. 2?
(2. Get pg_restore to create the database for you then switch to it,
while connecting with -d to an existing database.)

Thanks.





From: Richard Huxton [EMAIL PROTECTED]
To: Mr. John [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Monday, November 17, 2008 12:08:00 PM
Subject: Re: [GENERAL] Problem with pg_restore into new database

Mr. John wrote:
 tanks for reply.
 
 Why should I connect to a existing database to restore another one ?

How do you issue a CREATE DATABASE command without being connected to
something?

 from documentation,pg_restore seems to can do this :
 http://www.postgresql.org/docs/8.0/interactive/app-pgrestore.html

Ideally, you don't want to be running 8.0 on Windows - it was the first
release on that platform and you'll find a lot of improvements if you
can set aside the time to upgrade to a more recent version. If you are
running 8.0, make sure it's 8.0.19 to get all the latest bug fixes.

 -C
 --create
 Create the database before restoring into it. (When this option is
 used, the database named with -d is used only to issue the initial
 CREATE DATABASE command. All data is restored into the database name
 that appears in the archive.) 

 in my first attempt was no -d,no error and
 ... NO database

If you're not connecting to a database, where do you think the output
goes? Hint - it can't be the database server since you're not connected
to a database.

 @echo off
  pg_restore.exe -h 192.168.1.1 -p 5432 -U postgres -C -v stru.tar 2>log_resto.txt
  echo %errorlevel% >err_resto.txt
 
 log file is :
 pg_restore: creating DATABASE _struct_fis_
 pg_restore: connecting to new database _struct_fis_
 pg_restore: creating SCHEMA pos
 pg_restore: creating SCHEMA public
 pg_restore: creating COMMENT SCHEMA public
 pg_restore: creating PROCEDURAL LANGUAGE plpgsql

At a guess, you can't see STDOUT while you're doing this, can you?

Your options are:

1. Create the database and restore directly into it with -d
2. Get pg_restore to create the database for you then switch to it,
while connecting with -d to an existing database.
3. Have pg_restore output to STDOUT and pipe that to psql with whatever
connection settings you want (which is just duplicating option 2).

-- 
  Richard Huxton
  Archonet Ltd



  

Re: [GENERAL] FreeBSD 7 needing to allocate lots of shared memory

2008-11-17 Thread Isak Hansen
On Mon, Nov 17, 2008 at 9:51 AM, Christiaan Willemsen
[EMAIL PROTECTED] wrote:
 I still would like to allocate a bit more than 1 Gb of shared_memory on
 FreeBSD. So if anyone has any pointers what settings I do need to make,
 please let me know.


Did you already try changing shmall as Martijn pointed out, or did you
miss that part of his reply?


Regards,
Isak



Re: [GENERAL] Problem with pg_restore into new database

2008-11-17 Thread Raymond O'Donnell
On 17/11/2008 11:26, Mr. John wrote:
 I read pg_restore documentation and what I understood is that  using 
 pg_restore.exe with -C will create  the database named in backup archive
 or specified with -d

You have to connect to *some* database, as Postgres doesn't have the
concept of just connecting to the server without connecting to a database.

Postgres has a maintenance database called "postgres", which is empty,
and it's normal to use this for initial connections if you haven't
already got a specific database to work with. In psql, there's a \c
command which changes the database to which you're connected, and you'll
see this command used in the output from pg_dump to connect to the
database being dumped.

HTH,

Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



[GENERAL] How to reduce impact of a query.

2008-11-17 Thread Howard Cole

Hi,

I am running multiple 8.2 databases on a not-so-powerful W2K3 server - 
and it runs great - for the majority of time. However I have some 
monster tsearch queries which take a lot of processing and hog system 
resources - especially disk.


I am not concerned with the amount of time or speed of the tsearch2 
query, which is doing a lot of work, all I need to do is make sure that 
the query does not impact all other databases and queries running on the 
same machine.


The machine in question is a 1 GB RAM, AMD 64 with RAID 1 SATA disks. 
Non-standard parts of my postgresql.conf are as follows:

max_connections=100
shared_buffers=128MB
work_mem=4MB
maintenance_work_mem=256MB
max_fsm_pages=204800
max_fsm_relations=1500

Any tips appreciated.

Howard.
www.selestial.com





Re: [GENERAL] Problem with pg_restore into new database

2008-11-17 Thread Mr. John
Thanks, Raymond.

pg_restore.exe -h 192.168.1.1 -p 5432 -U postgres -d postgres -C stru.tar 2>log_resto.txt
echo %errorlevel% >err_resto.txt

created and restored my database with the name saved in the backup.
Thanks a lot.



From: Raymond O'Donnell [EMAIL PROTECTED]
To: Mr. John [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Monday, November 17, 2008 1:49:17 PM
Subject: Re: [GENERAL] Problem with pg_restore into new database

On 17/11/2008 11:26, Mr. John wrote:
 I read pg_restore documentation and what I understood is that  using 
 pg_restore.exe with -C will create  the database named in backup archive
 or specified with -d

You have to connect to *some* database, as Postgres doesn't have the
concept of just connecting to the server without connecting to a database.

Postgres has a maintenance database called postgres which is empty,
and it's normal to use this for initial connections if you haven't
already got a specific database to work with. In psql, there's a \c
command with changes the database to which you're connected, and you'll
see this command used in the output from pg_dump to connect to the
database being dumped.

HTH,

Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--




  

Re: [GENERAL] How to reduce impact of a query.

2008-11-17 Thread Isak Hansen
On Mon, Nov 17, 2008 at 1:15 PM, Howard Cole [EMAIL PROTECTED] wrote:
 Hi,

 I am running multiple 8.2 databases on a not-so-powerful W2K3 server - and
 it runs great - for the majority of time. However I have some monster
 tsearch queries which take a lot of processing and hog system resources -
 especially disk.

 I am not concerned with the amount of time or speed of the tsearch2 query,
 which is doing a lot of work, all I need to do is make sure that the query
 does not impact all other databases and queries running on the same machine.

 The machine in question is a 1GB Ram, AMD 64 with Raid 1 Sata disks. Non
 standard parts of my postgresql.conf are as follows:
 max_connections=100
 shared_buffers=128MB
 work_mem=4MB
 maintenance_work_mem=256MB
 max_fsm_pages=204800
 max_fsm_relations=1500

 Any tips appreciated.

Not directly related to your question, but if effective_cache_size
hasn't been changed from the default, adjusting it could improve
overall performance a fair bit.
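
As a sketch for a 1 GB machine (the right value depends on what else competes for RAM; roughly half to three-quarters of physical memory is a common starting point):

```
# postgresql.conf -- hypothetical value for a 1 GB server
effective_cache_size = 512MB   # planner estimate only; allocates no memory
```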


Regards,
Isak



Re: [GENERAL] FreeBSD 7 needing to allocate lots of shared memory

2008-11-17 Thread Isak Hansen
On Mon, Nov 17, 2008 at 12:11 PM, Christiaan Willemsen
[EMAIL PROTECTED] wrote:
 Yes, I did ;)


Still the same error, i.e. unable to allocate shared memory? What does
sysctl -a | grep shm say?


Isak



Re: [GENERAL] FreeBSD 7 needing to allocate lots of shared memory

2008-11-17 Thread Christiaan Willemsen

#sysctl -a | grep shm
kern.ipc.shm_allow_removed: 0
kern.ipc.shm_use_phys: 1
kern.ipc.shmall: 68719476736
kern.ipc.shmseg: 128
kern.ipc.shmmni: 192
kern.ipc.shmmin: 1
kern.ipc.shmmax: 68719476736
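
Note that kern.ipc.shmall is counted in pages while kern.ipc.shmmax is in bytes, so setting both to the same number makes shmall far larger than intended (harmless as an upper bound, but worth knowing). A consistent pair for a 64 GB shmmax with 4 kB pages would be:

```shell
# shmmax is in bytes, shmall in pages: shmall = shmmax / page size
shmmax=68719476736              # 64 GB
pagesize=4096
echo $(( shmmax / pagesize ))   # prints 16777216 (pages)
```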

Isak Hansen wrote:

On Mon, Nov 17, 2008 at 12:11 PM, Christiaan Willemsen
[EMAIL PROTECTED] wrote:
  

Yes, I did ;)




Still the same error, i.e. unable to allocate shared memory? What does
sysctl -a | grep shm say?


Isak

  


Re: [GENERAL] How to reduce impact of a query.

2008-11-17 Thread Teodor Sigaev
The machine in question is a 1GB Ram, AMD 64 with Raid 1 Sata disks. Non 
standard parts of my postgresql.conf are as follows:

max_connections=100
shared_buffers=128MB
work_mem=4MB
maintenance_work_mem=256MB
max_fsm_pages=204800
max_fsm_relations=1500

Any tips appreciated.


Pls, show
1) effective_cache_size
2) The query
3) Output of EXPLAIN ANALYZE of query

--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/



Re: [GENERAL] aggregates and case statements

2008-11-17 Thread Richard Huxton
[EMAIL PROTECTED] wrote:
 Quoting Richard Huxton [EMAIL PROTECTED]:
 But I'm guessing that's not what you mean. Can you show what output
 you'd like to have?
 
 effort sum
  1 245
  2 463
 etc.
 
 Each value for effort has a different statistical meaning as in the case
 statement.

Sorry - still don't understand. The query you originally gave should
have provided the sum for each case (which you seem to be calling
"effort" here). I don't see where the problem is.

-- 
  Richard Huxton
  Archonet Ltd



List Ettiquette (perhaps Off Topic by now) was: Re: [GENERAL] Database access over the Internet...

2008-11-17 Thread charlie derr

Steve Atkins wrote:


On Nov 16, 2008, at 2:41 AM, Michelle Konzack wrote:


Am 2008-11-15 09:53:15, schrieb Scott Marlowe:

What's mess up is that the solution given the user DOES work.  She
just refuses to try it, because she assumes that the mailing list
server doesn't see the exact same CC list as her email server.  Well,
it does, and it then drops the message outbound for her so she ONLY
gets it from the original sender.


You did not understand the problem





My mailbox, which I use for my business, is bombed by over 50,000 spams
per day and I already do heavy filtering.  I have to read my mail while
I am at work, which means MOBILE, using my cellphone connected to my laptop.

I read the list over another channel AND already filter my own
threads (and convert them into a form which is much smaller than e-mails)
to get rid of the rest from the list, since I am subscribed to over 120
mailing lists and need less than 3% of it...

So people now sending me PMs does not help, since it DoSes my system and
makes it harder for me to work, since I have to walk through messages
I do not need because I have already seen them.


Well, go away then.

If you are too cheap to receive emails in answer to your questions,
stop asking questions.

This is not a support channel you should consider using until you
learn enough courtesy and basic competence with email to interact
in public.

Cheers,
  Steve


I don't think that what Michelle is asking is unreasonable.  While certainly not a universal standard, a significant subset of 
mailing lists definitely support the view that personal replies are unnecessary (and many lists forbid/discourage them).


I also think (based on what I've read from Michelle here and on other lists) that it'd be a drag to lose the participation of 
someone who has often contributed interesting viewpoints/information to technical discussions.


In my view, anything that causes people to check more closely about exactly 
where a reply is being sent is a good thing.

be well,
~c



[GENERAL] ERROR

2008-11-17 Thread Gustavo Rosso
I created a db with user postgres, but other users cannot create 
tables, even though I granted all privileges.


(banco is my db)

grant all on database banco to public;
grant create on database banco to public;

This is the error:
*ERROR:  must be owner of relation (table)*
Help me!!!



Re: List Ettiquette (perhaps Off Topic by now) was: Re: [GENERAL] Database access over the Internet...

2008-11-17 Thread Scott Marlowe
On Mon, Nov 17, 2008 at 6:46 AM, charlie derr [EMAIL PROTECTED] wrote:
 Steve Atkins wrote:

 On Nov 16, 2008, at 2:41 AM, Michelle Konzack wrote:

 Am 2008-11-15 09:53:15, schrieb Scott Marlowe:

 What's mess up is that the solution given the user DOES work.  She
 just refuses to try it, because she assumes that the mailing list
 server doesn't see the exact same CC list as her email server.  Well,
 it does, and it then drops the message outbound for her so she ONLY
 gets it from the original sender.

 You did not understand the problem



 My Mailbox which I use for my business is bombed by  over  50.000  spams
 per day and I do already heavy filtering.  I have to read my mail  while
 I am @work which mean, MOBILE using my cellphone connected to my Laptop.

 Since I read the List ove another channel AND already filtering  my  own
 threads (and convert it into a form which is much smaller then  E-Mails)
 to get rid of the rest from the list since I am subscribed to  over  120
 mailinglists and need less then 3% from it...

 So, peoples now sending me PMs does not help, since it DoS my system and
 make it harder for me to work since I have to walk through this messages
 I do not need because I have it already seen.

 Well, go away then.

 If you are too cheap to receive emails in answer to your questions,
 stop asking questions.

 This is not a support channel you should consider using until you
 learn enough courtesy and basic competence with email to interact
 in public.

 Cheers,
  Steve

 I don't think that what Michelle is asking is unreasonable.   While
 certainly not a universal standard a significant subset of mailing lists
 definitely support the view that personal replies are unneccessary (and many
 lists forbid/discourage them).

This one does not.  On this list, you reply-all to list and user.
That's how it's set up.  And there's a VERY easy-to-configure option to
stop the mailing list server from sending you a CC when someone has
replied to both you and the list.  I am not hitting reply-all and then
editing the outbound list every time Michelle gets on the list.  Not
when she's got a very simple solution she's refused to even try.  I've
seen no confirmation from her that she's tried the solution and it
didn't work.  I have been shouted at a lot for not understanding and
terrorizing her.

 I also think (based on what I've read from Michelle here and on other lists)
 that it'd be a drag to lose the participation of someone who has often
 contributed interesting viewpoints/information to technical discussions.

It would.  It's also a shame they can't even try the solution provided
and then report back whether or not it worked.

 In my view, anything that causes people to check more closely about exactly
 where a reply is being sent is a good thing.

Really?  Really?  I spend a fair bit of time on these lists helping
people out.  If I have to examine my reply-tos every time, I'm done.  I
hit reply-to-all and type, knowing that my reply goes where it's
supposed to.  I have better things to do than argue this point over
and over, and so do a lot of other very helpful folks on this list.
There is an OPTION that fixes the CC problem for people with brain-dead
email clients that can't eliminate the duplicate for them.  It's
in the FAQ.  I think we've done enough for those people.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Res: Res: [GENERAL] Archive files growth!!!

2008-11-17 Thread paulo matadr
The result of your query:

 relname       reltuples  relpages  size_kb
 cliente_fone     341130      3345   26760k



From: Craig Ringer [EMAIL PROTECTED]
To: paulo matadr [EMAIL PROTECTED]
Cc: GENERAL pgsql-general@postgresql.org
Sent: Saturday, 15 November 2008 5:53:12
Subject: Re: Res: [GENERAL] Archive files growth!!!

paulo matadr wrote:
 
 Let me see: for example, one insert of the type (INSERT INTO table2
 SELECT * FROM table1), where the table has a size of 26 MB, makes
 226 MB of archive files.
 (I made a test with the wal_buffers parameter before this: with the
 default value of 64 kB the same command made 640 MB of archives; after
 changing it to 1024 kB I got 226 MB.)
 However, it is still confusing to me
 how 26 MB of inserts generates 226 MB of archives.

OK, that's interesting, since those should be full WAL archives, so your
problem isn't to do with unused WAL tails.

When you say that the table is 26 MB, how did you measure that? Is that
the size of a dump of the table using pg_dump ? If so, that is not the
real size of the table as it is actually stored by PostgreSQL.

The best way to find the size of the table is by asking PostgreSQL:

SELECT relname, reltuples, relpages, relpages*8 AS size_kb
FROM pg_class WHERE relname = 'tablename' ORDER BY relpages DESC ;

(I'm assuming that your database uses the default 8kb page size).

This will be quite a bit bigger than the size of the dump.

I am also fairly sure that the WAL contains a record of what is done to
the indexes as well as to the tables. If you have lots of indexes,
especially multi-column indexes, on the table you are interested in then
the WAL files generated by an INSERT will be a lot bigger than the
amount of data inserted.

It might help if you could upload some example data and SQL somewhere.
Right now it is not possible to see what you are doing, so there is a
lot of guesswork involved.
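For completeness, a hedged sketch: on 8.1 and later the size functions also count indexes and TOAST data, which relpages alone does not (the table name is carried over from the earlier query output and is illustrative):

```sql
-- pg_total_relation_size counts heap + indexes + TOAST;
-- pg_relation_size is the heap alone.
SELECT pg_size_pretty(pg_total_relation_size('cliente_fone')) AS total_size,
       pg_size_pretty(pg_relation_size('cliente_fone'))       AS heap_size;
```

Comparing the two columns gives a rough idea of how much of the WAL volume may be attributable to index maintenance.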

--
Craig Ringer




Re: [GENERAL] ERROR

2008-11-17 Thread Gustavo Rosso

Serge:
OS: Linux; my distro is Debian.
I made no changes to the configuration.
Installation:
download the tar file
unpack it
cd /pghead
./configure
 make
 su
 make install
 adduser postgres
 mkdir /usr/local/pgsql/data
 chown postgres /usr/local/pgsql/data
 su postgres
 /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
 /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data >logfile 2>&1 &

and the Informix patch:
wget http://informix.postgresintl.com/postgresql020705.tar.gz


Serge Fonville wrote:

A little more information would be useful

What OS are you running
What changes have you made to the configuration
What steps have you taken to install PostgreSQL

Regards,

Serge Fonville

On Mon, Nov 17, 2008 at 4:08 PM, Gustavo Rosso [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


I created a db with user postgres, but other users cannot create
tables, even though I granted all privileges.

(banco is my db)

grant all on database banco to public;
grant create on database banco to public;

This is the error:
*ERROR:  must be owner of relation (table)*
Help me!!!







Re: [GENERAL] ERROR

2008-11-17 Thread Lennin Caro
 From: Gustavo Rosso [EMAIL PROTECTED]
 Subject: [GENERAL] ERROR
 To: pgsql-general@postgresql.org
 Date: Monday, November 17, 2008, 3:08 PM
 I created a db with user postgres, but other users cannot
 create tables, even though I granted all privileges.
 
 (banco is my db)
 
 grant all on database banco to public;
 grant create on database banco to public;
 
 This is the error:
 *ERROR:  must be owner of relation (table)*
 Help me!!!
 

alter table table_name owner to public

the public role must exist


  




Re: [GENERAL] How to reduce impact of a query.

2008-11-17 Thread Howard Cole

Teodor Sigaev wrote:
The machine in question is a 1GB Ram, AMD 64 with Raid 1 Sata disks. 
Non standard parts of my postgresql.conf are as follows:

max_connections=100
shared_buffers=128MB
work_mem=4MB
maintenance_work_mem=256MB
max_fsm_pages=204800
max_fsm_relations=1500

Any tips appreciated.


Pls, show
1) effective_cache_size
2) The query
3) Output of EXPLAIN ANALYZE of query


effective_cache_size is set at 128MB (the default).

A simplified version of the query is as follows

select email_id from email where to_tsquery('default','hannah') @@ fts;

Bitmap Heap Scan on email  (cost=12.50..80.25 rows=18 width=8) (actual 
time=9073.878..39371.665 rows=6535 loops=1)
  Filter: ('''hannah'''::tsquery @@ fts)
  ->  Bitmap Index Scan on email_fts_index  (cost=0.00..12.49 rows=18 
width=0) (actual time=9023.036..9023.036 rows=6696 loops=1)
        Index Cond: ('''hannah'''::tsquery @@ fts)
Total runtime: 39375.892 ms

The time that this query takes is not the issue, rather it is the impact 
that it has on the server - effectively killing it for the 40 seconds 
due to the heavy disk access.






Re: [GENERAL] How to reduce impact of a query.

2008-11-17 Thread Scott Marlowe
On Mon, Nov 17, 2008 at 8:17 AM, Howard Cole [EMAIL PROTECTED] wrote:
 Teodor Sigaev wrote:

 The machine in question is a 1GB Ram, AMD 64 with Raid 1 Sata disks. Non

Your entire disk io subsystem is a pair of hard drives.  I'm assuming
software RAID.

 The time that this query takes is not the issue, rather it is the impact
 that it has on the server - effectively killing it for the 40 seconds due to
 the heavy disk access.

You either need to invest more into your drive subsystem so it can
handle parallel load better, or you need to create a slave db with
slony or londiste so that the ugly queries hit the slave.



Re: [GENERAL] How to reduce impact of a query.

2008-11-17 Thread Howard Cole

Scott Marlowe wrote:

Your entire disk io subsystem is a pair of hard drives.  I'm assuming
software RAID.

Correct.

The time that this query takes is not the issue, rather it is the impact
that it has on the server - effectively killing it for the 40 seconds due to
the heavy disk access.



You either need to invest more into your drive subsystem so it can
handle parallel load better, or you need to create a slave db with
slony or londiste so that the ugly queries hit the slave.


I take your point Scott. But short of the hardware upgrade, is there a 
way to effectively throttle this query? If I reduce maintenance_work_mem 
then the database dump/restore is slower but there is less overall 
impact on the server. Is there some equivalent parameter on the server 
to throttle general queries? It would be unfortunate if all queries 
slowed down a bit, but a better outcome than having the entire server 
hang for 40 seconds.




Re: [GENERAL] How to reduce impact of a query.

2008-11-17 Thread Scott Marlowe
On Mon, Nov 17, 2008 at 8:42 AM, Howard Cole [EMAIL PROTECTED] wrote:
 Scott Marlowe wrote:

 Your entire disk io subsystem is a pair of hard drives.  I'm assuming
 software RAID.

 Correct.

 The time that this query takes is not the issue, rather it is the impact
 that it has on the server - effectively killing it for the 40 seconds due
 to
 the heavy disk access.


 You either need to invest more into your drive subsystem so it can
 handle parallel load better, or you need to create a slave db with
 slony or londiste so that the ugly queries hit the slave.

 

 I take your point Scott. But short of the hardware upgrade, is there a way
 to effectively throttle this query? If I reduce maintenance_work_mem then
 the database dump/restore is slower but there is less overall impact on the
 server. Is there some equivalent parameter on the server to throttle general
 queries? It would be unfortunate if all queries slowed down a bit, but a
 better outcome than having the entire server hang for 40 seconds.

The problem is most likely you're I/O bound.  If one query is hitting
a table it can pull in data (sequentially) at 40 to 80 megabytes per
second.  Since most of your queries are small, they don't run into
each other a lot, so to speak. As soon as your big reporting query
hits it's likely hitting the drives much longer and getting in the way
of all the other queries.

You could add more ram, that should help since the kernel could then
fit much more of your data into ram and not be as dependent on your
drive subsystem.  Memory is cheap, even FBDIMMS are pretty inexpensive
nowadays for 4 to 8 gigs of ram in a server.

While you can nice the backend process with some kind of script it
doesn't generally affect io priority.  Some oses do allow process
priority to dictate io priority, but I'm pretty sure linux doesn't.
It still might help a little, but right now you need to either add
enough ram for the kernel to cache the majority of your data set or
get more drives.



Re: [GENERAL] How to reduce impact of a query.

2008-11-17 Thread Howard Cole

Scott Marlowe wrote:

The problem is most likely you're I/O bound.  If one query is hitting
a table it can pull in data (sequentially) at 40 to 80 megabytes per
second.  Since most of your queries are small, they don't run into
each other a lot, so to speak. As soon as your big reporting query
hits it's likely hitting the drives much longer and getting in the way
of all the other queries.

  
Thanks for the input, Scott. You are correct - I am IO bound, but only 
for the query described. 99% of the time my IO runs at 3% or less, even 
during peak times; only this one query, which happens approximately 10 
times a day, grinds the system to a halt.


I am considering running this query in the background every couple of 
minutes or so to force the tables/indexes into cache. Once the tables 
are cached the query runs very quickly and there is no significant IO 
impact. This is a bodge but hopefully should work.








Re: [GENERAL] How to reduce impact of a query.

2008-11-17 Thread Scott Marlowe
On Mon, Nov 17, 2008 at 9:36 AM, Howard Cole [EMAIL PROTECTED] wrote:
 Scott Marlowe wrote:

 The problem is most likely you're I/O bound.  If one query is hitting
 a table it can pull in data (sequentially) at 40 to 80 megabytes per
 second.  Since most of your queries are small, they don't run into
 each other a lot, so to speak. As soon as your big reporting query
 hits it's likely hitting the drives much longer and getting in the way
 of all the other queries.



 Thanks for the input Scott. You are correct - I am IO bound, but only for
 the query described. 99% of the time, my IO runs at 3% or less, even during
 peak times, only this one query, which happens approximately 10 times a day
 grinds the system to a halt.

 I am considering running this query in the background every couple of
 minutes or so to force the tables/indexes into cache. Once the tables are
 cached the query runs very quickly and there is no significant IO
 impact.This is a bodge but hopefully should work.

Simpler to probably just do:

select count(*) from sometable;

to force it into cache.  Buy some more memory and it'll likely stay in
memory without such tricks.

Best of luck on this.



Re: [GENERAL] How to reduce impact of a query.

2008-11-17 Thread Howard Cole

Scott Marlowe wrote:

Best of luck on this.
  

Thanks Scott.



[GENERAL] pgsql-general@postgresql.org

2008-11-17 Thread Tomas Lanczos
I am using Postgresql to store all my research related data. At the 
moment I am just finishing my PhD thesis and I want to cite postgresql 
correctly but can't find how to do it. Could somebody give me an advice?


Many thanks

tomas



[GENERAL] Fwd: Performance Tuning

2008-11-17 Thread John Zhang
Hi the list,

I have a performance problem and would like any input on how to make the
database perform as desired. In the DB, there are a few tables with over
3 million records in PostGIS. When I do some operation on the tables, e.g.
CREATE INDEX, it takes hours without results. I believe there must be
something wrong within the DB; however, I have not figured it out. Could
anyone point me in the right direction to tune and configure the database
efficiently?

Thanks in advance
John


[GENERAL] compiling libpq.dll with Borland C++, is it possible?

2008-11-17 Thread Tony Caduto

Hi,
I am trying to compile my own copy of libpq.dll using bcc32.exe. The 
docs say it is possible, but I get an error when it tries to compile
dirent.c. Has anyone been able to do this?



C:\postgresql-8.3.5\src\interfaces\libpq>make -N -DCFG=Release /f bcc32.mak
MAKE Version 5.3  Copyright (c) 1987, 2008 CodeGear
Building the Win32 DLL and Static Library...

Configuration Release

   bcc32.exe @MAKE0014.@@@
Borland C++ 5.5.1 for Win32 Copyright (c) 1993, 2000 Borland
..\..\port\dirent.c:
Error E2451 ..\..\port\dirent.c 35: Undefined symbol 
'INVALID_FILE_ATTRIBUTES' in function opendir
Error E2451 ..\..\port\dirent.c 52: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 53: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 59: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 60: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 60: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 61: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 61: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 62: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 64: Undefined symbol 'dirname' in 
function opendir
Error E2451 ..\..\port\dirent.c 65: Undefined symbol 'handle' in 
function opendir
Error E2451 ..\..\port\dirent.c 66: Undefined symbol 'ret' in function 
opendir
Error E2451 ..\..\port\dirent.c 67: Undefined symbol 'ret' in function 
opendir
Error E2451 ..\..\port\dirent.c 77: Undefined symbol 'handle' in 
function readdir
Error E2451 ..\..\port\dirent.c 79: Undefined symbol 'handle' in 
function readdir
Error E2451 ..\..\port\dirent.c 79: Undefined symbol 'dirname' in 
function readdir
Error E2451 ..\..\port\dirent.c 80: Undefined symbol 'handle' in 
function readdir
Error E2451 ..\..\port\dirent.c 88: Undefined symbol 'handle' in 
function readdir
Error E2451 ..\..\port\dirent.c 100: Undefined symbol 'ret' in function 
readdir
Error E2451 ..\..\port\dirent.c 102: Undefined symbol 'ret' in function 
readdir
Error E2451 ..\..\port\dirent.c 102: Undefined symbol 'ret' in function 
readdir
Error E2451 ..\..\port\dirent.c 103: Undefined symbol 'ret' in function 
readdir
Error E2451 ..\..\port\dirent.c 109: Undefined symbol 'handle' in 
function closedir
Error E2451 ..\..\port\dirent.c 110: Undefined symbol 'handle' in 
function closedir
Error E2451 ..\..\port\dirent.c 111: Undefined symbol 'dirname' in 
function closedir
Error E2228 ..\..\port\dirent.c 111: Too many error or warning messages 
in function closedir

*** 26 errors in Compile ***

** error 1 ** deleting .\Release\dirent.obj



Re: [GENERAL] Fwd: Performance Tuning

2008-11-17 Thread Scott Marlowe
On Mon, Nov 17, 2008 at 11:21 AM, John Zhang [EMAIL PROTECTED] wrote:


 Hi the list,

 I have a performance problem and would like to any input on how to make it
 perform as desired. In the DB, there are a few tables over 3 million records
 in postgis. When I do some operation on the tables, e.g. CREATE INDEX, it
 takes hours without results. I believe there must be something wrong within
 the DB. However, I have not figured it out. Could anyone point me to the
 right direction to tune and configured the database efficiently?

Assuming it's creating GIN indexes, and possibly even if not, index
creation is very dependent on having a large enough work_mem for it to
fit a large portion of the data it's working on in memory.  Try
cranking up work_mem before index creation.  Note that you don't have
to do this in postgresql.conf and reload, you can just do it in the
session you're in:

set work_mem=512000;
create index blah blah blah;



Re: [GENERAL] pgsql-general@postgresql.org

2008-11-17 Thread Scott Marlowe
On Mon, Nov 17, 2008 at 11:09 AM, Tomas Lanczos [EMAIL PROTECTED] wrote:
 I am using Postgresql to store all my research related data. At the moment I
 am just finishing my PhD thesis and I want to cite postgresql correctly but
 can't find how to do it. Could somebody give me an advice?

You can start by looking at the wikipedia entry.  Not sure what you
mean by citing the db.  Are you looking at citing where it comes from,
who makes it, what?

It can be kinda tough to classify open source software... :)



Re: [GENERAL] Fwd: Performance Tuning

2008-11-17 Thread Jeff Frost

On Mon, 17 Nov 2008, Scott Marlowe wrote:


On Mon, Nov 17, 2008 at 11:21 AM, John Zhang [EMAIL PROTECTED] wrote:



Hi the list,

I have a performance problem and would like to any input on how to make it
perform as desired. In the DB, there are a few tables over 3 million records
in postgis. When I do some operation on the tables, e.g. CREATE INDEX, it
takes hours without results. I believe there must be something wrong within
the DB. However, I have not figured it out. Could anyone point me to the
right direction to tune and configured the database efficiently?


Assuming it's creating GIN indexes, and possibly even if not, index
creation is very dependent on having a large enough work_mem for it to
fit a large portion of the data it's working on in memory.  Try
cranking up work_mem before index creation.  Note that you don't have
to do this in postgresql.conf and reload, you can just do it in the
session you're in:

set work_mem=512000;
create index blah blah blah;


Doesn't he want to change maintenance_work_mem instead of work_mem for index 
creation?
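If so, a session-local sketch along those lines (table, column, and index names are illustrative, and GiST is assumed because the tables are PostGIS):

```sql
-- maintenance_work_mem, not work_mem, governs CREATE INDEX (and VACUUM).
-- Setting it here affects only the current session.
SET maintenance_work_mem = '512MB';
CREATE INDEX parcels_geom_idx ON parcels USING gist (geom);
RESET maintenance_work_mem;
```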


--
Jeff Frost, Owner   [EMAIL PROTECTED]
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 916-647-6411 FAX: 916-405-4032



[GENERAL] In memory Database for postgres

2008-11-17 Thread aravind chandu
Hello,



 
I guess most of you guys have heard about in-memory databases. I have a
small question regarding them. I need to create an in-memory database for
PostgreSQL, through which I have to perform various operations on the
PostgreSQL database (queries, procedures, programs using the pqxx API,
etc.). I don't have any idea of how or where to start on this issue.
Please comment on it; that would be really helpful to me.



Thanks,

Avin.




  

Re: [GENERAL] In memory Database for postgres

2008-11-17 Thread Andrej Ricnik-Bay
2008/11/18 aravind chandu [EMAIL PROTECTED]:
 Hello,
Hi!

   I guess most of you guys heard about In Memory Database.I have
 a small question regarding it.I need to create an In Memory Database for
 postgresql through which I have to perform various operations on postgresql
 database(queries,procedures,programs using pqxx API etc...).I didn't have
 any idea of how to start and where to start this issue.Please comment on
 this issue,so that it will be really helpful to me .
The fact aside that it's a bad idea (tm): you could have the data files
reside in a RAM disk.  Which OS are you planning to do this on?
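One way to sketch that without moving the whole cluster, assuming a RAM disk is already mounted (the path is illustrative, and anything stored there is lost on reboot, so keep only recreatable scratch data in it):

```sql
-- Put a single tablespace, not the whole cluster, on the RAM disk.
CREATE TABLESPACE ram_ts LOCATION '/mnt/ramdisk/pg';
CREATE TABLE scratch_results (id int, payload text) TABLESPACE ram_ts;
```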


 Thanks,
 Avin.
Cheers,
Andrej




-- 
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm



[GENERAL] Using database to find file doublettes in my computer

2008-11-17 Thread Lothar Behrens
Hi,

I have a problem: finding, as fast as possible, files that are duplicates,
or in other words identical, and also identifying those files that are
not identical.

My approach was to use dir /s and an awk script to convert it to a sql
script to be imported into a table.
That done, I could start issuing queries.

But how do I query for files to display a 'left / right view' for each
file that exists in multiple places?

I mean this:

This File;Also here
C:\some.txt;C:\backup\some.txt
C:\some.txt;C:\backup1\some.txt
C:\some.txt;C:\backup2\some.txt

but have only this list:

C:\some.txt
C:\backup\some.txt
C:\backup1\some.txt
C:\backup2\some.txt


The reason for this is that I am faced with the problem of ECAD
projects that are copied around many times, and I have to identify
which files are missing here and which files are there.

So a manual approach is as follows:

1)   Identify one file (schematic1.sch) and see where there are copies
of it.
2)   Compare the files of both directories and make a decision about
which files to use further.
3)   Determine conflicts, i.e. files that can't be copied together
for a cleanup.

Are there any approaches or tools that would help?

This is a very time consuming job and I am searching for any solution
that helps me save time :-)

I know that those problems did not arise when the projects are well
structured and in a version management system. But that isn't here :-)

Thanks

Lothar
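As a starting point, a hedged sketch of the left/right pairing, assuming the dir /s listing was loaded into a table files(path, name, bytes) — the names are illustrative, and matching on name and size only finds candidates, not proof of identical content:

```sql
-- Pair each file with every other file sharing its name and size;
-- a.path < b.path keeps each pair from being listed twice.
SELECT a.path AS this_file, b.path AS also_here
FROM files a
JOIN files b
  ON a.name = b.name AND a.bytes = b.bytes AND a.path < b.path
ORDER BY a.name, a.path;
```

Candidate pairs would still need a content comparison (e.g. an MD5 of each file, computed outside the database) to confirm they are truly identical.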




[GENERAL] Trying Compile libpq.dll with VC++ 9.0, but need help

2008-11-17 Thread Tony Caduto

Hi,
Trying to get a libpq.dll that only has SSL deps using these instructions:

http://www.postgresql.org/docs/8.3/interactive/install-win32-libpq.html

It compiles a bit then dies at:

C:\postgresql-8.3.5\src\interfaces\libpq>nmake /f win32.mak

Microsoft (R) Program Maintenance Utility Version 9.00.30729.01
Copyright (C) Microsoft Corporation.  All rights reserved.

Building the Win32 static library...

Using default OpenSSL Include directory: C:\OpenSSL\include
Using default OpenSSL Library directory: C:\OpenSSL\lib\VC
Using default Kerberos Include directory: C:\kfw-2.6.5\inc
Using default Kerberos Library directory: C:\kfw-2.6.5\lib\i386
   link.exe -lib @C:\DOCUME~1\20659\LOCALS~1\Temp\nm1B.tmp
.\Release\win32.obj : fatal error LNK1107: invalid or corrupt file: 
cannot read at 0xB39
NMAKE : fatal error U1077: 'C:\Program Files (x86)\Microsoft Visual 
Studio 9.0\VC\BIN\link.exe' : return code '0x453'

Stop.





Re: [GENERAL] Trying Compile libpq.dll with VC++ 9.0, but need help

2008-11-17 Thread Dann Corbit
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Tony Caduto
 Sent: Monday, November 17, 2008 12:06 PM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Trying Compile libpq.dll with VC++ 9.0, but need
 help
 
 Hi,
 Trying to get a libpq.dll that only has SSL deps using these
 instructions:
 

http://www.postgresql.org/docs/8.3/interactive/install-win32-libpq.html
 
 It compiles a bit then dies at:
 
 C:\postgresql-8.3.5\src\interfaces\libpq>nmake /f win32.mak
 
 Microsoft (R) Program Maintenance Utility Version 9.00.30729.01
 Copyright (C) Microsoft Corporation.  All rights reserved.
 
 Building the Win32 static library...
 
 Using default OpenSSL Include directory: C:\OpenSSL\include
 Using default OpenSSL Library directory: C:\OpenSSL\lib\VC
 Using default Kerberos Include directory: C:\kfw-2.6.5\inc
 Using default Kerberos Library directory: C:\kfw-2.6.5\lib\i386
 link.exe -lib @C:\DOCUME~1\20659\LOCALS~1\Temp\nm1B.tmp
 .\Release\win32.obj : fatal error LNK1107: invalid or corrupt file:
 cannot read at 0xB39
 NMAKE : fatal error U1077: 'C:\Program Files (x86)\Microsoft Visual
 Studio 9.0\VC\BIN\link.exe' : return code '0x453'
 Stop.

How much disk space do you have?

I guess that you are running out of space.

During the link, watch the available disk space.



Re: [GENERAL] In memory Database for postgres

2008-11-17 Thread Scott Marlowe
On Mon, Nov 17, 2008 at 12:11 PM, aravind chandu [EMAIL PROTECTED] wrote:
 Hello,

   I guess most of you guys heard about In Memory Database.I have
 a small question regarding it.I need to create an In Memory Database for
 postgresql through which I have to perform various operations on postgresql
 database(queries,procedures,programs using pqxx API etc...).I didn't have
 any idea of how to start and where to start this issue.Please comment on
 this issue,so that it will be really helpful to me .

It's more important to ask yourself the question "what problem am I
trying to solve?"  Without knowing that, it's hard to say whether or
not an in-memory database is a good idea.



Re: [GENERAL] In memory Database for postgres

2008-11-17 Thread Dann Corbit
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of aravind chandu
Sent: Monday, November 17, 2008 11:11 AM
To: postgresql Forums
Subject: [GENERAL] In memory Database for postgres

 

Hello,

  I guess most of you guys heard about In Memory Database.I
have a small question regarding it.I need to create an In Memory
Database for postgresql through which I have to perform various
operations on postgresql  database(queries,procedures,programs using
pqxx API etc...).I didn't have any idea of how to start and where to
start this issue.Please comment on this issue,so that it will be really
helpful to me .


If you have lots of memory on your system, PostgreSQL will hold the
frequently used tables in memory.  So the burning question is:

What do you really want to do this for?

 

There are existing in memory database systems with a free license like
PostgreSQL:

FastDB:

http://www.garret.ru/databases.html

 

MonetDB:

http://monetdb.cwi.nl/

 

FastDB is an embedded, single writer - multiple reader type solution.

MonetDB is a column oriented database, especially suitable for database
warehouse designs.

 

There are additional solutions found in this article:

http://en.wikipedia.org/wiki/In-memory_database

 

Now, I think it would be very nice to have in-memory or column oriented
tables in PostgreSQL, but that would be a heavy-duty major difficult
project.

If you are volunteering to take that on, I suspect it is going to be
harder than you think, and if you want someone else to do it, I guess it
is not on the horizon yet.

 

 



Re: [GENERAL] Trying Compile libpq.dll with VC++ 9.0, but need help

2008-11-17 Thread Tony Caduto

Dann Corbit wrote:



How much disk space do you have?

I guess that you are running out of space.

During the link, watch the available disk space.

  


Hi Dan,
Thanks for the reply :-)

I actually got it working. Now the only problem is that it still has a 
dependency on

MSVCR90.DLL

Does anyone know if this can be done with MinGW so I can eliminate the 
MSVCR90.DLL dependency?
I have it installed but that is even more foreign to me than VS :-)  
(mostly use borland CodeGear tools)


Thanks,

Tony



Re: [GENERAL] Trying Compile libpq.dll with VC++ 9.0, but need help

2008-11-17 Thread Dave Page
On Mon, Nov 17, 2008 at 8:43 PM, Tony Caduto
[EMAIL PROTECTED] wrote:

 Does anyone know if this can be done with mingw so I can eliminate the
 MSVCR90.DLL dependency?
 I have it installed but that is even more foreign to me than VS :-)  (mostly
 use borland CodeGear tools)

That's the easiest way of all:

./configure --with-openssl
cd src/interfaces/libpq
make all

should do the trick.

-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com



Re: [GENERAL] Trying Compile libpq.dll with VC++ 9.0, but need help

2008-11-17 Thread Dann Corbit
 -Original Message-
 From: Tony Caduto [mailto:[EMAIL PROTECTED]
 Sent: Monday, November 17, 2008 12:44 PM
 To: Dann Corbit
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Trying Compile libpq.dll with VC++ 9.0, but
need
 help
 
 Dann Corbit wrote:
 
 
  How much disk space do you have?
 
  I guess that you are running out of space.
 
  During the link, watch the available disk space.
 
 
 
 Hi Dan,
 Thanks for the reply :-)
 
 I actually got it working.Now the only problem is it still has a
 dependency for
 MSVCR90.DLL
 
 Does anyone know if this can be done with mingw so I can eliminate the
 MSVCR90.DLL dependency?
 I have it installed but that is even more foreign to me than VS :-)
 (mostly use borland CodeGear tools)

Change from /MD to /MT and it will not use the runtime DLL.
Look under: Properties -> Configuration Properties -> C/C++ -> Code
Generation -> Runtime Library




Re: [GENERAL] citing postgresql

2008-11-17 Thread Tomas Lanczos
I meant something for use in publications, like it is used for R and 
almost identically for GRASS:



  R Development Core Team (2008). R: A language and environment for
  statistical computing. R Foundation for Statistical Computing,
  Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org.

A BibTeX entry for LaTeX users is

  @Manual{,
title = {R: A Language and Environment for Statistical Computing},
author = {{R Development Core Team}},
organization = {R Foundation for Statistical Computing},
address = {Vienna, Austria},
year = {2008},
note = {{ISBN} 3-900051-07-0},
url = {http://www.R-project.org},
  }

I changed the subject, BTW. The first time I sent this message to the list it 
was from another account of mine, not subscribed here, so I just copied the 
text, and instead of the subject I copied an address ... I am exhausted, 
sorry ... :-)


Regards

Tomas


Scott Marlowe wrote:

On Mon, Nov 17, 2008 at 11:09 AM, Tomas Lanczos [EMAIL PROTECTED] wrote:

I am using Postgresql to store all my research related data. At the moment I
am just finishing my PhD thesis and I want to cite postgresql correctly but
can't find how to do it. Could somebody give me an advice?


You can start by looking at the wikipedia entry.  Not sure what you
mean by citing the db.  Are you looking at citing where it comes from,
who makes it, what?

It can be kinda tough to classify open source software... :)






Re: [GENERAL] Trying Compile libpq.dll with VC++ 9.0, but need help

2008-11-17 Thread Tony Caduto

Dann Corbit wrote:

Change from /MD to /MT and it will not use the runtime DLL.
Look under: Properties - Configuration Properties - C/C++ - Code
Generation - Runtime Library


  


Hi Dann,

I changed that in the win32.mak file and now it gives this error:

   link.exe @C:\DOCUME~1\20659\LOCALS~1\Temp\nmA6.tmp
  Creating library .\Release\libpqdll.lib and object .\Release\libpqdll.exp
LIBCMT.lib(dosmap.obj) : error LNK2005: __dosmaperr already defined in 
libpq.lib(win32error.obj)
.\Release\libpq.dll : fatal error LNK1169: one or more multiply defined 
symbols found
NMAKE : fatal error U1077: 'C:\Program Files (x86)\Microsoft Visual 
Studio 9.0\VC\BIN\link.exe' : return code '0x491'

Stop.







Re: [GENERAL] Fwd: Performance Tuning

2008-11-17 Thread Scott Marlowe
On Mon, Nov 17, 2008 at 11:36 AM, Jeff Frost
[EMAIL PROTECTED] wrote:
 On Mon, 17 Nov 2008, Scott Marlowe wrote:

 On Mon, Nov 17, 2008 at 11:21 AM, John Zhang [EMAIL PROTECTED]
 wrote:


 Hi the list,

 I have a performance problem and would like to any input on how to make
 it
 perform as desired. In the DB, there are a few tables over 3 million
 records
 in postgis. When I do some operation on the tables, e.g. CREATE INDEX, it
 takes hours without results. I believe there must be something wrong
 within
 the DB. However, I have not figured it out. Could anyone point me in the
 right direction to tune and configure the database efficiently?

 Assuming it's creating GIN indexes, and possibly even if not, index
 creation is very dependent on having a large enough work_mem for it to
 fit a large portion of the data it's working on in memory.  Try
 cranking up work_mem before index creation.  Note that you don't have
 to do this in postgresql.conf and reload, you can just do it in the
 session you're in:

 set work_mem=512000;
 create index blah blah blah;

 Doesn't he want to change maintenance_work_mem instead of work_mem for index
 creation?

I hate it when my brain is thinking maintenance_work_mem and my
fingers are typing work_mem.

You're right.  It was maintenance_work_mem.
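
The session-level pattern Scott describes might look like this as a sketch; the memory value is purely illustrative, and the table and column names are hypothetical:

```sql
-- Raise maintenance memory only for this session; it reverts on disconnect
-- (or on RESET), so postgresql.conf is untouched.
SET maintenance_work_mem = '512MB';   -- illustrative value, size to your RAM

-- Hypothetical PostGIS-style index build that benefits from the larger setting.
CREATE INDEX idx_parcels_geom ON parcels USING gist (geom);

RESET maintenance_work_mem;           -- back to the configured default
```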



Re: [GENERAL] In memory Database for postgres

2008-11-17 Thread Blazej
In my opinion a very nice solution is building part of the PostgreSQL
database in memory - below are instructions for building a PostgreSQL
schema in memory on Linux. I tested this with my ROLAP solution for
recalculating MOLAP cubes in memory and then joining with the master cube
(this speeds up the process about 10 times!!! - but in other setups the
result may be different).

In GRUB (or another bootloader) you must declare a ramdisk, and then in the OS:

mke2fs /dev/ram0
mkdir /mnt/ram0
mount /dev/ram0 /mnt/ram0
mkdir /mnt/ram0/pgspace

chown postgres:postgres /mnt/ram0/pgspace

The /mnt/ram0/pgspace folder must be empty (otherwise you get ERROR:  directory
is not empty)

And then you may create tables (in this schema, of course) and write to them.

Of course you must delete the schema before shutting down PostgreSQL and the
OS - I don't know how to resolve the error that occurs when the schema was not
deleted. I have no time to think about it - maybe somebody knows how to
restore the db when the in-memory schema was damaged?

Regards,
Blazej

2008/11/17 aravind chandu [EMAIL PROTECTED]:
 Hello,

    I guess most of you guys have heard about in-memory databases. I have
 a small question regarding them. I need to create an in-memory database for
 postgresql through which I have to perform various operations on the postgresql
 database (queries, procedures, programs using the pqxx API, etc.). I don't have
 any idea of how or where to start. Please comment on this issue, so that it
 will be really helpful to me.

 Thanks,
 Avin.





Re: [GENERAL] citing postgresql

2008-11-17 Thread Blazej
I'm joining Tomas - I want to cite postgresql in my PhD thesis too.


Regards,
Blazej Oleszkiewicz

2008/11/17 Tomas Lanczos [EMAIL PROTECTED]:
 I meant something for use in publications, like it is used for R and almost
 identically for GRASS:


  R Development Core Team (2008). R: A language and environment for
  statistical computing. R Foundation for Statistical Computing,
  Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org.

 A BibTeX entry for LaTeX users is

  @Manual{,
title = {R: A Language and Environment for Statistical Computing},
author = {{R Development Core Team}},
organization = {R Foundation for Statistical Computing},
address = {Vienna, Austria},
year = {2008},
note = {{ISBN} 3-900051-07-0},
url = {http://www.R-project.org},
  }

 I changed the subject BTW. First time I sent this message to the list it was
 from my another account, not subscribed here, so I just copied the text and
 instead the subject I copied an address ... I am exhausted, sorry ... :-)

 Regards

 Tomas


 Scott Marlowe wrote:

 On Mon, Nov 17, 2008 at 11:09 AM, Tomas Lanczos [EMAIL PROTECTED]
 wrote:

 I am using Postgresql to store all my research related data. At the
 moment I
 am just finishing my PhD thesis and I want to cite postgresql correctly
 but
 can't find how to do it. Could somebody give me an advice?

 You can start by looking at the wikipedia entry.  Not sure what you
 mean by citing the db.  Are you looking at citing where it comes from,
 who makes it, what?

 It can be kinda tough to classify open source software... :)



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




Re: [GENERAL] In memory Database for postgres

2008-11-17 Thread Blazej
Sorry I forgot about create tablespace script - this is the SQL script:

CREATE TABLESPACE ram_space LOCATION '/mnt/ram0/pgspace';

And then:

CREATE TABLE (...) TABLESPACE ram_space;

and table is in memory.

Regards,
Blazej

2008/11/17 Blazej [EMAIL PROTECTED]:
 In my opinion very nice solution is building part of PostgreSQL
 database in memory - below it is instruction how to build PostgreSQL
 schema in memory in Linux. I tested this with my ROLAP solution for
 recalculation MOLAP cubes in memory and then join with master cube
 (this speeds up proces about 10 times!!! - but in other solution may
 be different).

 In grub (or other bootloader) you must declare ramdisk and then in OS:

 mke2fs /dev/ram0
 mkdir /mnt/ram0
 mount /dev/ram0 /mnt/ram0
 mkdir /mnt/ram0/pgspace

 chown postgres:postgres /mnt/ram0/pgsapce

 The /mnt/ram0/pgsapce folder must be empty (ERROR:  directory
 /mnt/ram0 is not empty)

 And  then you may create tables (in this schema of course) and write to them.

 Of course you must delete schema before shutdown PostgreSQL and OS - I
 dont't now how resolve problem with error when the schema was not
 deleted? - I have no time to think about it maybe anybody know how to
 restore db when the in memory schema was damaged?

 Regards,
 Blazej

 2008/11/17 aravind chandu [EMAIL PROTECTED]:
 Hello,

   I guess most of you guys heard about In Memory Database.I have
 a small question regarding it.I need to create an In Memory Database for
 postgresql through which I have to perform various operations on postgresql
 database(queries,procedures,programs using pqxx API etc...).I didn't have
 any idea of how to start and where to start this issue.Please comment on
 this issue,so that it will be really helpful to me .

 Thanks,
 Avin.






Re: [GENERAL] In memory Database for postgres

2008-11-17 Thread Scott Marlowe
On Mon, Nov 17, 2008 at 3:02 PM, Blazej [EMAIL PROTECTED] wrote:
 In my opinion very nice solution is building part of PostgreSQL
 database in memory - below it is instruction how to build PostgreSQL
 schema in memory in Linux. I tested this with my ROLAP solution for
 recalculation MOLAP cubes in memory and then join with master cube
 (this speeds up proces about 10 times!!! - but in other solution may
 be different).

Just wondering if you compared it to how fast it runs if you've got
lots of shared_buffers and everything fits into memory.  That would be
an interesting comparison.



Re: [GENERAL] In memory Database for postgres

2008-11-17 Thread Greg Smith

On Mon, 17 Nov 2008, Scott Marlowe wrote:


Just wondering if you compared it to how fast it runs if you've got
lots of shared_buffers and everything fits into memory.  That would be
an interesting comparison.


With a large increase in work_mem as well to speed up sorting.  If the 
bottleneck is write speed on the intermediate results mentioned, then it 
may very well be the case that the best way to accelerate this workload is 
with a RAM-based tablespace.  But in some cases tuning shared_buffers and 
work_mem way upwards is all it takes.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] In memory Database for postgres

2008-11-17 Thread Blazej
2008/11/18 Scott Marlowe [EMAIL PROTECTED]:
 On Mon, Nov 17, 2008 at 3:02 PM, Blazej [EMAIL PROTECTED] wrote:
 In my opinion very nice solution is building part of PostgreSQL
 database in memory - below it is instruction how to build PostgreSQL
 schema in memory in Linux. I tested this with my ROLAP solution for
 recalculation MOLAP cubes in memory and then join with master cube
 (this speeds up proces about 10 times!!! - but in other solution may
 be different).

 Just wondering if you compared it to how fast it runs if you've got
 lots of shared_buffers and everything fits into memory.  That would be
 an interesting comparison.


There are two small things, because I do this as I described above:
(1) shared_buffers is SHARED;
(2) I needed an extreme speed-up: if a table is stored on hdd it must be
synced (sooner or later - even if sync is off), and this operation is
very slow on hdd compared to a ram disk.

Regards,
Blazej



Re: Res: Res: [GENERAL] Archive files growth!!!

2008-11-17 Thread Craig Ringer
paulo matadr wrote:
 
 result for your query
 
 cliente_fone 341130 3345 26760k

OK. And what about indexes, logging triggers, etc?

Can you post the output of:

   \d client_fone

from psql?

(I can't really imagine how indexes alone could generate that much
logging data, though).

--
Craig Ringer



[GENERAL] row locking question

2008-11-17 Thread Darren Govoni
Hi,
  I have experimented with PostgreSQL's table locking and FOR UPDATE
capabilities. But what I'm looking for is a row level lock specific to
only a set of rows, not the entire table.

For example, there is a table with many rows. Threads are doing
SELECTs to read some rows. They should be allowed to get results on
rows that are not currently locked by another thread's SELECT FOR UPDATE
or equivalent read lock, for those rows only.

Does PostgreSQL have a mechanism for this?

thank you.
Darren




Re: [GENERAL] row locking question

2008-11-17 Thread Tom Lane
Darren Govoni [EMAIL PROTECTED] writes:
 For example, there is a table with many rows. Threads are doing
 SELECTS to read some rows. They should be allowed to get results on
 rows that are not currently locked by another thread's SELECT,FOR UPDATE
 or equivalent read lock for those rows only.

Make those threads use SELECT FOR SHARE.

regards, tom lane



Re: [GENERAL] Trying Compile libpq.dll with VC++ 9.0, but need help

2008-11-17 Thread Hiroshi Saito

Hi.

It is very strange...

I checked that it was ready in VC9 again. 
http://winpg.jp/~saito/pg_work/WIN32_BUILD_INF/LOG_LIBPQ_WIN32MAK.txt
Furthermore, even psql was checked. 
http://winpg.jp/~saito/pg_work/WIN32_BUILD_INF/psql_win32_pg83.mak

Please copy psql_win32_pg83.mak to win32.mak in src\bin\psql.
http://winpg.jp/~saito/pg_work/WIN32_BUILD_INF/LOG_PSQL_WIN32MAK.txt

However, it will be necessary to investigate what is wrong in your 
environment.

I think...

1. One proposal: 
nmake -f win32.mak clean


Regards,
Hiroshi Saito

- Original Message - 
From: Tony Caduto [EMAIL PROTECTED]




Dann Corbit wrote:

Change from /MD to /MT and it will not use the runtime DLL.
Look under: Properties - Configuration Properties - C/C++ - Code
Generation - Runtime Library


  


Hi Dann,

I changed that in the win32.mak file and now it gives this error:

   link.exe @C:\DOCUME~1\20659\LOCALS~1\Temp\nmA6.tmp
  Creating library .\Release\libpqdll.lib and object .\Release\libpqdll.exp
LIBCMT.lib(dosmap.obj) : error LNK2005: __dosmaperr already defined in 
libpq.lib(win32error.obj)
.\Release\libpq.dll : fatal error LNK1169: one or more multiply defined 
symbols found
NMAKE : fatal error U1077: 'C:\Program Files (x86)\Microsoft Visual 
Studio 9.0\VC\BIN\link.exe' : return code '0x491'

Stop.







Re: [GENERAL] row locking question

2008-11-17 Thread Darren Govoni
Thank you for the suggestion.

It seems that still blocks. Here is what I'm trying to achieve.

I have 10 rows each with an id 1,2,3,4 etc.

In one thread I want to update rows with id < 4 (1-3) in my_table
inside a transaction (i.e. begin work;  update where; commit work;)

In another thread I do a select * from my_table (while the first thread
is inside the transaction). I want to receive rows with id 4-10 because
rows 1-3 are currently locked and therefore not seen.

For all I know, this isn't possible in current SQL semantics, but
would be very useful for distributed scheduling. So nodes competing to
update rows with the same query don't collide and shouldn't have to wait
on another thread's updates to get some results.

I'll keep experimenting!

Darren
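
Neither FOR UPDATE nor FOR SHARE will skip locked rows here; blocked readers simply wait. One workaround available in this era is advisory locks (since 8.2): each worker claims the ids it can lock and silently skips ids another worker holds. A sketch of the idea, assuming integer ids as in the example above - not production code, since the planner may evaluate the lock function for rows that don't reach the final result:

```sql
-- Claim rows non-blockingly: ids already locked by another worker make
-- pg_try_advisory_lock() return false, so those rows are skipped.
SELECT id
FROM my_table
WHERE id < 4
  AND pg_try_advisory_lock(id);

-- ... process the claimed ids, then release each one explicitly,
-- since advisory locks are not tied to rows or transactions:
SELECT pg_advisory_unlock(3);  -- repeat per claimed id
```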

On Mon, 2008-11-17 at 19:37 -0500, Tom Lane wrote:
 Darren Govoni [EMAIL PROTECTED] writes:
  For example, there is a table with many rows. Threads are doing
  SELECTS to read some rows. They should be allowed to get results on
  rows that are not currently locked by another thread's SELECT,FOR UPDATE
  or equivalent read lock for those rows only.
 
 Make those threads use SELECT FOR SHARE.
 
   regards, tom lane
 




Re: [GENERAL] In memory Database for postgres

2008-11-17 Thread Robert Treat
On Monday 17 November 2008 17:02:54 Blazej wrote:
 Of course you must delete schema before shutdown PostgreSQL and OS - I
 dont't now how resolve problem with error when the schema was not
 deleted? - I have no time to think about it maybe anybody know how to
 restore db when the in memory schema was damaged?


based on some similar, uh, experiences i've run across, i'd think the easiest 
would be to keep a script around with truncate commands for all your tables; 
then when you restart, you run that script, which will fix your schema for 
you. This assumes you're keeping the default tablespace on hdd; if you lose 
the system catalogs, the right answer is initdb.
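
Robert's truncate script might look like the following, with hypothetical table names standing in for whatever lives in the RAM tablespace; the idea is that TRUNCATE assigns fresh data files, replacing the storage that vanished with the ramdisk:

```sql
-- Run once after every restart, before the application connects.
TRUNCATE TABLE cube_cache, cube_results;  -- hypothetical tables in ram_space
```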

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [GENERAL] Trying Compile libpq.dll with VC++ 9.0, but need help

2008-11-17 Thread Tony Caduto

Hiroshi Saito wrote:

Hi.

It is very strange

I checked that it was ready in VC9 again. 
http://winpg.jp/~saito/pg_work/WIN32_BUILD_INF/LOG_LIBPQ_WIN32MAK.txt
Furthermore, even psql was checked. 
http://winpg.jp/~saito/pg_work/WIN32_BUILD_INF/psql_win32_pg83.mak

Please move psql_win32_pg83 win32.mak on src\bin\psql.
http://winpg.jp/~saito/pg_work/WIN32_BUILD_INF/LOG_PSQL_WIN32MAK.txt

However, It will be necessary what is bad and to investigate in your 
environment.


I think

1. One proposal nmake -f win32.mak clean

Regards,
Hiroshi Saito

-



Hi,

I will give it another try tomorrow.  It could be something related to 
the fact that I am running on XP 64bit.  I will try to compile it on a 
VM with 32bit XP, and also try to build it with MinGW.  I did try the 
clean and it didn't make any difference.  It compiles fine with the
/MD option; I even got it to compile against the latest OpenSSL, though 
I did have to comment out one little line.


Thanks,

Tony



Re: [GENERAL] In memory Database for postgres

2008-11-17 Thread Scott Marlowe
On Mon, Nov 17, 2008 at 7:13 PM, Robert Treat
[EMAIL PROTECTED] wrote:
 On Monday 17 November 2008 17:02:54 Blazej wrote:
 Of course you must delete schema before shutdown PostgreSQL and OS - I
 dont't now how resolve problem with error when the schema was not
 deleted? - I have no time to think about it maybe anybody know how to
 restore db when the in memory schema was damaged?


 based on some similar, uh, experiences i've run across, i'd think easiest
 would be to keep a script around with truncate commands for all your tables,
 then when you restart, you run that script, which will fix your schema for
 you. This assumes you're keeping the default table space on hdd, if you lose
 the system catalogs, the right answer is initdb

Heck, you could run PITR to another pgsql instance on the local hard
drives for cheap, and then if things go horribly wrong, you just
reinit the ram based instance and restore it from the hard drive one.

One shouldn't act / believe / have faith that the in-memory version of
the db is durable.  Of course it's not; no machine stays up all the
time without any errors.  Even mainframes occasionally suffer
downtime, even if it's some guy hitting the big red switch by accident
during a customer tour of the datacenter.



Re: [GENERAL] Using database to find file doublettes in my computer

2008-11-17 Thread Eus
Hi Ho!

--- On Tue, 11/18/08, Lothar Behrens [EMAIL PROTECTED] wrote:

 Hi,
 
 I have a problem to find as fast as possible files that are
 duplicates, or
 in other words, identical.
 Also identifying those files that are not identical.
 
 My approach was to use dir /s and an awk script to convert
 it to a sql
 script to be imported into a table.
 That done, I could start issuing queries.
 
 But how to query for files to display a 'left / right
 view' for each
 file that is on multible places ?
 
 I mean this:
 
 This File;Also here
 C:\some.txt;C:\backup\some.txt
 C:\some.txt;C:\backup1\some.txt
 C:\some.txt;C:\backup2\some.txt
 
 but have only this list:
 
 C:\some.txt
 C:\backup\some.txt
 C:\backup1\some.txt
 C:\backup2\some.txt
 
 
 The reason for this is that I am faced with the problem of ECAD
 projects that are copied around
 many times, and I have to identify which files are missing here and which
 files are there.
 
 So a manual approach is as follows:
 
 1)   Identify one file (schematic1.sch) and see, where are
 copies of
 it.
 2)   Compare the files of both directories and make a decision about
 which files to use further.
 3)   Determine conflicts, i.e. files that can't be copied together
 for a cleanup.
 
 Are there any approaches or help ?

I also have been in this kind of circumstance before, but I work under 
GNU/Linux as always.

1. At that time, I used `md5sum' to generate the fingerprint of all files in a 
given directory to be cleaned up.

2. Later, I created a simple Java program to group the names of all files that 
had the same fingerprint (i.e., MD5 hash).

3. I simply deleted all the files sharing an MD5 hash except one with a good 
filename (in my case, the filenames couldn't be relied on to perform a 
comparison since they differed by small additions like date, author's name, and 
the like).

4. After that, I used my brain to find related files based on the filenames 
(e.g., `[2006-05-23] Jeff - x.txt' should be the same as `Jenny - x.txt'). Of 
course, the Java program also helped me in grouping the files that I thought to 
be related.

5. Next, I perused the related files to see whether most of the contents were 
the same. If yes, I took the latest one based on the modified time.
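
Since Lothar already loads the dir /s listing into a table, the grouping in step 2 can also be done in plain SQL, assuming a hypothetical table files(path, md5) where the md5 column holds each file's fingerprint:

```sql
-- Pair every file with each copy of it elsewhere: same hash, different path.
-- This directly produces the "This File;Also here" view asked for above.
SELECT a.path AS this_file,
       b.path AS also_here
FROM files a
JOIN files b ON b.md5 = a.md5
            AND b.path <> a.path
ORDER BY a.path, b.path;
```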

 This is a very time consuming job and I am searching for
 any solution
 that helps me save time :-)

Well, I think I saved a lot of time at that time to be able to eliminate about 
7,000 files out of 15,000 files in about two weeks.

 I know that those problems did not arise when the projects
 are well
 structured and in a version management system. But that
 isn't here :-)

I hope you employ such a system ASAP :-)

 Thanks
 
 Lothar

Best regards,

Eus (FSF member #4445)



In this digital era, where computing technology is pervasive,
your freedom depends on the software controlling those computing devices.

Join the free software movement today!
It is free as in freedom, not as in free beer!

Join: http://www.fsf.org/jf?referrer=4445


  



[GENERAL] INNER JOIN .... USING in an UPDATE

2008-11-17 Thread Webb Sprague
Hi all,

(Version 3.5.5)

I have tried to figure this out, but the docs, google, and all my
imagination fail me. I want to use a join clause with a USING list
in an update statement.  The following works, but it uses the WHERE
version of a join:

update new_pivoted_table a set "2008-11-10" = b.data_stuff from
test_pivot b  where a.id=b.id and a.id2=b.id2 and
date_ex='2008-11-10';
UPDATE 3

The following doesn't work,  to my chagrin:

wsprague=# update new_pivoted_table a set "2008-11-10" = b.data_stuff
from test_pivot b join a using (id, id2) where  date_ex='2008-11-10';
ERROR:  relation a does not exist

Neither does this:

wsprague=# update new_pivoted_table set "2008-11-10" = b.data_stuff
from test_pivot b join new_pivoted_table using (id, id2) where
date_ex='2008-11-10';
ERROR:  table name new_pivoted_table specified more than once

The following gives the wrong answer (updates five rows all to 2 -- wrong):

update new_pivoted_table set "2008-11-10" = b.data_stuff from
test_pivot b join new_pivoted_table c using (id, id2) where
date_ex='2008-11-10';

I would REALLY LIKE to use the using clause, because it works great in
autogenerated code for EXECUTE statements, if at all possible.  I also
would like to avoid iterating.

Here is test_pivot:

 id | id2 |  date_ex   | data_stuff
+-++
  1 | one | 2008-10-10 |  1
  1 | one | 2008-11-10 |  2
  2 | one | 2008-11-10 |  3
  1 | one | 2008-12-10 |  4
  2 | one | 2008-12-10 |  5
  3 | one | 2008-12-10 |  6
  1 | two | 2008-11-10 |  7
  1 | two | 2008-11-10 |  8
  2 | two | 2008-10-10 |  9

Here is new_pivoted_table:

 id | id2 | 2008-10-10 | 2008-11-10 | 2008-12-10
+-+++
  1 | one |||
  1 | two |||
  2 | one |||
  2 | two |||
  3 | one |||

Tx!
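
For what it's worth, the likely explanation of the errors above is that the UPDATE target table cannot be named again inside the FROM clause's JOIN: the alias `a` is not visible there, and re-naming the table creates an independent second scan (which is why the last variant updates every row). A hedged rewrite that keeps the join columns in one list, assuming the date column needs quoting as an identifier:

```sql
-- Alias only the source table; express the USING columns as a row comparison.
UPDATE new_pivoted_table a
SET "2008-11-10" = b.data_stuff
FROM test_pivot b
WHERE (a.id, a.id2) = (b.id, b.id2)
  AND b.date_ex = '2008-11-10';
```

An auto-generated USING list can be expanded mechanically into the `(a.c1, ...) = (b.c1, ...)` form, which may suit the EXECUTE code generation mentioned above.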



Re: [GENERAL] ERROR

2008-11-17 Thread Craig Ringer
Gustavo Rosso wrote:
 I created a db with user postgres, but other users can't create
 tables, even though I gave all privileges.
 
 (banco is my db)
 
 grant all on database banco to public;
 grant create on database banco to public;
 
 This is the error:
 *ERROR:  must be owner of relation (table)*

The user running the GRANT command must be the table owner or (I think)
a database superuser.

For table creation, you may need to GRANT the CREATE permission (or ALL
permissions) on the `public' schema. I can't remember off the top of my
head.
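
The schema-level grant Craig is referring to, as a sketch: by default new tables land in the public schema, so database-level GRANTs alone are not enough.

```sql
-- Allow every role to create objects in the public schema of this database.
GRANT CREATE ON SCHEMA public TO PUBLIC;
```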

--
Craig Ringer



Re: [GENERAL] How to reduce impact of a query.

2008-11-17 Thread Craig Ringer
Howard Cole wrote:

 If I reduce maintenance_work_mem
 then the database dump/restore is slower but there is less overall
 impact on the server.

There could be more impact, rather than less, if it forces a sort that'd
be done in memory out to disk instead. If you have dedicated storage on
separate spindles for disk sorts etc that might be OK, but it doesn't
sound like you do.

 Is there some equivalent parameter on the server
 to throttle general queries?

As far as I know there is no facility for this within PostgreSQL.

On a Linux (or maybe other UNIX-like) machine you can use ionice to tell
the OS I/O scheduler to give that process lower priority for disk access,
or to rate-limit its disk access. Note that setting the CPU scheduling
priority (`nice' level) will NOT help unless the server is CPU-limited,
and even then probably not much.

Maybe there is a similar facility to ionice for Windows, or the generic
process priority setting also affects disk I/O? You'll probably have to
do some research to find out.

I'm not sure there's any way to stop it pushing other useful data out of
shared_buffers, though. Anyone?

 It would be unfortunate if all queries
 slowed down a bit, but a better outcome than having the entire server
 hang for 40 seconds.

Are you sure there isn't a table locking issue involved - something your
batch query is doing that's causing other queries to block until that
transaction commits/rolls back? Check pg_locks:

  SELECT * FROM pg_locks;

Also: Try setting the transaction to readonly before running it, and see
if it succeeds.

  SET transaction_read_only = true;

This is probably a good thing to do anyway, as it *might* help the
database make better decisions.

--
Craig Ringer



Re: [GENERAL] How to reduce impact of a query.

2008-11-17 Thread Craig Ringer
Howard Cole wrote:

 Thanks for the input Scott. You are correct - I am IO bound, but only
 for the query described. 99% of the time, my IO runs at 3% or less, even
 during peak times, only this one query, which happens approximately 10
 times a day grinds the system to a halt.

If your I/O is normally that idle, surely one big query shouldn't stop
everything? Slow it down, maybe, but stop it?

Perhaps your RAID drivers, hardware (if any) or OS are deeply queuing
requests and/or doing lots of readahead, probably to make sequential I/O
benchmarks and random I/O throughput benchmarks that don't consider
request latency look better.

Consider turning down I/O queue depths if possible, and/or tuning
readhead to something suitable for your I/O loads. The latter will
require some work to find the right balance between random request
latency and sequential I/O throughput.

I had HUGE problems with a 3Ware 8500-8 RAID controller queuing requests
very deeply inside the Linux driver its self, rather than the OS's I/O
scheduler, causing high priority small I/O to be stuck behind long
series of low priority bulk reads and writes since the driver wasn't
aware of the OS's I/O priority mechanisms. I ended up modifying the
driver a little to reduce the queue depth since there wasn't a runtime
param for it, and the result was VASTLY improved I/O latency with only a
very small cost to throughput.

It'd be nice if the controller had the brains to be told "fetch this
block, and read this one too if it happens to be on the way" ... but if
there's anything like that out there, with hardware or driver level I/O
priority awareness, I haven't run into it yet. Alas.


By the way, it'd be REALLY nice if the postmaster had the facility to
set CPU and I/O nice levels for the backends individually (on Linux and
other supporting platforms). I'm currently using a user C function
linked into the backend to set the nice level, but it'd be a great thing
to have built-in. Would patches for this be considered, with the
functions being no-ops (with warnings?) on non-supporting platforms?

I also think it's a wee bit of a pity that there's no way to tell Pg
that a job isn't important, so data shouldn't be permitted to push much
else out of shared_buffers or the OS's cache. The latter can be ensured
to an extent, at least on Linux, with posix_fadvise(...,
POSIX_FADV_NOREUSE) or with madvise(...). The former is presumably
possible with proper work_mem (etc) settings, but I find it's the OS's
habit of filling the cache with gigabytes of data I won't need again
that's the real problem. I don't know how this'd work when interacting
with other backends doing other work with the same tables, though.

--
Craig Ringer



Re: [GENERAL] Table bloat in 8.3

2008-11-17 Thread pgsql-general
On Thu, 13 Nov 2008, Scott Marlowe wrote:
> On Thu, Nov 13, 2008 at 1:09 PM, David Wilson [EMAIL PROTECTED] wrote:
>> On Thu, Nov 13, 2008 at 2:03 PM, [EMAIL PROTECTED] wrote:
>>> I have several tables that when I run VACUUM FULL on, they are under 200k,
>>> but after a day of records getting added they grow to 10 to 20 megabytes.
>>> They get new inserts and a small number of deletes and updates.
>>
>> Can you define "a small number of deletes and updates"? The stats above
>> would disagree with "small". Remember that every update creates a new,
>> updated version of the row, which is where the increase is coming
>> from.
>
> And don't forget to look into failed inserts.  Those too create dead tuples.

I finally figured it out.  I have three tables: A, with child B, which has a 
child C.  I had a query that would count all the rows in C and update the 
total in the rows they linked to in B, and then do the same from B to A.
It basically updated every row in A and B whenever it ran.  It was supposed 
to run only when the counts got out of sync, but a copy slipped into code 
that got run a LOT, and that was causing those tables to grow out
of control.
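A guard like the one that was supposed to be there can be sketched as follows (SQLite in-memory demo with hypothetical tables b and c; in PostgreSQL, skipping the UPDATE entirely is what avoids creating dead tuples):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE b (id INTEGER PRIMARY KEY, total INTEGER DEFAULT 0);
    CREATE TABLE c (id INTEGER PRIMARY KEY, b_id INTEGER);
    INSERT INTO b (id) VALUES (1), (2);
    INSERT INTO c (id, b_id) VALUES (1, 1), (2, 1), (3, 2);
""")

# Guarded maintenance query: only rewrite rows whose stored count is
# actually out of sync. An unconditional version would rewrite every
# row of b on every run -- in PostgreSQL, one dead tuple per row.
cur = con.execute("""
    UPDATE b SET total = (SELECT count(*) FROM c WHERE c.b_id = b.id)
    WHERE total <> (SELECT count(*) FROM c WHERE c.b_id = b.id)
""")
print(cur.rowcount)  # -> 2 on the first run; 0 if run again unchanged
```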

With that query removed I am no longer seeing the db expand. 

Thanks for the hints!

--
Ian Smith
www.ian.org



Re: [GENERAL] How to reduce impact of a query.

2008-11-17 Thread Scott Marlowe
On Mon, Nov 17, 2008 at 11:10 PM, Craig Ringer
[EMAIL PROTECTED] wrote:

> I also think it's a wee bit of a pity that there's no way to tell Pg
> that a job isn't important, so data shouldn't be permitted to push much
> else out of shared_buffers or the OS's cache. The latter can be ensured
> to an extent, at least on Linux, with posix_fadvise(...,
> POSIX_FADV_NOREUSE) or with madvise(...). The former is presumably
> possible with proper work_mem (etc) settings, but I find it's the OS's
> habit of filling the cache with gigabytes of data I won't need again
> that's the real problem. I don't know how this'd work when interacting
> with other backends doing other work with the same tables, though.

Agreed.

It could be that in the OP's case the data set for the big query is so
big it blows out shared_buffers completely / most of the way, and then
I/O for the other data has to hit the drives instead of memory, and
that's why they're so slow.



Re: [GENERAL] How to reduce impact of a query.

2008-11-17 Thread Craig Ringer
Scott Marlowe wrote:
> On Mon, Nov 17, 2008 at 11:10 PM, Craig Ringer
> [EMAIL PROTECTED] wrote:
>
>> I also think it's a wee bit of a pity that there's no way to tell Pg
>> that a job isn't important, so data shouldn't be permitted to push much
>> else out of shared_buffers or the OS's cache. The latter can be ensured
>> to an extent, at least on Linux, with posix_fadvise(...,
>> POSIX_FADV_NOREUSE) or with madvise(...).

Unfortunately, this isn't as useful as I'd initially hoped:

http://kerneltrap.org/node/7563 :

Quoting Torvalds:

"So O_DIRECT not only is a total disaster from a design standpoint (just
look at all the crap it results in), it also indirectly has hurt better
interfaces. For example, POSIX_FADV_NOREUSE (which _could_ be a useful
and clean interface to make sure we don't pollute memory unnecessarily
with cached pages after they are all done) ends up being a no-op ;/"

Darn.

> It could be that in the OP's case the data set for the big query is so
> big it blows out shared_buffers completely / most of the way, and then
> I/O for the other data has to hit the drives instead of memory and
> that's why they're so slow.

shared_buffers alone shouldn't be *too* bad (right?), but if it's
pushing data out of the OS's cache as well (or the OS does a bad job of
caching disk reads) then that'd really hurt, yeah.

--
Craig Ringer



[GENERAL] Libpq.dll lite is here :-)

2008-11-17 Thread Tony Caduto
Finally got everything to compile with MinGW.  Thanks to everyone that 
helped me out getting it going.


Here are the download links:

Libpq.dll 8.3.5.8322 with SSL No Zlib (includes the latest openssl dlls 
0.9.8i also built with MinGW)


http://www.milwaukeesoft.com/libpq_mingw_ssl.zip

If you need a really lite deployment there is also:

http://www.milwaukeesoft.com/libpq_mingw_noSSL.zip  (just libpq.dll 
8.3.5.8322 with no other dependencies)


These also have no special MS VC++ runtime requirements, just the 
standard MSVCRT.DLL, which means they all work
perfectly on WINE.  The current libpq.dll won't work on WINE because of 
the VC++ runtime.


I would like to maintain these for future PostgreSQL releases and if 
anyone is interested in helping or has a suggestion on where to host 
them permanently please let me know.


Later,

Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com



Re: [GENERAL] Using database to find file doublettes in my computer

2008-11-17 Thread Craig Ringer
Lothar Behrens wrote:

> But how to query for files to display a 'left / right view' for each
> file that is in multiple places?

One approach is to use a query to extract the names of all files with
duplicates, and store the results in a TEMPORARY table with a UNIQUE
index (or PRIMARY KEY) on the filename.

You then inner-join your paths table to itself, with the join
condition being that the filename part of the path EXISTS in the table
of files having duplicates. You should also filter out rows where the
first path is >= the second path, to avoid outputting both (a,b) and
(b,a) for each duplicate, and to avoid outputting (a,a) and (b,b) rows.

You can do much the same thing in a single query without the temp table,
but I think you'll need a correlated subquery to check for duplicates
(or a join on a subquery that'll really expand the results to be
processed), so the temp table approach is probably going to be lots faster.

Quick example code (untested but should give you the idea), assumes you
have a function get_filename(...) that extracts just the filename part
of the path:

CREATE TABLE paths (
   path text PRIMARY KEY
   -- other fields
);

-- Populate paths

-- Once paths is populated, extract duplicates:

SELECT get_filename(path) AS fn, count(path) AS n
INTO TEMPORARY TABLE dup_files
FROM paths
GROUP BY get_filename(path)
HAVING count(path) > 1;

-- Creates a UNIQUE index on fn as well
ALTER TABLE dup_files ADD PRIMARY KEY (fn);

-- Now build your side-by-side table of duplicates:

SELECT p1.path, p2.path
FROM paths p1 INNER JOIN paths p2
 ON (get_filename(p1.path) = get_filename(p2.path))
WHERE EXISTS(SELECT 1 FROM dup_files WHERE fn = get_filename(p1.path))
  AND p1.path < p2.path;


You can replace get_filename(fn) with appropriate code, but I'd write a
quick SQL function marked IMMUTABLE to neatly wrap up the pathname
extraction instead.
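The whole recipe can be exercised end-to-end against SQLite with a toy dataset (the paths here are made up, and os.path.basename stands in for the get_filename() helper):

```python
import os.path
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE paths (path TEXT PRIMARY KEY)")
con.executemany("INSERT INTO paths VALUES (?)",
                [("/a/readme.txt",), ("/b/readme.txt",), ("/c/only.txt",)])
# Register a SQL-callable get_filename() backed by os.path.basename.
con.create_function("get_filename", 1, os.path.basename)

# Step 1: temp table of filenames that occur more than once.
con.execute("""
    CREATE TEMPORARY TABLE dup_files AS
    SELECT get_filename(path) AS fn
    FROM paths GROUP BY fn HAVING count(*) > 1
""")

# Step 2: side-by-side pairs; p1.path < p2.path keeps one row per pair.
pairs = con.execute("""
    SELECT p1.path, p2.path
    FROM paths p1 JOIN paths p2
      ON get_filename(p1.path) = get_filename(p2.path)
    WHERE p1.path < p2.path
      AND EXISTS (SELECT 1 FROM dup_files d
                  WHERE d.fn = get_filename(p1.path))
""").fetchall()
print(pairs)  # -> [('/a/readme.txt', '/b/readme.txt')]
```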

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] inherit table and its data

2008-11-17 Thread Dilyan Berkovski
hi All,
I am using PostgreSQL 8.2, and I am interested in creating a table B that 
inherits table A, but with all its data! 

create table B (a int) inherits (A) just adds the structure of table A, not its 
data. 

Is it possible to do this without the classic approach (a trigger on table A so 
each insert/update is also applied to B), or is it not possible without triggers?

thanks, Dilyan



  
