Re: [GENERAL] re-using RETURNING

2009-11-15 Thread Andreas Kretschmer
Andreas Kretschmer akretsch...@spamfence.net wrote:

 A. Kretschmer andreas.kretsch...@schollglas.com wrote:
 
  Hi,
  
  just to be sure, it is still (8.4) not possible to use RETURNING within
  another INSERT?
 
 Thanks for all the replies. It is not really a problem; I will write a
 benchmark to compare the new writeable CTE (in 8.5 alpha) with the old
 style (8.4). That's all ;-)
 
 And yes, I will publish the result, of course.

http://akretschmer.blogspot.com/2009/11/writeable-cte-short-performance-test.html
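
For reference, the "new style" under test chains the statements through a
data-modifying CTE, roughly like this (a sketch only; table and column names
are invented):

WITH new_order AS (
    INSERT INTO orders (customer_id) VALUES (42)
    RETURNING order_id
)
INSERT INTO order_log (order_id, note)
SELECT order_id, 'created' FROM new_order;

The "old style" reads the RETURNING value client-side (or in a pl/pgsql
function) and feeds it into the second INSERT as a parameter.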


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



[GENERAL] Experience with many schemas vs many databases

2009-11-15 Thread undisclosed user
Hello everyone,

I have hit a wall on completing a solution I am working on. Originally, the
app used a db per user (on MyISAM)... the solution did not fare so well in
reliability and performance. I have been increasingly interested in Postgres
lately.

Currently, I have about 30-35k users/databases. The general table layout is
the same; only the data is different. I don't need to share data across
databases. Very similar to a multi-tenant design.

Here are a few questions I have:

1. Could Postgres support this many DBs? Are there any weird things that
happen when Postgres is used this way?
2. Is the schema method better? Performance, maintainability, backups,
vacuum? Weird issues?


Any insight is greatly appreciated.

Thanks.
Frank


Re: [GENERAL] Experience with many schemas vs many databases

2009-11-15 Thread John R Pierce

undisclosed user wrote:

Hello everyone,

I have hit a wall on completing a solution I am working on. 
Originally, the app used a db per user (on MyISAM)... the solution did 
not fare so well in reliability and performance. I have been 
increasingly interested in Postgres lately. 

Currently, I have about 30-35k users/databases. The general table 
layout is the same; only the data is different. I don't need to 
share data across databases. Very similar to a multi-tenant design.


35000 users with separate databases isn't going to scale well on ANY 
conventional system I'm familiar with.





Re: [GENERAL] Experience with many schemas vs many databases

2009-11-15 Thread Jorge Godoy
Frank,

I had the same question a while ago, and another thing that made me think
was the amount of data per user.

In the end, I decided on using a single DB and single schema and added a
clause to split everything by customer (customer_id).

I then added an index on that column, and my code became simpler and fast
enough.

This also allowed me to work with some other aggregates that provided very
useful global statistics.
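
A minimal sketch of that layout (table and column names invented for
illustration):

CREATE TABLE orders (
    order_id    serial PRIMARY KEY,
    customer_id integer NOT NULL,
    payload     text
);

CREATE INDEX orders_customer_id_idx ON orders (customer_id);

-- every query then carries the tenant clause:
SELECT * FROM orders WHERE customer_id = $1;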

--
Jorge Godoy jgo...@gmail.com





Re: [GENERAL] Fast Search on Encrypted Field

2009-11-15 Thread Alban Hertroys
On 14 Nov 2009, at 22:27, Naoko Reeves wrote:

 I have an encrypted column using the encrypt function.
 Querying against this column is almost not acceptable - returning 12 rows 
 took 25,908 ms.
 The query was simply: SELECT decrypt(phn_phone_enc) FROM phn WHERE 
 decrypt(phn_phone_enc, 'xxx', 'xxx') LIKE '123%'
 So I built an index like: CREATE INDEX idx_phn_phone_dec ON phn 
 (decrypt(phn_phone_enc, 'xxx', 'xxx'))
 This returns 12 rows in 68 ms.
 Would this be the solution for fast encrypted-field search, or does this 
 raise a security issue?

Is there some way you can invert the process?
Normally if you verify encrypted data (typically passwords) you would encrypt 
the user-specified data and compare that to the encrypted data in the database 
instead of decrypting both and comparing the actual data. I doubt you can do 
that with partial data though, and since you're showing a LIKE expression 
here...

One thing I notice in your query is that you're decrypting your data twice; 
you're calling two different functions for the same purpose. You may notice a 
speed-up if you use decrypt(text) instead of decrypt(text, text, text) in your 
where clause, provided that the function is defined stable[1] or immutable[2] 
as opposed to the default volatile[3].

Or is decrypting only a part of the encrypted string significantly faster? That 
would imply some linear encryption algorithm in which case you may be able to 
use my initial suggestion: Encrypting '123' would create something that's 
comparable to the first 3 encrypted characters of an encrypted phone number. A 
query would look like: SELECT decrypt(phn_phone_enc) FROM phn WHERE 
phn_phone_enc LIKE encrypt('123')||'%'. Here as well it is important that 
encrypt(text) is defined stable or immutable.

1. The same input data combined with data from the DB (as it is visible to the 
transaction) always yields the same result.
2. The same input data always yields the same result.
3. There is no correlation between the input data and the result.
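
Tying that together, a sketch of an indexable wrapper (this assumes
pgcrypto's decrypt(bytea, bytea, text); the key and cipher are placeholders,
and IMMUTABLE is only honest if they never change):

CREATE FUNCTION decrypt_phone(enc bytea) RETURNS text AS $$
    SELECT convert_from(decrypt(enc, 'xxx'::bytea, 'aes'), 'UTF8')
$$ LANGUAGE sql IMMUTABLE;

CREATE INDEX idx_phn_phone_plain ON phn (decrypt_phone(phn_phone_enc) text_pattern_ops);

SELECT decrypt_phone(phn_phone_enc)
FROM phn
WHERE decrypt_phone(phn_phone_enc) LIKE '123%';

The text_pattern_ops opclass lets the LIKE '123%' prefix match use the index
in any locale. The security caveat remains: the index stores the decrypted
values on disk.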

Alban Hertroys

--
Screwing up is the best way to attach something to the ceiling.







Re: [GENERAL] Experience with many schemas vs many databases

2009-11-15 Thread Johan Nel

undisclosed user wrote:
I have hit a wall on completing a solution I am working on. Originally, 
the app used a db per user (on MyISAM)... the solution did not fare so 
well in reliability and performance. I have been increasingly interested 
in Postgres lately. 

Currently, I have about 30-35k users/databases. The general table layout 
is the same; only the data is different. I don't need to share data 
across databases. Very similar to a multi-tenant design.


Here are a few questions I have:

1. Could Postgres support this many DBs? Are there any weird things that 
happen when Postgres is used this way?
As John indicated, there isn't any traditional environment that will handle 
that well.


2. Is the schema method better? Performance, maintainability, backups, 
vacuum? Weird issues?
I would rather use schemas to logically group tables together.  Insert a 
user_id column in the tables and ensure each user can only see the rows he 
has access to via query design.  Something along the lines of:


CREATE OR REPLACE VIEW SomeTableQuery AS
  SELECT * FROM SomeTable WHERE user_id = current_user;

Where SomeTable has a column user_id that defaults to current_user.
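
A sketch of that table definition (other columns invented):

CREATE TABLE SomeTable (
    id      serial PRIMARY KEY,
    payload text,
    user_id name NOT NULL DEFAULT current_user
);

For the filter to mean anything, each user must connect as his own database
role and be granted access to the view rather than the base table.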

Johan Nel
Pretoria, South Africa.




Re: [GENERAL] Fast Search on Encrypted Field

2009-11-15 Thread Merlin Moncure
On Sat, Nov 14, 2009 at 5:08 PM, John R Pierce pie...@hogranch.com wrote:
 Naoko Reeves wrote:

 I have an encrypted column using the encrypt function.

 Querying against this column is almost not acceptable - returning 12 rows
 took 25,908 ms.

 The query was simply: SELECT decrypt(phn_phone_enc) FROM phn WHERE
 decrypt(phn_phone_enc, 'xxx', 'xxx') LIKE '123%'

 So I built an index like: CREATE INDEX idx_phn_phone_dec ON phn
 (decrypt(phn_phone_enc, 'xxx', 'xxx'))

 This returns 12 rows in 68 ms.

 Would this be the solution for fast encrypted-field search or does
 this raise a security issue?


 very ugly for all the reasons given by the others.

 if all your LIKE searches are specifically for the first 3 chars of the
 phn_phone field, I dunno, but maybe you could have a separate field (and
 index) which consists of just the first three characters encrypted.

 this would, however, allow people with access to the database but without
 the cipher key to group your data by this prefix, and potentially the
 knowledge of that grouping could be used as an aid in cracking the keys.

I'm not good enough with encryption to know for sure, but I suspect
that doing this would give a _determined_ attacker enough information
to break into your data (just for starters, you reduce the pool of
numbers that would have to be guessed by a few orders of magnitude,
and you give away geographical location).  By determined I mean
someone with a complete copy of your database, time, cpu power, and a
bad attitude :-).  I'm not saying not to do it, but the risks should
be known.  If you are just looking to block casual access to the data,
it's probably ok.  Even a single digit would reduce the brute force
search by 90% if the numbers distribute well.

That said, I'm betting you can't really budget 25 seconds of cpu time
for a common lookup search.  So you either have to compromise security
or re-think the way you are doing lookups...
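
If that compromise is acceptable, a sketch of the prefix-column idea (this
assumes pgcrypto's raw encrypt(bytea, bytea, text), which is deterministic
for a fixed key; key and cipher are placeholders, and the prefix has to be
computed while the plaintext is still in hand):

ALTER TABLE phn ADD COLUMN phn_prefix_enc bytea;

-- populated at insert time, alongside the fully encrypted number:
--   phn_prefix_enc = encrypt(convert_to(substr(plain_phone, 1, 3), 'UTF8'),
--                            'xxx'::bytea, 'aes')

CREATE INDEX idx_phn_prefix_enc ON phn (phn_prefix_enc);

-- the old LIKE '123%' becomes an exact match on the encrypted prefix:
SELECT decrypt(phn_phone_enc, 'xxx'::bytea, 'aes')
FROM phn
WHERE phn_prefix_enc = encrypt(convert_to('123', 'UTF8'), 'xxx'::bytea, 'aes');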

merlin




Re: [GENERAL] Fast Search on Encrypted Field

2009-11-15 Thread Naoko Reeves
As Alban pointed out, encrypting the search value and comparing it to the
stored encrypted value is very fast, though it can't do a LIKE search.
After I received valuable input from Merlin, Bill and John, I did some
research on searching against encrypted fields in general, and as everyone
advised, I must acknowledge the cost of encrypted data for searching and
consider alternative methods (partial encryption, storing in a different
table, etc.).
Thank you very much again for all the advice.



Re: [GENERAL] Experience with many schemas vs many databases

2009-11-15 Thread Tom Lane
undisclosed user lovetodrinkpe...@gmail.com writes:
 I have hit a wall on completing a solution I am working on. Originally, the
 app used a db per user (on MyISAM)... the solution did not fare so well in
 reliability and performance. I have been increasingly interested in Postgres
 lately.

 Currently, I have about 30-35k users/databases. The general table layout is
 the same; only the data is different. I don't need to share data across
 databases. Very similar to a multi-tenant design.

Use multiple schemas, not multiple databases.  If you had it working in
mysql then what you were using was more nearly schemas than databases
anyway --- it's unfortunate that the two systems use the same word
"database" for what are really different structures.

regards, tom lane



Re: [GENERAL] Experience with many schemas vs many databases

2009-11-15 Thread Merlin Moncure
On Sun, Nov 15, 2009 at 1:28 AM, undisclosed user
lovetodrinkpe...@gmail.com wrote:
 Hello everyone,
 I have hit a wall on completing a solution I am working on. Originally, the
 app used a db per user (on MyISAM)... the solution did not fare so well in
 reliability and performance. I have been increasingly interested in Postgres
 lately.
 Currently, I have about 30-35k users/databases. The general table layout is
 the same; only the data is different. I don't need to share data across
 databases. Very similar to a multi-tenant design.
 Here are a few questions I have:
 1. Could Postgres support this many DBs? Are there any weird things that
 happen when Postgres is used this way?
 2. Is the schema method better? Performance, maintainability, backups,
 vacuum? Weird issues?


Use schemas.  Here's a pro tip: if you have any sql or pl/pgsql
functions, you can use the same function body across all the schemas, as
long as you discard the plans when you want to move from schema to
schema.
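
A sketch of the switch (schema name invented):

SET search_path TO tenant_00042, public;
DISCARD PLANS;  -- drop cached plans that still point at the previous schema

With unqualified table names inside the function bodies, the same sql or
pl/pgsql function then resolves against whichever tenant schema is first on
the search_path.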

I'm curious if those suggesting there is a practical upper limit on
the number of schemas postgres can handle have any hard information to
back that up...

merlin



Re: [GENERAL] Experience with many schemas vs many databases

2009-11-15 Thread Scott Marlowe
On Sun, Nov 15, 2009 at 11:54 AM, Merlin Moncure mmonc...@gmail.com wrote:

 Use schemas.  Here's a pro tip: if you have any sql or pl/pgsql
 functions, you can use the same function body across all the schemas, as
 long as you discard the plans when you want to move from schema to
 schema.

I too vote for schemas.

 I'm curious if those suggesting there is a practical upper limit on
 the number of schemas postgres can handle have any hard information to
 back that up...

The real limit is the performance of certain operations over the catalogs:
it's not so much the number of schemas as the total number of objects in
the db that impacts me, and even then only with Slony.  Everything else
runs fine with ~40k objects in my db.



Re: [GENERAL] Experience with many schemas vs many databases

2009-11-15 Thread John R Pierce

undisclosed user wrote:
Currently, I have about 30-35k users/databases. The general table 
layout is the same; only the data is different. I don't need to 
share data across databases. Very similar to a multi-tenant design.


Do these users make their own arbitrary SQL queries?  Or is all the 
coding canned, and they are simply running applications?  In the latter 
case, I would definitely suggest using a single database and schema, with 
one set of tables, and having CustomerID be a field that you index in 
these tables.










Re: [GENERAL] Voting: pg_ctl init versus initdb

2009-11-15 Thread Greg Smith

Zdenek Kotala wrote:

 1) Yeah I like pg_ctl init

 pg_ctl init will be the preferred method and initdb will
 disappear from usr/bin in the future.

I agree with this position.  My own database wrapper scripts work this 
way already, and it would be nice for them to have one more command that 
maps directly to a pg_ctl call rather than needing to special-case 
initdb instead.  There's also the precedent that the RPM scripts provide 
an initdb target so that the user doesn't need to know how to use 
initdb directly; in the field, that's what I tell people to use in 
an RPM environment, rather than calling initdb directly.


I believe that the fact that there's this separate binary named initdb 
you only call once, and that has a name unlike all of the rest of the 
binaries, would be considered a bad design were that decision being made 
from a UI and packaging perspective right now.  Zdenek is completely 
correct to identify this inconsistency, the tiny bump it adds to the 
learning curve, and the difficulty it adds to packaging as things that 
should be improved.  Every unique thing you have to know in order to 
start using the database costs a little bit of time, and I'm always in 
favor of anything that removes one of those from the list, even if it's 
a small one.  If anything, I think you're not going far enough.  Not 
only should pg_ctl init work, pg_ctl start should be more helpful in 
the way service postgresql start is--suggesting to the user that they 
need the init step if the cluster doesn't exist.


That said, I wouldn't even bother trying to get such a change committed, 
as this project actively resists changes that impact backward 
compatibility merely to improve the inexperienced user experience.  It 
will be an uphill battle the whole way, beset by people who don't have 
to spend enough time with PostgreSQL newbies enough to appreciate what 
they struggle with.  If I could avoid having to teaching them initdb, 
and instead just mention it as another option during the pg_ctl lesson, 
that would be one less thing to have to train on.


--
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [GENERAL] Experience with many schemas vs many databases

2009-11-15 Thread undisclosed user
If I were to switch to a single DB/single schema format shared among all
users, how can I back up each user individually?

Frank




Re: [GENERAL] Experience with many schemas vs many databases

2009-11-15 Thread John R Pierce

undisclosed user wrote:
If I were to switch to a single DB/single schema format shared among 
all users , how can I backup each user individually?


depending on how many tables, etc., I suppose you could use a separate 
series of SELECT statements ...
but if this is a requirement, it certainly puts constraints on how you 
organize your data.  Without a much deeper knowledge of your 
application, data, and requirements, it's kind of hard to give any sort 
of recommendation.  You mentioned MyISAM, so I gather this data isn't 
at all transactional, nor is relational integrity a priority.
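
a sketch of that idea with COPY, one file per table per user (names
invented; the path is server-side, so permissions apply):

COPY (SELECT * FROM orders WHERE customer_id = 42)
    TO '/backups/user_42_orders.csv' WITH CSV HEADER;

with one schema per user instead, pg_dump -n user_42 would give a per-user
backup directly.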












Re: [GENERAL] Voting: pg_ctl init versus initdb

2009-11-15 Thread Simon Riggs
On Sat, 2009-11-14 at 15:07 +0100, Zdenek Kotala wrote:
 extend pg_ctl functionality and add an init command which does the same
 thing as initdb

If we did add an extra option then the option would be initdb, not
init.  It would take us all years to remove all evidence of the phrase
initdb from the mailing lists and our minds.

-- 
 Simon Riggs   www.2ndQuadrant.com




[GENERAL] Config help

2009-11-15 Thread BuyAndRead Test
Hi

I need some help with our postgresql.conf file. I would appreciate it if
someone could look at the values and tell me if they look alright or if I
need to change anything.

The db server has 4 GB of memory and one quad core CPU (2.53 GHz). 
The hard drives are on an iSCSI array configured as follows:
DB data: 4 x SAS (10,000 rpm) disks in RAID 10
DB WAL: 2 x SAS (10,000 rpm) disks in RAID 1
OS: Linux (Debian Lenny)
DB: PostgreSQL 8.4

The DB is used by a website. It has 75 tables and about 10 million rows in
total. The total size of the DB data (data+indexes?) is reported to be about
4 GB when I use the \l+ command in version 8.4.

I have used the following link as a guide:
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server 
These are the changes I have made to the default postgresql.conf file:
shared_buffers = 2048MB
work_mem = 6MB
wal_buffers = 256kB
checkpoint_segments = 20
random_page_cost = 3.0
default_statistics_target = 50

Should I change the default value of temp_buffers or maintenance_work_mem as
well, and what value should I choose? Are there any other values that should
be changed from the defaults?

And another question: Is there a way to find out the maximum number of
simultaneous connections that has been used? I think that I could reduce the
max number of connections to save some memory.

Regards

Bjørn Håkon






Re: [GENERAL] Voting: pg_ctl init versus initdb

2009-11-15 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 On Sat, 2009-11-14 at 15:07 +0100, Zdenek Kotala wrote:
 extend pg_ctl functionality and add an init command which does the same
 thing as initdb

 If we did add an extra option then the option would be initdb, not
 init.  It would take us all years to remove all evidence of the phrase
 initdb from the mailing lists and our minds.

init is already embedded in various packagers' initscripts.  And
I thought the entire point of this proposal was that we could expunge
knowledge of initdb from users' minds.  (I'm dubious of that too,
but varying from what's already established at the script level will
not help.)

regards, tom lane



Re: [GENERAL] Config help

2009-11-15 Thread Scott Marlowe
On Sun, Nov 15, 2009 at 2:43 PM, BuyAndRead Test t...@buyandread.com wrote:
 Hi

 I need some help with our postgresql.conf file. I would appreciate it if
 someone could look at the values and tell me if they look alright or if I
 need to change anything.

 The db server has 4 GB of memory and one quad core CPU (2.53 GHz).
 The hard drives are on an iSCSI array configured as follows:
 DB data: 4 x SAS (10,000 rpm) disks in RAID 10
 DB WAL: 2 x SAS (10,000 rpm) disks in RAID 1

Is there a battery backed cache in there somewhere?  That would help
on handling high write loads.

 OS: Linux (Debian Lenny)
 DB: PostgreSQL 8.4

 The DB is used by a website. It has 75 tables and about 10 million rows in
 total. The total size of the DB data (data+indexes?) is reported to be
 about 4 GB when I use the \l+ command in version 8.4.

The cheapest performance boost would be more memory.  Going to 8Gigs
would let the whole db get cached and leave enough memory over for
sorts and OS etc.

 I have used the following link as a guide:
 http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
 This is the changes I have done to the default postgresql.conf file:
 shared_buffers = 2048MB

A little high for a machine with only 4G ram.  With 8G, if you
allocate 4G for shared_buffers you'd leave 4G for OS and pg.  Here
you're only leaving 2G.

 work_mem = 6MB

Depending on your workload it might be better to raise this and lower
shared_buffers.

 wal_buffers = 256kB
 checkpoint_segments = 20
 random_page_cost = 3.0
 default_statistics_target = 50

The new default is 100, I'd tend to stick with that unless you have
very uniform data.

 Should I change the default value of temp_buffers or maintenance_work_mem
 as well, and what value should I choose? Are there any other values that
 should be changed from the defaults?

Always consider cranking up maintenance_work_mem, because not many things
use it, and the things that do can really use it.
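
For example, something like this in postgresql.conf is a reasonable starting
point on a 4-8 gig box (a value to tune from, not a recommendation for your
exact workload):

maintenance_work_mem = 256MB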

 And another question: Is there a way to find out the maximum number of
 simultaneous connections that has been used? I think that I could reduce
 the max number of connections to save some memory.

You'd really need to track that yourself with some kind of simple
script (bash):

while true; do psql mydb -c "select count(*) from pg_stat_activity"; sleep 60; done | tee myconn.log

or something like that.



Re: [GENERAL] Config help

2009-11-15 Thread BuyAndRead Test
Thanks for the quick and helpful reply.

Yes, the storage array has a battery-backed cache; it’s a Dell PowerVault
MD3000i, with dual controllers.

This is a virtual server, so I could give it as much as 8 GB of memory if
this will give much higher performance. What should shared_buffers be set to
if I use 8 GB, as much as 4 GB?

-Bjørn




Re: [GENERAL] Config help

2009-11-15 Thread John R Pierce

BuyAndRead Test wrote:

This is a virtual server, so I could give it as much as 8 GB of memory if
this will give much higher performance. What should shared_buffers be set to
if I use 8 GB, as much as 4 GB?
  



I'd keep it around 1-2GB shared_buffers, and let the rest of the memory 
be used as file system cache.  postgres works quite happily that way.






Re: [GENERAL] [pgeu-general] pgday.eu

2009-11-15 Thread Thom Brown
2009/11/14 Thom Brown thombr...@gmail.com:
 2009/11/14 Thom Brown thombr...@gmail.com

 Mr Fetter has allowed me to post his lightning talk on lightning talks:
 http://vimeo.com/7602006
 Thom

 Harald's lightning talk is also available with his
 permission: http://vimeo.com/7610987
 Thom

Sorry, I've only just noticed that I'd accidentally set Harald's video
from password-protected to completely private.  Fixed now.

Thom



[GENERAL] Triggering from a specific column update

2009-11-15 Thread Bob Pawley
"PostgreSQL does not support specific column updates in triggers."

I found this statement on a blog.

Is there a workaround for this?

I've attempted using 'new' (referring to the specific column) without success.

Bob

Re: [GENERAL] Triggering from a specific column update

2009-11-15 Thread Tom Lane
Bob Pawley rjpaw...@shaw.ca writes:
 PostgreSQL does not support specific column updates in triggers.
 I found this statement on a blog.

 Is there a workaround for this?

If you'd explain what you think that statement means, maybe we could
help you ...

regards, tom lane



Re: [GENERAL] Triggering from a specific column update

2009-11-15 Thread Bob Pawley

I'm trying to trigger from an update.

However, the trigger functions when any column has been updated.

I have columns pump1 and pump2 and column serial.

When pump1 is updated the trigger function performs properly (one row is 
returned).

When pump2 is updated the trigger function returns two rows (one row for 
column pump1 and one for column pump2). I end up with two rows of pump1 and 
one row of pump2.


If I write the function with a null check such as:

IF NEW.pump1 = 'True'
THEN
    INSERT INTO p_id.devices (p_id_id, process_id, fluid_id, status,
                              process_graphics_id, device_description)
    SELECT p_id.processes.p_id_id, p_id.processes.process_id,
           p_id.processes.fluid_id, 'Pump #1', '11', 'Pump'
    FROM p_id.processes
    WHERE NEW.pump1 = 'True'
      AND p_id.processes.pump2 IS NULL;

it works fine, returning what I want. However, when the serial column is 
updated I get a return which includes pump1 and pump2 as well as the serial 
column.


Hope this elucidates you?

Bob




Re: [GENERAL] Triggering from a specific column update

2009-11-15 Thread Tom Lane
Bob Pawley rjpaw...@shaw.ca writes:
 Hope this elucidates you?

No, it's all handwaving.  In particular, showing only a fragment from
a case that does work as you expect doesn't illuminate what's not
working.   Please show the whole table definition, the whole trigger,
and the specific case that's not doing what you expect.

regards, tom lane




[GENERAL] When running pgsql2shp it truncates fields that are over 10 characters. How can I prevent this from occurring?

2009-11-15 Thread John Mitchell
When running pgsql2shp it truncates fields that are over 10 characters.  How
can I prevent this from occurring?

John

-- 
John J. Mitchell


Re: [GENERAL] Triggering from a specific column update

2009-11-15 Thread Adrian Klaver
On Sunday 15 November 2009 5:18:20 pm Tom Lane wrote:
 Bob Pawley rjpaw...@shaw.ca writes:
  Hope this elucidates you?

 No, it's all handwaving.  In particular, showing only a fragment from
 a case that does work as you expect doesn't illuminate what's not
 working.   Please show the whole table definition, the whole trigger,
 and the specific case that's not doing what you expect.

   regards, tom lane

The above would help greatly in coming to a correct answer. In the meantime, 
the problem seems to be that the trigger fires and inserts a row every time it 
sees a NEW.pump* value = 'True'. Since an update in Postgres is basically an 
insert/delete operation, every time you update you will get back the existing 
values as well as any changed values in the current update. This means if you 
do sequential updates changing pump1 to 'True', pump2 to 'True', and the serial 
value, the trigger will keep inserting rows, because the NEW.pump1 value will 
meet the IF condition. The way I have dealt with this is to do NEW.*/OLD.* 
comparisons to determine if I am truly looking at a changed value or a recycled 
one.
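
A minimal sketch of that comparison, reusing the column names from this
thread:

IF NEW.pump1 = 'True' AND NEW.pump1 IS DISTINCT FROM OLD.pump1 THEN
    -- pump1 actually changed in this UPDATE; insert the device row here
END IF;

IS DISTINCT FROM also catches the NULL-to-'True' transition that a plain =
comparison would miss.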

-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] Voting: pg_ctl init versus initdb

2009-11-15 Thread Greg Smith

Tom Lane wrote:

Simon Riggs si...@2ndquadrant.com writes:
  

If we did add an extra option then the option would be initdb, not
init.  It would take us all years to remove all evidence of the phrase
initdb from the mailing lists and our minds.



init is already embedded in various packagers' initscripts.  And
I thought the entire point of this proposal was that we could expunge
knowledge of initdb from users' minds.

Exactly.  I think the best transition design would be to make initdb 
and init both work.


--
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com



[GENERAL] money binary representation

2009-11-15 Thread Konstantin Izmailov
I'm trying to read a money field using PQgetvalue (PostgreSQL 8.3.7). The
function returns 9 bytes, something like 0h 0h 0h 0h 0h 0h 14h 0h 0h, for the
value '$50.2'. I could not find a description anywhere of how to convert the
binary data into, for example, a double precision number.

Would you please help me find a method of converting binary money data
into double precision?

Thank you!


Re: [GENERAL] money binary representation

2009-11-15 Thread John R Pierce

Konstantin Izmailov wrote:
I'm trying to read money field using PQgetvalue (PostgreSQL 8.3.7). 
The function returns 9 bytes, smth like 0h 0h 0h 0h 0h 0h 14h 0h 0h, 
for the value '$50.2'. I could not find description anywhere on how to 
convert the binary data into, for example, a double precision number.
 
Would you please help me find a method of converting binary money 
data into a double precision?
 


It's my understanding that MONEY is deprecated and that you really should 
store money values as NUMERIC instead.

A wild guess says that's some variant on NUMERIC, which is stored in 
base 10000 as a series of 16-bit integers, with a fixed point fraction.

Why would you convert money to floating point?  $0.10 in binary floating 
point is a repeating fraction which can't be represented exactly.

btw, are you sure your value isn't $51.20?  0x1400 is 5120 decimal.
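
A quick way to sanity-check that reading from SQL (the bytes interpreted as
a big-endian integer count of pennies):

SELECT x'1400'::int AS pennies, x'1400'::int * 0.01 AS dollars;
-- pennies | dollars
--    5120 |   51.20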




Re: [GENERAL] money binary representation

2009-11-15 Thread Tom Lane
Konstantin Izmailov pgf...@gmail.com writes:
 I'm trying to read money field using PQgetvalue (PostgreSQL 8.3.7). The
 function returns 9 bytes, smth like 0h 0h 0h 0h 0h 0h 14h 0h 0h, for the
 value '$50.2'. I could not find description anywhere on how to convert the
 binary data into, for example, a double precision number.

 Would you please help me find a method of converting binary money data
 into a double precision?

It's a binary integer, either int4 or int8 depending on which PG version
you're working with, measured in pennies (or whatever the minimum unit
of your currency is).  So that should correspond to 5020.

regards, tom lane



Re: [GENERAL] money binary representation

2009-11-15 Thread Konstantin Izmailov
Right, the value is '$51.20'! Now I understand how to interpret the bytes -
thank you!

I have to work with an existing database, and I do not know why they still
use money fields.



[GENERAL] passing parameters to multiple statements

2009-11-15 Thread Konstantin Izmailov
I'm planning to use multiple statements via libpq. Before starting to code,
I'm trying to understand whether there are any limitations on passing
parameters. E.g., would the following work:

  PQexecParams(conn,
      "BEGIN;"
      "INSERT INTO tbl VALUES($1,$2);"
      "SELECT lastval();"
      "SELECT * INTO AUDIT FROM (SELECT $3, 'tbl action', lastval());"
      "COMMIT;",
      3, ...);

Thank you!