Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread Chris Travers
On Tue, Sep 4, 2012 at 9:06 PM, Ondrej Ivanič ondrej.iva...@gmail.com wrote:

 Hi,

 On 5 September 2012 12:14, Chris Travers chris.trav...@gmail.com wrote:

  So people are using PostgreSQL in roles that aren't very visible anyway,
  DBA's are usually coming to PostgreSQL from other RDBMS's, and few
  applications are really distributed for PostgreSQL.
  snip
   Not only
  this but there was significant interest in moving more db's to
  PostgreSQL, but the big limitation is that everyone who knows
  PostgreSQL already has a job.

 Some shops are going the opposite way -- from PostgreSQL to MySQL-like
 databases because of missing replication features. 9.1 caught up,
 but there is no multi-master replication like in Percona's XtraDB
 Cluster: http://www.percona.com/software/percona-xtradb-cluster/

 Postgres-XC can solve this missing multi-master replication issue but
 nobody knows that this project exists. Another project is Galera
 Cluster for PostgreSQL (Galera is used in XtraDB) but this looks like
 vaporware...


To be fair, I was speaking specifically of the folks I talked to at
MYGOSSCON.  The major question was "Do we really need Oracle?"

Also I don't know about others but I have been trying to highlight
Postgres-XC wherever it seems appropriate.

Best Wishes,
Chris Travers


Re: [GENERAL] postgres process got stuck in notify interrupt waiting status

2012-09-05 Thread Albe Laurenz
John R Pierce wrote:
 was this a client process or a postgres process?   kill -9 on postgres
 processes can easily trigger data corruption.

It definitely shouldn't cause data corruption, otherwise
PostgreSQL would not be crash safe.

Yours,
Laurenz Albe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Are there any options to parallelize queries?

2012-09-05 Thread Seref Arikan
Thanks Aleksey,
Definitely worth noting. Impressive scalability according to slides. The
use of Java is particularly interesting to me.

Best regards
Seref


On Wed, Sep 5, 2012 at 6:27 AM, Aleksey Tsalolikhin atsaloli.t...@gmail.com wrote:

 Hi, Seref.  You might want to take a look at Stado:
 http://www.slideshare.net/jim_mlodgenski/scaling-postresql-with-stado

 Best,
 -at



Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread Oliver Kohll - Mailing Lists
Here's a bit of positive news spin - in a backhanded way perhaps, but still a 
compliment:

http://www.theregister.co.uk/2012/08/31/postgresql_too_cool_for_school/

Oliver
www.agilebase.co.uk





Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread Ivan Sergio Borgonovo
On Tue, 4 Sep 2012 19:14:28 -0700
Chris Travers chris.trav...@gmail.com wrote:

 So people are using PostgreSQL in roles that aren't very visible
 anyway, DBA's are usually coming to PostgreSQL from other RDBMS's,
 and few applications are really distributed for PostgreSQL.

I know a bunch of people working for huge sites that love Postgres but
use MySQL. The main reason is they build what Postgres is famous for at
a higher level and in a more specialized way with their own glue.

It's easy to get visibility if you're on the internet and you're huge.

But not everyone can rebuild e.g. transactions at a higher level or
needs such specialized solutions.

On the other hand, for historical reasons, MySQL and PHP have nearly
monopolized the hosting space, and for many web sites it's hard to
appreciate the difference between Postgres and MySQL (unless your DB
crashes and burns).

That's what most people perceive as the mainstream when there's no
big marketing dept lying to them.

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it





Re: [GENERAL] postgres process got stuck in notify interrupt waiting status

2012-09-05 Thread Craig Ringer

On 09/05/2012 12:21 PM, John R Pierce wrote:

was this a client process or a postgres process?   kill -9 on postgres
processes can easily trigger data corruption.


It certainly shouldn't.

kill -9 of the postmaster, deletion of postmaster.pid, and re-starting 
postgresql *might* but AFAIK even then you'll have to bypass the shared 
memory lockout (unless you're on Windows).


--
Craig Ringer





Re: [GENERAL] Maintaining a materialized view only on a replica

2012-09-05 Thread Herouth Maoz
It's not an issue with the replication software. The reason the parts of the 
transaction are written out of order is that the original system that writes 
them in the first place makes no guarantees as to the order of writing.

So basically my question is whether it is feasible for a trigger to run a full
aggregate SQL query on the table that triggered it, joining with another table,
checking the rows returned, and doing the insert into the second table only
when the data is complete, because that's basically what I need to do.
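For what it's worth, such a trigger is feasible in principle. Here is a minimal sketch; every table and column name below is invented for illustration, since the real schema isn't shown:

```sql
-- Hypothetical schema: "segments" holds the out-of-order parts, "txn_info"
-- knows how many parts each transaction should have, and "txn_summary" is
-- the table to fill only once all parts of a transaction have arrived.
CREATE OR REPLACE FUNCTION segments_check_complete() RETURNS trigger AS $$
BEGIN
    IF (SELECT count(*) FROM segments s WHERE s.txn_id = NEW.txn_id) =
       (SELECT t.expected_parts FROM txn_info t WHERE t.txn_id = NEW.txn_id)
    THEN
        INSERT INTO txn_summary (txn_id, total)
        SELECT NEW.txn_id, sum(s.amount)
          FROM segments s
         WHERE s.txn_id = NEW.txn_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_segments_complete
    AFTER INSERT OR UPDATE ON segments
    FOR EACH ROW EXECUTE PROCEDURE segments_check_complete();
```

Note that the aggregate query runs once per replicated row, so an index on segments(txn_id) would matter for throughput, and you would need some guard against inserting the summary row twice if a completed transaction's rows are updated again.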

Herouth


On 05/09/2012, at 00:52, Craig Ringer wrote:

 Subject changed to describe the problem. Reply in-line.
 
 On 09/04/2012 07:57 PM, Herouth Maoz wrote:
 
 The issue is that when an insert or an update is fired, I can't say
 whether all the segments of the same transaction have been written yet,
 and if only some of them were written, there is no guarantee on the
 order in which they are written.
 
 Does Slony-I provide stronger guarantees? If your replication doesn't 
 guarantee ordering then you're going to have a very hard time doing this.
 
 Is this
 feasible at all? How would you achieve it?
 
 I'd try to find a replication system that guaranteed ordering if at all 
 possible.
 
 --
 Craig Ringer


--
Herouth Maoz
Unicell Advanced Cellular Solutions
☎ 03-5181717 ext. 742



Re: [GENERAL] Error stopping postgresql service on a standby server.

2012-09-05 Thread Dipti Bharvirkar
Hi,

Has anyone encountered this issue? Why would the WAL receiver process not
stop when postmaster is shutdown?
Any suggestions on how to avoid running into this error or ways to recover
from it?

Thank you in advance for any inputs on this,

Dipti

On Fri, Aug 31, 2012 at 1:17 PM, Dipti Bharvirkar wrote:

 Hi,

 In our project, we use Postgres 9.1.3 and asynchronous streaming
 replication.
 Recently, on a couple of our setups, we saw issues stopping the Postgres
 service on the standby server after streaming replication was set up.

 The command "service postgresql stop" returned a failure message. We
 use "pg_ctl stop -D '$PGDATA' -s -m fast" in the Postgres service script to
 stop the server.
 To see if some active client connections were causing the failure to
 stop, I ran the query "SELECT * FROM pg_stat_activity;".
 It failed with the following error: psql: FATAL:  the database system is
 shutting down

 ps -ef | grep postgres returned the following:
 postgres 14033     1  0 Aug28 ?        00:00:01 /usr/pgsql-9.1/bin/postmaster -p 5432 -D /var/lib/pgsql/data
 postgres 14044 14033  0 Aug28 ?        00:00:00 postgres: logger process
 postgres 14046 14033  0 Aug28 ?        00:00:00 postgres: writer process
 postgres 14047 14033  0 Aug28 ?        00:00:00 postgres: stats collector process
 postgres 14912 14033  0 Aug28 ?        00:00:00 postgres: wal receiver process
 root     31519  3003  0 06:18 pts/2    00:00:00 grep postgres

 netstat -anp | grep 5432 returns the following:
 tcp        0      0 0.0.0.0:5432        0.0.0.0:*           LISTEN      14033/postmaster
 tcp        0      0 127.0.0.1:5432      127.0.0.1:60597     TIME_WAIT   -
 tcp        0      0 127.0.0.1:5432      127.0.0.1:60589     TIME_WAIT   -
 tcp    67288      0 1.1.1.1:61500       2.2.2.2:5432        ESTABLISHED 14912/postgres: wal

 I had a few queries based on some of the observations -

   1. On one of the setups where a similar issue was observed, we stopped the
   Postgres service on the master server. As a result, the sender
   process on the master and consequently the receiver process on the
   standby stopped. After this, the Postgres service could successfully be
   stopped on the standby server. This, coupled with the output of the two
   commands above, makes me believe that it is the WAL receiver
   process that is not getting terminated, and that because of this the
   Postgres service on the standby server does not stop. Is this assumption
   right?
   2. If yes, what could cause the receiver process to not terminate?
   Shouldn't it stop gracefully when a shutdown command is
   received? When the issue occurred, we had minimal activity on the master
   server. There were no long-running transactions being committed to the
   master and streamed to the standby when the issue occurred. Even if there
   were, could that cause the receiver process to not terminate?
   3. How can we avoid running into this issue? Could we be missing some
   step that is essential for a graceful shutdown of the service on a standby?
   4. On one setup where the issue was seen, since the "-m fast" option with
   pg_ctl stop did not help in stopping the service, I used the "-m
   immediate" option. The service stopped (I understand that this option
   aborts the processes without a clean shutdown and so is not a safe option).
   The service would not start back up; we saw the "invalid record length"
   error during startup (I guess this was expected since it wasn't a clean
   shutdown). A pg_resetxlog helped recover from this issue. However, that
   seems risky too, since there is a chance of data inconsistency. What is the
   best way to recover from this error if it occurs again?

 Thanks,
 Dipti
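As a first diagnostic step, it may help to record the standby's replication state just before attempting the shutdown. A sketch for 9.1 (these monitoring functions and the pg_stat_replication view exist in 9.1; the procpid column became pid in 9.2):

```sql
-- On the standby: confirm it is still in recovery, and how far WAL
-- has been received vs. replayed.
SELECT pg_is_in_recovery();
SELECT pg_last_xlog_receive_location(),
       pg_last_xlog_replay_location();

-- On the master: one row per connected standby's WAL sender.
SELECT procpid, state, sent_location, replay_location
  FROM pg_stat_replication;
```

Comparing receive vs. replay locations shows whether the standby is busy catching up at the moment the shutdown hangs.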



Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread Achilleas Mantzios
On Wed 05 Sep 2012 10:51:49 Ivan Sergio Borgonovo wrote:
 On Tue, 4 Sep 2012 19:14:28 -0700
 Chris Travers chris.trav...@gmail.com wrote:
 
  So people are using PostgreSQL in roles that aren't very visible
  anyway, DBA's are usually coming to PostgreSQL from other RDBMS's,
  and few applications are really distributed for PostgreSQL.
 
 I know a bunch of people working for huge sites that love Postgres but
 use MySQL. The main reason is they build what Postgres is famous for at
 a higher level and in a more specialized way with their own glue.
 

Postgresql has more meaning in the enterprise than on a web site.
Web content is never critical. The world will keep turning even if some
CSS file or some article gets lost. They are meant to be transient anyway.
They are not part of anything bigger.

Postgresql shines whenever data matters. I cannot imagine running our app
(single master, 80+ slaves in 80+ vessels in the 7 seas (80+ = 80 and
growing)) on mysql.
We have not lost a single transaction. We have not had a single integrity issue.
All problems were due to our own fault and never postgresql's.
Running a mixture of 7.4 / 8.3 versions (unfortunately upgrading to 9+ is a
very hard task to manage; now all are on 8.3), we never had any issues.
And the servers run unattended, in almost military (marine) conditions,
with frequent blackouts, hardware failures due to vibration, disk failures,
motherboard failures, CPU failures, memory failures.
Postgresql just delivered.

And the thing is that postgresql really has no rivals either. No competitor
when it comes to a full-featured OSS RDBMS. There are OSS RDBMSs (mysql) and
full-featured RDBMSs (DB2/Oracle), but none besides pgsql that combines both
worlds.

Also, as far as extensibility is concerned, postgresql is clearly the king.

 It's easy to get visibility if you're on the internet and you're huge.
 
 But not everyone can rebuild eg. transactions at a higher level and
 need as much specialized solutions.
 
 On the other hand for historical reasons MySQL and PHP have nearly
 monopolized the hosting space and for many web sites it's hard to
 appreciate the difference between Postgres and MySQL (unless your DB
 crash and burn).
 
 That's what most people perceive as the mainstream if you don't have
 a big marketing dept lying.
 
 
-
Achilleas Mantzios
IT DEPT




Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread Chris Angelico
On Wed, Sep 5, 2012 at 7:40 PM, Achilleas Mantzios
ach...@smadev.internal.net wrote:
 (single master, 80+ slaves in 80+ vessels in the 7 seas (80+ = 80 and
 growing))

Cool!! How do your nodes communicate with each other? Is it an
off-line resynchronization, or do you maintain long-range (satellite?)
comms?

The system I'm setting up at work kinda pales in comparison to that.
It's designed to scale "to infinity and beyond" (and that quote is
kinda appropriate, since we run this all on Debian Linux), but at the
moment, all the testing I've done has been on a half-dozen
off-the-shelf Dell laptops. But the same applies; we want absolute
guaranteed reliability, so we NEED a good database. Postgres all the
way! (Plus we need bindings for C++, Pike, and PHP, and I'm a lot
happier with Postgres than several other options in that area.)

ChrisA




[GENERAL] pgBadger 2.0 released

2012-09-05 Thread damien clochard
== pgBadger 2.0 released ==

''Paris, France - September 5th, 2012''

DALIBO is proud to announce the release of version 2.0 of pgBadger, the
new PostgreSQL log analyzer. pgBadger is built for speed and produces fully
detailed reports from your PostgreSQL log files. It's a single, small
Perl script that aims to replace and outperform the old PHP script pgFouine.

Less than two months after the first release of pgBadger, this new major
version adds features requested by many users, such as:

* Better handling of the rotated log files by allowing incremental
runs with the -l option. This is useful if you don't rotate your log
files and your HTML reports at the same frequency.

* New SQL format beautifier. You can now click on any SQL query in
the HTML report and the new SQL Formatter will display it gracefully.

* The CSV log parser was entirely rewritten to handle CSV entries
spanning multiple lines.

Everyone should upgrade as soon as possible.

Please note that some changes break backward compatibility with
previous versions. Options -p and -g are not used anymore. The progress
bar (-p) and graph generation (-g) are now enabled by default. The
obsolete -l option, previously used to specify the log file to parse, has
been repurposed to specify an incremental file.

Please also note that the license has changed from BSD-like to the
PostgreSQL license.

DALIBO would like to thank the developers who submitted patches and the
users who reported bugs and requested features. For the complete list of
pgBadger 2.0 bugfixes and new features, please read the changelog.

pgBadger is an open project. Any contribution to build a better tool is
welcome. Just send your ideas, feature requests or patches
using the GitHub tools or directly to our mailing list.

Links:

  * Changelog : https://github.com/dalibo/pgbadger/blob/master/ChangeLog
  * Download :  https://github.com/dalibo/pgbadger/downloads
  * Mailing List :
https://listes.dalibo.com/cgi-bin/mailman/listinfo/pgbagder




--

**About pgBadger**:

pgBadger is a new-generation log analyzer for PostgreSQL, created by
Gilles Darold, the author of the ora2pg migration tool. pgBadger is a fast
and easy tool to analyze your SQL traffic and create HTML5 reports with
dynamic graphs. pgBadger is the perfect tool to understand the
behaviour of your PostgreSQL server and identify which SQL queries need
to be optimized.

Docs, Download & Demo at http://dalibo.github.com/pgbadger/



--

**About DALIBO**:

DALIBO is the leading PostgreSQL company in France, providing support,
training and consulting to its customers since 2005. The company
contributes to the PostgreSQL community in various ways, including:
code, articles, translations, free conferences and workshops.

Check out DALIBO's open source projects at http://dalibo.github.com






[GENERAL] alter view, add/drop column

2012-09-05 Thread Gauthier, Dave
I googled around and found that adding/dropping columns from views is not 
available. (if not true, I'm all ears).

Given that, what's the best way to do this?  I was thinking along the lines of 
a stored procedure that reads the view's definition, modifies it, drops the 
view, recreates the view.  The main concern here is that the view drop may fail 
if someone is using it.  (is that a valid concern?).  The procedure would take 
as args  a verb (add/delete/rename), an object (column to operate on), new name 
(optional, not used for delete), maybe a table name to know where in the view 
def to add the new column.

But I'm all ears for other suggestions.

Thanks !




Re: [GENERAL] postgres process got stuck in notify interrupt waiting status

2012-09-05 Thread Tom Lane
Craig Ringer ring...@ringerc.id.au writes:
 On 09/05/2012 12:21 PM, John R Pierce wrote:
 was this a client process or a postgres process?   kill -9 on postgres
 processes can easily trigger data corruption.

 It certainly shouldn't.

 kill -9 of the postmaster, deletion of postmaster.pid, and re-starting 
 postgresql *might* but AFAIK even then you'll have to bypass the shared 
 memory lockout (unless you're on Windows).

Correction on that: manually deleting postmaster.pid *does* bypass the
shared memory lock.  If there are still any live backends from the old
postmaster, you can get corruption as a result of this, because the old
backends and the new ones will be modifying the database independently.

This is why we recommend that you never delete postmaster.pid manually,
and certainly not as part of an automatic startup script.

Having said that, a kill -9 on an individual backend (*not* the
postmaster) should be safe enough, if you don't mind the fact that
it'll kill all your other sessions too.

regards, tom lane
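As an aside to the kill -9 discussion, the gentler per-session tools are the SQL-level signalling functions, which send SIGINT/SIGTERM and let the backend clean up its shared-memory state (both exist as of 8.4; the pid below is hypothetical):

```sql
-- Find the offending backend. These column names are pre-9.2;
-- procpid and current_query became pid and query in 9.2.
SELECT procpid, usename, waiting, current_query
  FROM pg_stat_activity;

-- Cancel just the currently running query (sends SIGINT):
SELECT pg_cancel_backend(14912);

-- Or terminate the whole session (sends SIGTERM):
SELECT pg_terminate_backend(14912);
```

Unlike kill -9, neither of these forces the postmaster into a crash-recovery restart of the other sessions.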




Re: [GENERAL] postgres process got stuck in notify interrupt waiting status

2012-09-05 Thread Aleksey Tsalolikhin
On Wed, Sep 5, 2012 at 7:38 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Having said that, a kill -9 on an individual backend (*not* the
 postmaster) should be safe enough, if you don't mind the fact that
 it'll kill all your other sessions too.


Got it, thanks.

Why will it kill all your other sessions too?  Isn't there a separate backend
process for each session?

Best,
-at




Re: [GENERAL] postgres process got stuck in notify interrupt waiting status

2012-09-05 Thread Kevin Grittner
Aleksey Tsalolikhin atsaloli.t...@gmail.com wrote:
 
 Why will it kill all your other sessions too?  Isn't there a
 separate backend process for each session?
 
When stopped that abruptly, the process has no chance to clean up
its pending state in shared memory.  A fresh copy of shared memory
is needed, so it is necessary to effectively do an immediate restart
on the whole PostgreSQL instance.
 
-Kevin




Re: [GENERAL] postgres process got stuck in notify interrupt waiting status

2012-09-05 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 Aleksey Tsalolikhin atsaloli.t...@gmail.com wrote:
 Why will it kill all your other sessions too?  Isn't there a
 separate backend process for each session?
 
 When stopped that abruptly, the process has no chance to clean up
 its pending state in shared memory.  A fresh copy of shared memory
 is needed, so it is necessary to effectively do an immediate restart
 on the whole PostgreSQL instance.

Right.  On seeing one child die unexpectedly, the postmaster forcibly
SIGQUITs all its other children and initiates a crash recovery sequence.
The reason for this is exactly that we can't trust the contents of
shared memory anymore.  An example is that the dying backend may have
held some critical lock, which there is no way to release, so that every
other session will shortly be stuck anyway.

regards, tom lane




Re: [GENERAL] alter view, add/drop column

2012-09-05 Thread Johan Nel
Dave,
On Wednesday, 5 September 2012 16:16:32 UTC+2, Gauthier, Dave  wrote:
 I googled around and found that adding/dropping columns from views is not 
 available. (if not true, I'm all ears).
 Given that, what's the best way to do this?  I was thinking along the lines 
 of a stored procedure that reads the view's definition, modifies it, drops 
 the view, recreates the view.  The main concern here is that the view drop 
 may fail if
  someone is using it.  (is that a valid concern?).  The procedure would take 
 as args  a verb (add/delete/rename), an object (column to operate on), new 
 name (optional, not used for delete), maybe a table name to know where in the 
 view def to add the new column.


You can do this from pgAdmin by right-clicking the view, selecting CREATE
SCRIPT, uncommenting:
  DROP VIEW viewname;
and editing the CREATE VIEW viewname statement to drop the column.

However, if the view is used in other views you have to drop them all and
recreate them in sequence.

Alternatively, you can delete the columns from the pg_catalog.pg_attribute
table. Make sure, however, that you are deleting only the ones you want to
delete and not other tables' columns, using the statement below:

delete from pg_attribute where attrelid = regclass 'yourviewname' and attname = 
'columnnametodrop'

Best to first do a couple of selects until you have the selection correct:

select attrelid::regclass as whatever, * from pg_attribute where attname = 
'columnnametodrop'

Johan Nel.
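One more option worth noting: if the change is only *appending* a column at the end of the view, CREATE OR REPLACE VIEW can do it in place without a drop (allowed since 8.4, provided the existing columns keep their names, order, and types). Table and view names here are illustrative:

```sql
CREATE TABLE t (a int, b text, c date);
CREATE VIEW v AS SELECT a, b FROM t;

-- Appending a column to the end does not require DROP VIEW:
CREATE OR REPLACE VIEW v AS SELECT a, b, c FROM t;

-- Renaming an existing view column in place is also possible:
ALTER TABLE v RENAME COLUMN b TO b_renamed;
```

Removing or reordering columns still requires the drop-and-recreate route described above.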




[GENERAL] Where is the char and varchar length in pg_catalog for function input variables

2012-09-05 Thread jam3
I have searched and searched and just cannot find the maximum lengths for
input variables in a function

i.e.

CREATE FUNCTION test(input1 char(5), input2 varchar(50))
RETURNS void AS
$$BEGIN RAISE NOTICE '%,%', $1, $2; END$$
LANGUAGE plpgsql;


Where do I find the 5 and the 50? It has to be somewhere. I have searched
through
pg_proc
pg_type
pg_attribute (whose attlen only relates to tables)
pg_type

and all possible manner of joining these tables.







Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread Achilleas Mantzios
On Wed 05 Sep 2012 23:44:08 Chris Angelico wrote:
 On Wed, Sep 5, 2012 at 7:40 PM, Achilleas Mantzios
 ach...@smadev.internal.net wrote:
  (single master, 80+ slaves in 80+ vessels in the 7 seas (80+ = 80 and 
  growing))
 
 Cool!! How do your nodes communicate with each other? Is it an
 off-line resynchronization, or do you maintain long-range (satellite?)
 comms?

Hello,
our topology is star-like. The system is based on good ol' UUCP
running on top of either ISDN lines or (as of late) TCP/IP.
It is asynchronous and off-line by design. Vessels connect to the central
master server
and get all their data, receive replication updates, and also send their data
to the office (the central master server).
UUCP does the management of the queues (for the uninitiated, think of UUCP as
something like JMS or AMQP, or even better like JMS (API) + AMQP (wire
protocol)).
The comms (ISDN and TCP/IP) are all done of course over a satellite service
(very expensive, so compression and minimal data replication were/are and
will be a major concern).
In the case of ISDN, billing is by time, so clearly this had to fit in the
off-line category.
In the case of TCP/IP, billing is by data size, but we use it under UUCP
just like the ISDN off-line asynchronous mode.

Vessels can operate without connection to the office, and vice versa.

 
 The system I'm setting up at work kinda pales in comparison to that.
 It's designed to scale to infinity and beyond (and that quote is
 kinda appropriate, since we run this all on Debian Linux), but at the
 moment, all the testing I've done has been on a half-dozen
 off-the-shelf Dell laptops. But the same applies; we want absolute
 guaranteed reliability, so we NEED a good database. Postgres all the
 way! (Plus we need bindings for C++, Pike, and PHP, and I'm a lot
 happier with Postgres than several other options in that area.)
 
 ChrisA
 
 
 
-
Achilleas Mantzios
IT DEPT




Re: [GENERAL] Where is the char and varchar length in pg_catalog for function input variables

2012-09-05 Thread Pavan Deolasee
On Wed, Sep 5, 2012 at 9:10 PM, jam3 jamort...@gmail.com wrote:

 I have searched and searched and just cannot find the maximum lengths for
 input variables in a function

 i.e.

 CREATE FUNCTION test(input1 char(5), input2 varchar(50))
 RETURNS void AS
 $$BEGIN RAISE NOTICE '%,%', $1, $2; END$$
 LANGUAGE plpgsql;


 Where do I find the 5 and the 50 it has to be somewhere I have searched
 through
 pg_proc
 pg_type
 pg_attribute (whose attlen only relates to tables)
 pg_type

 and all possible manner of joining these tables.


Hmm. I only looked at the code and hence don't have a definite answer. But
it seems that information is not stored anywhere. That might explain why
the function you mentioned accepts parameters of any character length.

Thanks,
Pavan


Re: [GENERAL] Where is the char and varchar length in pg_catalog for function input variables

2012-09-05 Thread Pavel Stehule
2012/9/5 Pavan Deolasee pavan.deola...@gmail.com:


 On Wed, Sep 5, 2012 at 9:10 PM, jam3 jamort...@gmail.com wrote:

 I have searched and searched and just cannot find the maximum lengths for
 input variables in a function

 i.e.

 CREATE FUNCTION test(input1 char(5), input2 varchar(50))
 RETURNS void AS
 $$BEGIN RAISE NOTICE '%,%', $1, $2; END$$
 LANGUAGE plpgsql;


 Where do I find the 5 and the 50 it has to be somewhere I have searched
 through
 pg_proc
 pg_type
 pg_attribute (whose attlen only relates to tables)
 pg_type

 and all possible manner of joining these tables.


 Hmm. I only looked at the code and hence don't have a definite answer. But
 it seems that information is not stored anywhere. That might explain why the
 function you mentioned accepts parameters with any character length.


yes, this information is just ignored - functions drop typmods

Regards

Pavel

 Thanks,
 Pavan






Re: [GENERAL] postgres process got stuck in notify interrupt waiting status

2012-09-05 Thread Aleksey Tsalolikhin
Got it, thanks, Kevin, Tom.

So how about that process that was in "notify interrupt waiting"
waiting status after I SIGTERM'ed it.  Is the double waiting
expected?

Aleksey




Re: [GENERAL] Where is the char and varchar length in pg_catalog for function input variables

2012-09-05 Thread Tom Lane
jam3 jamort...@gmail.com writes:
 I have searched and searched and just cannot find the maximum lengths for
 input variables in a function

 CREATE FUNCTION test(input1 char(5), input2 varchar(50))
 RETURNS void AS
 $$BEGIN RAISE NOTICE '%,%', $1, $2; END$$
 LANGUAGE plpgsql;

 Where do I find the 5 and the 50 it has to be somewhere

No, it doesn't have to be, and it isn't.  As far as PG is concerned,
the inputs to this function are just of type char and varchar.
You're allowed to write extra decoration but it's ignored.

regards, tom lane
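This is easy to confirm against the catalogs. Assuming the test function from the question exists (and is the only function with that name), pg_get_function_arguments (available since 8.4) shows that only the base types were kept:

```sql
-- The stored signature has no typmods left:
SELECT pg_get_function_arguments('test'::regproc);
-- shows something like:  input1 character, input2 character varying

-- And arguments longer than the declared limits are accepted unchanged:
SELECT test('much longer than five',
            'a string well past fifty characters, which still goes through');
```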




Re: [GENERAL] postgres process got stuck in notify interrupt waiting status

2012-09-05 Thread Tom Lane
Aleksey Tsalolikhin atsaloli.t...@gmail.com writes:
 So how about that this process that was in notify interrupt waiting
 waiting status after I SIGTERM'ed it.  Is the double waiting
 expected?

That sounded a bit fishy to me too.  But unless you can reproduce it in
something newer than 8.4.x, nobody's likely to take much of an interest.
The LISTEN/NOTIFY infrastructure got completely rewritten in 9.0, so
any bugs in the legacy version are probably just going to get benign
neglect at this point ... especially if we don't know how to reproduce
them.

regards, tom lane




[GENERAL] Re: Where is the char and varchar length in pg_catalog for function input variables

2012-09-05 Thread jam3
Yeah, that's what I was starting to wonder: those lengths basically mean
nothing. I am writing a ton of functions to unit test all of the functions
in our app, and am generating random strings. I would like to pass the
lengths to my random string generator, so that if it's varchar(50) I generate
a string between 0 and 50 characters long, but since I can't find the length
value I guess I am just going to put in an arbitrary length.

Would be nice to know what exactly is going on in pg_catalog when you have a
length specified on an input variable.
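If it helps, the random-string generator itself can live in SQL. A sketch (assumes 9.0+ for string_agg; the function name is illustrative):

```sql
-- Returns a random lowercase string whose length is between 0 and maxlen.
-- coalesce handles the zero-length case, where string_agg returns NULL.
CREATE OR REPLACE FUNCTION random_string(maxlen int) RETURNS text AS $$
    SELECT coalesce(string_agg(chr(97 + (random() * 25)::int), ''), '')
      FROM generate_series(1, (random() * $1)::int);
$$ LANGUAGE sql VOLATILE;

SELECT random_string(50);
```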







[GENERAL] values from txid_current()

2012-09-05 Thread Sahagian, David
Using 9.1.3

Start Transaction; DO $$ BEGIN raise info '%', txid_current(); END $$; ALTER TABLE MyCoolTable_1 DISABLE TRIGGER trg_foo_1 ; Commit;
Start Transaction; ALTER TABLE MyCoolTable_2 DISABLE TRIGGER trg_foo_2 ; Commit;
Start Transaction; DO $$ BEGIN raise info '%', txid_current(); END $$; ALTER TABLE MyCoolTable_3 DISABLE TRIGGER trg_foo_3 ; Commit;
Start Transaction; ALTER TABLE MyCoolTable_4 DISABLE TRIGGER trg_foo_4 ; Commit;
Start Transaction; DO $$ BEGIN raise info '%', txid_current(); END $$; ALTER TABLE MyCoolTable_5 DISABLE TRIGGER trg_foo_5 ; Commit;

Why are the Messages displayed by my pgAdmin sql window like this . . .
INFO:  7902
INFO:  7903
INFO:  7904
instead of what I expected . . .
INFO:  7902
INFO:  7904
INFO:  7906
???

Thanks,
-dvs-




Re: [GENERAL] values from txid_current()

2012-09-05 Thread Pavan Deolasee
On Wed, Sep 5, 2012 at 10:53 PM, Sahagian, David david.sahag...@emc.com wrote:



 Why are the Messages displayed by my pgAdmin sql window like this . . .
 INFO:  7902
 INFO:  7903
 INFO:  7904
 instead of what I expected . . .
 INFO:  7902
 INFO:  7904
 INFO:  7906
 ???



Are you sure those ALTER TABLE commands are executing without an error ?
The only way I can see you get those messages is when ALTER TABLE for
tables MyCoolTable_2 and MyCoolTable_4 (and possibly others too) are
failing for some reason.

Thanks,
Pavan


Re: [GENERAL] values from txid_current()

2012-09-05 Thread Sahagian, David
OK, now I will answer my own question.

It seems that
  ALTER TABLE MyCoolTable_1 DISABLE TRIGGER trg_foo_1 ;
is a no-op when the trigger is already disabled.

And so no txn id is needed.

When I alternate DISable and ENable statements, it behaves as I expect . . .

Start Transaction; DO $$ BEGIN raise info '%', txid_current(); END $$;
ALTER TABLE MyCoolTable_1 DISABLE TRIGGER trg_foo_1; Commit;
Start Transaction;
ALTER TABLE MyCoolTable_1 ENABLE TRIGGER trg_foo_1; Commit;
Start Transaction; DO $$ BEGIN raise info '%', txid_current(); END $$;
ALTER TABLE MyCoolTable_1 DISABLE TRIGGER trg_foo_1; Commit;
Start Transaction;
ALTER TABLE MyCoolTable_1 ENABLE TRIGGER trg_foo_1; Commit;
Start Transaction; DO $$ BEGIN raise info '%', txid_current(); END $$;
ALTER TABLE MyCoolTable_1 DISABLE TRIGGER trg_foo_1; Commit;
Start Transaction;
ALTER TABLE MyCoolTable_1 ENABLE TRIGGER trg_foo_1; Commit;

INFO:  7958
INFO:  7960
INFO:  7962

Thanks,
-dvs-
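David's conclusion (a DISABLE on an already-disabled trigger is a no-op and consumes no transaction id) can be checked from the catalogs before toggling. A hedged sketch, assuming the table and trigger names from above:

```sql
-- Hypothetical check: tgenabled is 'D' for a disabled trigger and
-- 'O' (fires on origin) for a normally enabled one.
SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'mycooltable_1'::regclass
  AND tgname = 'trg_foo_1';
```

This also explains the gaps: txid_current() itself forces an xid to be assigned, while a transaction that changes nothing only holds a virtual xid.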

From: Pavan Deolasee [mailto:pavan.deola...@gmail.com]
Sent: Wednesday, September 05, 2012 1:46 PM
To: Sahagian, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] values from txid_current()


On Wed, Sep 5, 2012 at 10:53 PM, Sahagian, David
david.sahag...@emc.com wrote:


Why are the Messages displayed by my pgAdmin sql window like this . . .
INFO:  7902
INFO:  7903
INFO:  7904
instead of what I expected . . .
INFO:  7902
INFO:  7904
INFO:  7906
???


Are you sure those ALTER TABLE commands are executing without an error ? The 
only way I can see you get those messages is when ALTER TABLE for tables 
MyCoolTable_2 and MyCoolTable_4 (and possibly others too) are failing for some 
reason.

Thanks,
Pavan


[GENERAL] Moving several databases into one database with several schemas

2012-09-05 Thread Edson Richter

Dear list,

_*Scenario:*_

I'm using PostgreSQL 9.1 on Linux x64 running over CentOS 5. Everything 
is fine, but now I do have 4 separate databases running on different 
servers, and every server has some shared tables.


I've been working on complex replication logic for these tables, in a way 
that makes it almost impossible to use existing solutions (I have to merge 
records when replicating). My conclusion is that the logic is just too 
complex to maintain, and I would like to consolidate the databases but keep 
the data separate. A few days ago I posted this question to this list, and 
got the suggestion to try using schemas.


That's what I want to do now: I would like to consolidate these 4 
separate databases into 1 database with 5 schemas:


- Main schema: will have all shared tables, that will be read only most 
of time;

- Schema1 to Schema4: will have their own tables, read write.

_*Now the questions:*_

1) Is there a way to back up database1 and restore it into the 
consolidated database, but into schema1 (not overwriting everything)?
2) Is there a way to specify the default schema in JDBC url (or command 
I can issue to change the default schema at runtime, like set path...)?


Thanks in advance,

Edson Richter
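For question 2, one hedged sketch (the schema names are taken from the plan above): the default schema can be chosen per session with search_path, which a JDBC client can issue as an ordinary statement right after connecting:

```sql
-- Make unqualified names resolve in schema1 first, then the shared schema.
SET search_path TO schema1, main;

-- Verify the effective resolution order for this session.
SELECT current_schemas(true);
```

A connection pool can run the SET statement in its connection-init hook so every pooled connection comes preconfigured.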




Re: [GENERAL] postgres process got stuck in notify interrupt waiting status

2012-09-05 Thread Aleksey Tsalolikhin
On Wed, Sep 5, 2012 at 10:03 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 That sounded a bit fishy to me too.  But unless you can reproduce it in
 something newer than 8.4.x, nobody's likely to take much of an interest.
 The LISTEN/NOTIFY infrastructure got completely rewritten in 9.0, so
 any bugs in the legacy version are probably just going to get benign
 neglect at this point ... especially if we don't know how to reproduce
 them.

Got it, thanks, Tom!  Will urge our shop to upgrade to 9.1.

Best,
-at




[GENERAL] max_connections

2012-09-05 Thread Modumudi, Sireesha
Hi all,

 

I am using postgres 8.3.9 on SUSE 64 bit. By default max_connections is
100, but I want to know if this can be increased; if so, what should we
take into consideration? 

 

Thank you,

Sireesha



Re: [GENERAL] max_connections

2012-09-05 Thread Scott Marlowe
On Wed, Sep 5, 2012 at 2:30 PM, Modumudi, Sireesha
sireesha.modum...@emc.com wrote:
 Hi all,

 I am using postgres 8.3.9 on SUSE 64 bit. By default max_connections is 100,
 but I want to know if this can be increased, if so, what should we take into
 consideration?

If you're considering raising this then you should probably be looking
at some kind of pooler like pgbouncer or pgpool. That said, I've run
servers that did things like sessions (small transactions and lots of
idle connections) at ~1000 before, but make sure you've got enough memory
free, as each backend will use about 6MB of memory.
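A hedged way to see whether a pooler would help, assuming a pre-9.2 server where pg_stat_activity exposes current_query:

```sql
-- Count total backends and how many are sitting idle; lots of idle
-- connections are exactly what a pooler like pgbouncer absorbs well.
SELECT count(*) AS total,
       sum(CASE WHEN current_query = '<IDLE>' THEN 1 ELSE 0 END) AS idle
FROM pg_stat_activity;
```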




Re: [GENERAL] Would my postgresql 8.4.12 profit from doubling RAM?

2012-09-05 Thread jam3
Here is a bash script I wrote to print out the memory config from
postgresql.conf and the OS (CentOS 5.5 in this case). According to Gregory
Smith, in PostgreSQL 9.0 shared_buffers should be approximately 25-40% of
available physical RAM. Also consider that Postgres uses the OS buffer
cache as it accesses the physical data and log files, and while doing so
has the potential to double-buffer blocks.

work_mem is suggested at 5%, but if you do a lot of CLUSTER/ANALYZE/VACUUM
you will want to raise it. I usually round up to the next power of 2: if 5%
is 328MB, I'll set it to 512MB.
Most of the conversions are done in the script, and the shared memory
checks are just that, checks; a modern OS should be way above the kernel
settings Postgres requires.

Also run ipcs -m; it will show the shared memory in use and whether other
processes besides Postgres are using shared memory.

I also have a 9.0 script if anyone wants it.

#!/bin/bash
#
# Postgresql Memory Configuration and Sizing Script
# By: James Morton
# Last Updated 06/18/2012
#
# Note: this script is meant to be used by the postgres user with a
# configured .pgpass file.
# It is for Postgres version 8 running on Linux and only tested on CentOS 5.
#
# Reference -
# http://www.postgresql.org/docs/8.0/static/kernel-resources.html
#
# This script should be run after changing any of the following in
# postgresql.conf
#
# max_connections
# block_size
# shared_buffers
#
# or after changing the following OS kernel values
#
# SHMMAX
# SHMALL
# SHMMNI
# SEMMNS
# SEMMNI
# SEMMSL

# Input variables
DBNAME=$1
USERNAME=$2

clear
echo
echo "Postgresql Shared Memory Estimates"
echo

echo
echo "Local Postgres Configuration settings"
echo

# Postgresql version
PSQL="psql $DBNAME -U $USERNAME"
PG_VERSION=$($PSQL --version)
echo "PG_VERSION: $PG_VERSION"

# Postgresql block size
PG_BLKSIZ=$($PSQL -t -c "show block_size;")
echo "PG_BLKSIZ: $PG_BLKSIZ"

# Maximum connections
PG_MAXCON=$($PSQL -t -c "show max_connections;")
echo "PG_MAXCON: $PG_MAXCON"

# Shared buffers
PG_SHABUF=$($PSQL -t -c "show shared_buffers;")
echo "PG_SHABUF: $PG_SHABUF"

# maintenance_work_mem
PG_MNTWKM=$($PSQL -t -c "show maintenance_work_mem;")
echo "PG_MNTWKM: $PG_MNTWKM"

# work_mem
PG_WRKMEM=$($PSQL -t -c "show work_mem;")
echo "PG_WRKMEM: $PG_WRKMEM"

echo
echo
echo "Kernel Shared Memory Settings"
echo

CUR_SHMMAX_IN_B=$(cat /proc/sys/kernel/shmmax)
#echo "CUR_SHMMAX_IN_B: $CUR_SHMMAX_IN_B"
CUR_SHMMAX_IN_MB=$(( (CUR_SHMMAX_IN_B / 1024) / 1024 ))
echo "CUR_SHMMAX_IN_MB: $CUR_SHMMAX_IN_MB"
# Estimate SHMMAX per Postgresql 8.0 table 16-2
SHMMAX_MAXCON=$(( PG_MAXCON * 14541 ))
#echo "SHMMAX_MAXCON: $SHMMAX_MAXCON"
SHMMAX_SHABUF=$(( PG_SHABUF * 9832 ))
#echo "SHMMAX_SHABUF: $SHMMAX_SHABUF"
PG_REC_SHMMAX_TOTAL_B=$(( 256000 + SHMMAX_MAXCON + SHMMAX_SHABUF ))
#echo "PG_REC_SHMMAX_TOTAL_B: $PG_REC_SHMMAX_TOTAL_B"
PG_REC_SHMMAX_TOTAL_MB=$(( (PG_REC_SHMMAX_TOTAL_B / 1024) / 1024 ))
echo "PG_REC_SHMMAX_TOTAL_MB: $PG_REC_SHMMAX_TOTAL_MB"
if [ "$PG_REC_SHMMAX_TOTAL_B" -lt "$CUR_SHMMAX_IN_B" ]; then
    echo "SHMMAX is within Postgresql's needs"
elif [ "$PG_REC_SHMMAX_TOTAL_B" -ge "$CUR_SHMMAX_IN_B" ]; then
    echo "SHMMAX should be set greater than $PG_REC_SHMMAX_TOTAL_B"
else
    echo "SHMMAX setting cannot be determined"
fi
echo

CUR_SHMALL=$(cat /proc/sys/kernel/shmall)
# note: SHMALL on CentOS is in bytes
#echo "CUR_SHMALL: $CUR_SHMALL"
CUR_SHMALL_IN_MB=$(( (CUR_SHMALL / 1024) / 1024 ))
echo "CUR_SHMALL_IN_MB: $CUR_SHMALL_IN_MB"
if [ "$PG_REC_SHMMAX_TOTAL_B" -lt "$CUR_SHMALL" ]; then
    echo "SHMALL is within Postgresql's needs"
elif [ "$PG_REC_SHMMAX_TOTAL_B" -ge "$CUR_SHMALL" ]; then
    echo "SHMALL should be set greater than $PG_REC_SHMMAX_TOTAL_B"
else
    echo "SHMALL setting cannot be determined"
fi
echo

CUR_SHMMNI=$(cat /proc/sys/kernel/shmmni)
echo "CUR_SHMMNI: $CUR_SHMMNI"
if [ "$CUR_SHMMNI" -ge 1 ]; then
    echo "SHMMNI is within Postgresql's needs"
elif [ "$CUR_SHMMNI" -lt 1 ]; then
    echo "SHMMNI should be set greater than 1"
else
    echo "SHMMNI setting cannot be determined"
fi

echo
echo
echo "Kernel Semaphore Settings"
echo

CUR_SEMMNI=$( cat /proc/sys/kernel/sem | awk '{print $4}' )
echo "CUR_SEMMNI: $CUR_SEMMNI"
PG_RECSET_SEMMNI=$(printf "%.0f" "$(echo "scale=2; $PG_MAXCON / 16" | bc)")
echo "PG_RECSET_SEMMNI: $PG_RECSET_SEMMNI"
if [ "$CUR_SEMMNI" -ge "$PG_RECSET_SEMMNI" ]; then
    echo "SEMMNI is within Postgresql's needs"
elif [ "$CUR_SEMMNI" -lt "$PG_RECSET_SEMMNI" ]; then
    echo "SEMMNI should be set greater than or equal to $PG_RECSET_SEMMNI"
else
    echo "SEMMNI setting cannot be determined"
fi
echo

CUR_SEMMNS=$( cat /proc/sys/kernel/sem | awk '{print $2}' )
echo "CUR_SEMMNS: $CUR_SEMMNS"
PG_RECSET_SEMMNS=$(printf "%.0f" "$(echo "scale=2; ($PG_MAXCON / 16) * 17" | bc)")
echo "PG_RECSET_SEMMNS: $PG_RECSET_SEMMNS"
if [ "$CUR_SEMMNS" -ge "$PG_RECSET_SEMMNS" ]; then
    echo "SEMMNS is within Postgresql's needs"
elif [ "$CUR_SEMMNS" -lt "$PG_RECSET_SEMMNS" ]; then
    echo "SEMMNS should be set greater than or equal to $PG_RECSET_SEMMNS"
else
    echo "SEMMNS setting cannot be determined"
fi

[GENERAL] When does Postgres cache query plans?

2012-09-05 Thread Mike Christensen
I'm curious under what circumstances Postgres will cache an execution
plan for a query.

Obviously if you create it with the PREPARE statement, it will be cached..

However, if I just run an ad-hoc query such as:

select * from Foo where X > 5;

A few hundred times, will that be cached?  What if I run:

select * from Foo where X > :value;

Can that be cached, or will it always be re-evaluated based on the
value of :value?  Thanks!

Mike




[GENERAL] Re: Where is the char and varchar length in pg_catalog for function input variables

2012-09-05 Thread Kevin Grittner
jam3 jamort...@gmail.com wrote:
 
 create or replace function test1(c1 char(10), c2 varchar(20))
 
 Just showing that it does indeed not use the length at all
 
Correct.  That is functioning as intended and is not likely to
change any time soon.
 
You might consider using domains:
 
drop function if exists test1(c1 t1, c2 t2);
drop table if exists test_table;
drop domain if exists t1;
drop domain if exists t2;

create domain t1 varchar(10);
create domain t2 varchar(20);
create table test_table
(
  column1 char(20),
  column2 varchar(40)
) without oids;
create or replace function test1(c1 t1, c2 t2)
returns void as
$$
BEGIN
insert into test_table values ($1, $2);
END
$$
language plpgsql;
select
test1('12345678900123456789',
  'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789ABCD');
select * from test_table;
 
-Kevin




Re: [GENERAL] Re: Where is the char and varchar length in pg_catalog for function input variables

2012-09-05 Thread David Johnston
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of jam3
 Sent: Wednesday, September 05, 2012 3:34 PM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Re: Where is the char and varchar length in pg_catalog
 for function input variables
 
 This is what I meant to post
 
 ..
 
 Just showing that it does indeed not use the length at all, and this
 just seems wrong. I can definitely see situations where someone would put
 a length on an input var and get an unexpected result, like the one
 above.
 

You can argue it is wrong, and I'd tend to agree.  But that is how things
are until someone decides it is painful enough to implement a better way.

It is a documented situation, though suggestions for improvements there are
always welcome.

If/when you care you can implement adhoc validation inside the function.

Discoverability via meta-data is the nice but lacking ability with the
current model, but for arbitrary length and precision/scale specifications
that ability has limited (but non-zero) value. For better or worse, you can
extend the system tables and include the meta-data that you feel is
necessary to make the system work.  It is a much less invasive procedure
than altering the catalogs themselves.

David J.








[GENERAL] Re: Where is the char and varchar length in pg_catalog for function input variables

2012-09-05 Thread jam3
Duh never mind I call brain cloud on that one, and thanks for all the help.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Where-is-the-char-and-varchar-length-in-pg-catalog-for-function-input-variables-tp5722845p5722880.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] Re: Where is the char and varchar length in pg_catalog for function input variables

2012-09-05 Thread jam3
This is what I meant to post

drop table test_table;
create table test_table
(
column1 char(20),
column2 varchar(40)
) without oids;


drop function test1(char(10), varchar(20)); 
create or replace function test1(c1 char(10), c2 varchar(20))
returns void as
$$
BEGIN
insert into test_table values ($1, $2);
END
$$
language plpgsql;

select
test1('12345678900123456789','ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789ABCD');

select * from test_table;
-- 12345678900123456789, ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789ABCD


Just showing that it does indeed not use the length at all, and this just
seems wrong. I can definitely see situations where someone would put a length
on an input var and get an unexpected result, like the one above.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Where-is-the-char-and-varchar-length-in-pg-catalog-for-function-input-variables-tp5722845p5722881.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Would my postgresql 8.4.12 profit from doubling RAM?

2012-09-05 Thread Scott Marlowe
On Wed, Sep 5, 2012 at 2:16 PM, jam3 jamort...@gmail.com wrote:
 Here is a bash script I wrote to print out the memory config from
 postgresql.conf and the OS (CentOS 5.5 in this case). According to Gregory
 Smith, in PostgreSQL 9.0 shared_buffers should be appx 25-40% of available
 physical RAM. Also consider that Postgres uses the OS buffer cache as it
 accesses the physical data and log files, and while doing so has the
 potential to double-buffer blocks.

 work_mem is suggested at 5%, but if you do a lot of CLUSTER/ANALYZE/VACUUM
 you will want to raise it. I usually round up to the next power of 2: if
 5% is 328MB, I'll set it to 512MB.

I think you mean maintenance_work_mem here, as regular vacuum or
analyze don't use work_mem.
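A minimal sketch of that distinction (the table name is a placeholder): maintenance_work_mem can be raised for just the session doing the heavy maintenance, leaving the global setting alone:

```sql
-- Session-local bump; does not affect other backends.
SET maintenance_work_mem = '512MB';
VACUUM ANALYZE big_table;   -- big_table is an illustrative name
RESET maintenance_work_mem;
```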




[GENERAL] Re: Where is the char and varchar length in pg_catalog for function input variables

2012-09-05 Thread jam3
How does postgres figure this out to throw the error msg?

create table test_table
(
column1 char(10),
column2 varchar(20)
) without oids;


create or replace function test1(c1 char(10), c2 varchar(20))
returns void as
$$
BEGIN
insert into test_table values ($1, $2);
END
$$
language plpgsql;

select test1('1234567890','ABCDEFGHIJKLMNOPQRST');

select * from test_table;
-- 1234567890, ABCDEFGHIJKLMNOPQRST

select test1('this is way way longer than 10 characters','this is way way
way way way way way way way way way way longer than 20 characters');

ERROR:  value too long for type character(10)
CONTEXT:  SQL statement "insert into test_table values ($1, $2)"
PL/pgSQL function "test1" line 3 at SQL statement

** Error **

ERROR: value too long for type character(10)



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Where-is-the-char-and-varchar-length-in-pg-catalog-for-function-input-variables-tp5722845p5722876.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Re: Where is the char and varchar length in pg_catalog for function input variables

2012-09-05 Thread David Johnston
 
 How does postgres figure this out to throw the error msg?
 
 
 select test1('this is way way longer than 10 characters','this is way way
way
 way way way way way way way way way longer than 20 characters')
 
 ERROR:  value too long for type character(10)
 CONTEXT:  SQL statement insert into test_table values ($1, $2)
 PL/pgSQL function test1 line 3 at SQL statement
 
 ** Error **
 
 ERROR: value too long for type character(10)
 

When it goes to execute:

INSERT INTO test_table VALUES ('this is way way ...', 'this is way way way...')

The char(10) type definition for test_table.column1 is too short to hold the
supplied value (stored in $1 in the function) and throws an error.

The length of $1 and $2 inside the function are however long the input
values are because they ignore the length specifier on the function call
types.

If you want to guarantee that the INSERT will work you would need to write:

INSERT INTO test_table VALUES ( $1::char(10), $2::varchar(20) )

This tells PostgreSQL to truncate the supplied value at whatever specified
length is noted; the same as writing substring($1, 1, 10)::char or
substring($1, 1, 20)::varchar though whether char and varchar differ in
their behavior in this respect I do not know.  It is generally not
recommended to use char.

David J.









Re: [GENERAL] Re: Where is the char and varchar length in pg_catalog for function input variables

2012-09-05 Thread Kevin Grittner
David Johnston pol...@yahoo.com wrote:
 
 If you want to guarantee that the INSERT will work you would need
 to write:
 
 INSERT INTO test_table VALUES ( $1::char(10), $2::varchar(20) )
 
Note that this will quietly cut off the tail end of the supplied
data, so it should only be used when that is desirable.  It is
generally better to throw an error than to lose data.
 
 whether char and varchar differ in their behavior in this
 respect I do not know.
 
Per the SQL standard, they both throw an error on attempts to assign
an oversized value, but allow truncation through explicit casts.
 
 It is generally not recommended to use char
 
Right.  It is supported because the standard specifies it and its
behavior, but the semantics of char(n) are weird and the
performance, in PostgreSQL, is generally worse for char(n) than
varchar(n) or text.
 
-Kevin
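A short sketch of the behavior Kevin describes (the table and column names are illustrative):

```sql
-- Explicit cast: truncates quietly, per the SQL standard.
SELECT 'abcdefghijklmnop'::varchar(10);   -- yields 'abcdefghij'

-- Assignment: raises an error instead of truncating, e.g.
--   INSERT INTO t (v10) VALUES ('abcdefghijklmnop');
-- fails with: value too long for type character varying(10)
```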




Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread jam3
MySQL doesn't even support self-referential updates like

update t1 set c1 = 'value' where t1.id not in (select id from t1 where id >
100);

Nor is it fully ACID compliant.
And its online documentation is a nightmare.
pgAdmin is infinitely better than MySQL Workbench; heck, anything is better
than MySQL Workbench.

Postgres as of 9 will do pretty much anything Oracle or MSSQL will do, minus
robust tools (where MSSQL is a clear winner with SSRS, SSIS and SSMS).
Oracle's tools are coming around with Developer, Modeler, and analytics, but
really Oracle is for when you need serious distributed transaction balancing
via RAC. Honestly, if you're not using RAC there is no reason to use Oracle.

So there is not one reason to go with MySQL over Postgres and tons of
reasons to use Postgres over MySQL: arrays, ORM, tools, documentation,
cross-language support, speed, ACID compliance, etc.

And if you want a really rich toolset and you have bought into the .NET
library model, which once you start digging is quite cool, go read Petzold's
.NET Book Zero, then go with MSSQL.

And if you're running a transaction volume to rival Amazon and want a db
that can come as close as possible to true parallel load balancing like RAC,
then fork out the shiny and go with Oracle.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Too-far-out-of-the-mainstream-tp5722177p5722878.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] max_connections

2012-09-05 Thread Kevin Grittner
Modumudi, Sireesha sireesha.modum...@emc.com wrote:
 
 I am using postgres 8.3.9 on SUSE 64 bit. By default
 max_connections is 100, but I want to know if this can be
 increased, if so, what should we take into consideration? 
 
http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
 
-Kevin




Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread Magnus Hagander
On Sat, Sep 1, 2012 at 1:24 PM, Peter Bex peter@xs4all.nl wrote:
 On Sat, Sep 01, 2012 at 12:43:15AM +0200, Geert Mak wrote:
 There is this case studies section as well -

 http://www.postgresql.org/about/casestudies/

 Which appear to me a little old and a little too little, one could try to 
 add more, perhaps.

 I noticed that the Share Your Story link is broken.
 I don't know how long it's been broken, but this might be a reason
 there are no new ones.

Thanks for reporting that - link fixed.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [GENERAL] Moving several databases into one database with several schemas

2012-09-05 Thread Edson Richter

Em 05/09/2012 15:30, Edson Richter escreveu:

Dear list,

_*Scenario:*_

I'm using PostgreSQL 9.1 on Linux x64 running over CentOS 5. 
Everything is fine, but now I do have 4 separate databases running on 
different servers, and every server has some shared tables.


I've been working on a complex logic that is able to replicate these 
tables in a way that is almost impossible to use existing solutions 
(I've to merge records when replicating). My conclusion is that the 
logic is just too complex to maintain, and I would like to consolidate 
databases but keep data separate. Few days ago, I've posted this query 
in this list, and got the orientation to try using schemas.


That's what I want to do now: I would like to consolidate these 4 
separate databases into 1 database with 5 schemas:


- Main schema: will have all shared tables, that will be read only 
most of time;

- Schema1 to Schema4: will have their own tables, read write.

_*Now the questions:*_

1) Is there a way to backup database1 and restore in the 
consolidated database, but in schema1 (not overwriting everything)?
2) Is there a way to specify the default schema in JDBC url (or 
command I can issue to change the default schema at runtime, like set 
path...)?


Thanks in advance,

Edson Richter


I've tried the following command (on Windows), but it returns without 
importing anything, and with exit code 0 (output translated, because I 
use PT-BR):


-
pg_restore.exe --host localhost --port 5432 --username postgres 
--dbname consolidado --role MyUser --no-password  --schema main 
--verbose E:\backups\maindatabase.bk

pg_restore: connecting to database for restore

Process returned exit code 0.
-

I'm sure the database is running, the backup file exists, and everything 
seems to be fine - except that nothing is imported.
I could not find directions in the documentation. I suspect that I'll not 
be able to use the Custom format for backups...


Please, help!

Edson
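One hedged workaround sketch (untested here, and it assumes the dumped objects live in the default public schema): pg_restore's --schema switch only selects objects that are already in that schema inside the archive, so a dump taken from public restores nothing when filtered on main. An alternative is to restore each dump normally into the consolidated database and rename the schema afterwards, repeating per source database:

```sql
-- After a plain restore of one source database into "public":
ALTER SCHEMA public RENAME TO schema1;  -- target name is an assumption
CREATE SCHEMA public;                   -- recreate the default schema
-- then restore the next dump and rename again
```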


Re: [GENERAL] When does Postgres cache query plans?

2012-09-05 Thread Tom Lane
Mike Christensen m...@kitchenpc.com writes:
 I'm curious under what circumstances Postgres will cache an execution
 plan for a query.

If you're writing raw SQL, never.  The assumption is that the
application knows its usage pattern a lot better than the server does,
and if the application is going to re-execute the same/similar statement
a lot of times, the app ought to make use of a prepared statement for
that.

Some client-side code (such as the JDBC driver) will make use of
prepared statements under the hood, so a lot depends on context.
But sending plain SQL with PQexec() does not result in any cached plan.

regards, tom lane
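The explicit route Tom describes can be sketched at the SQL level, reusing the table and column names from the question:

```sql
-- Plan once per session, reuse for each execution.
PREPARE foo_by_x (int) AS
    SELECT * FROM foo WHERE x > $1;

EXECUTE foo_by_x(5);
EXECUTE foo_by_x(10);   -- same prepared statement, different parameter

DEALLOCATE foo_by_x;
```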




Re: [GENERAL] When does Postgres cache query plans?

2012-09-05 Thread Mike Christensen
On Wed, Sep 5, 2012 at 3:51 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Mike Christensen m...@kitchenpc.com writes:
 I'm curious under what circumstances Postgres will cache an execution
 plan for a query.

 If you're writing raw SQL, never.  The assumption is that the
 application knows its usage pattern a lot better than the server does,
 and if the application is going to re-execute the same/similar statement
 a lot of times, the app ought to make use of a prepared statement for
 that.

 Some client-side code (such as the JDBC driver) will make use of
 prepared statements under the hood, so a lot depends on context.
 But sending plain SQL with PQexec() does not result in any cached plan.

Excellent, that's pretty much what I figured (and would expect)..

It seems SQL Server and Oracle have some weird caching behavior that's
hard to understand and/or predict..  Postgres also seems to be unique
in the fact it even has a PREPARE statement..  MS SQL and Oracle only
provide that feature through the API..

Mike




Re: [GENERAL] regexp_matches question

2012-09-05 Thread Sergio Basurto
On Tue, 2012-09-04 at 21:58 -0400, David Johnston wrote:

 On Sep 4, 2012, at 21:39, Sergio Basurto sbasu...@soft-gator.com wrote:
 
  I am using regexp_matches in a function like this
  
  create or replace function test (v_string   in text) returns varchar as 
  $$
  declare
  i_strings   text[];
  i_stringtext[];
  
  i_strings := 
  regexp_matches(v_string,E'[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+','g');
 
 You can store a single array value into i_strings.  It does not magically 
 convert a multi-row result into an array.  You can use ARRAY_AGG to do so or 
 execute the query directly as part of the loop while using a record 
 variable to store the current row's value(s). 
 
  
  -- Then I use  the results
  foreach i_string slice 1 in array i_strings
  loop
  raise notice 'row = %',i_string;
  end loop;
  
  when I run the function like this:
  
  select test('1:Warehouse1;2:Warehouse2;');
  
  postgresql complains:
  ERROR:  query SELECT 
  regexp_matches(v_string,E'[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+','g')
   returned more than one row
  
  Why postgres is sending the ERROR?
  
  Off course I am expecting more than one row!, that's why is in a foreach 
  loop in the first place.
  
  If I run:
  select 
  regexp_matches('1:Warehouse1;2:Warehouse2;',E'[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+','g');
  regexp_matches 
  
  {1:Warehouse1}
  {2:Warehouse2}
  (2 rows)
  
  I am doing something wrong?
 
 Note that because you do not use grouping in your expression there is only a 
 single array cell in each row - but there could be more than one in which 
 case your for-each above would effectively loop through each sub-component of 
 the match.
 
  
  Regards,
 
 
 David J.
 

Thanks for your response David, but my doubt arises because if I use this

i_strings text[] := array[['1:Warehouse1'],['2:Warehouse2']];

it loops without problem. Is it not the same thing?

it prints:

NOTICE: row = {1:Warehouse1}
NOTICE: row = {2:Warehouse2}


Re: [GENERAL] regexp_matches question

2012-09-05 Thread David Johnston
On Sep 5, 2012, at 19:02, Sergio Basurto sbasu...@soft-gator.com wrote:

 On Tue, 2012-09-04 at 21:58 -0400, David Johnston wrote:
 
 On Sep 4, 2012, at 21:39, Sergio Basurto sbasu...@soft-gator.com wrote:
 
  I am using regexp_matches in a function like this
  
  create or replace function test (v_string   in text) returns varchar 
  as $$
  declare
  i_strings   text[];
  i_stringtext[];
  
  i_strings := 
  regexp_matches(v_string,E'[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+','g');
 
 You can store a single array value into i_strings.  It does not magically 
 convert a multi-row result into an array.  You can use ARRAY_AGG to do so or 
 execute the query directly as part of the loop while using a record 
 variable to store the current row's value(s). 
 
  
  -- Then I use  the results
  foreach i_string slice 1 in array i_strings
  loop
  raise notice 'row = %',i_string;
  end loop;
  
  when I run the function like this:
  
  select test('1:Warehouse1;2:Warehouse2;');
  
  postgresql complains:
  ERROR:  query SELECT 
  regexp_matches(v_string,E'[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+','g')
   returned more than one row
  
  Why postgres is sending the ERROR?
  
  Off course I am expecting more than one row!, that's why is in a foreach 
  loop in the first place.
  
  If I run:
  select 
  regexp_matches('1:Warehouse1;2:Warehouse2;',E'[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+','g');
  regexp_matches 
  
  {1:Warehouse1}
  {2:Warehouse2}
  (2 rows)
  
  I am doing something wrong?
 
 Note that because you do not use grouping in your expression there is only a 
 single array cell in each row - but there could be more than one in which 
 case your for-each above would effectively loop through each sub-component 
 of the match.
 
  
  Regards,
 
 
 David J.
 
 Thanks for your response David, but my doubt arises because if I use this
 
 i_strings text[] := array [['1:Warehouse1'],['2:Warehouse2']];
 
 it loops without a problem. Isn't that the same thing?
 
 it prints:
 
 NOTICE: row = {1:Warehouse1}
 NOTICE: row = {2:Warehouse2}

A 2-dimensional array is not the same as a set of 1-dimensional arrays.

David J.
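A minimal sketch of the two approaches David describes (the function body below is reconstructed for illustration, not taken verbatim from the thread; the function name `test_matches` is made up, and FOREACH/FOR-IN-query require PostgreSQL 9.1+/8.4+ respectively):

```sql
-- Approach 1: loop over the set-returning function directly,
-- no intermediate array needed.
create or replace function test_matches (v_string in text) returns void as $$
declare
    i_string text[];   -- one match row per iteration
begin
    for i_string in
        select regexp_matches(v_string,
               E'[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+', 'g')
    loop
        raise notice 'row = %', i_string;
    end loop;
end;
$$ language plpgsql;

-- Approach 2: aggregate the first capture of each row into one text[]
-- with ARRAY_AGG, then loop over that:
--   select array_agg(m[1]) into i_strings
--   from regexp_matches(v_string, E'...', 'g') as m;
--   foreach i_str in array i_strings loop ... end loop;
```

Calling `select test_matches('1:Warehouse1;2:Warehouse2;');` should then raise the two NOTICE lines shown earlier in the thread.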




Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread David Boreham
I dunno, perhaps I don't get out of the office enough, but I just don't 
hear about MySQL any more.

I think this thread is tilting at windmills.

A few years ago about 1 in 2 contracts we had was with a start-up using
MySQL. The other half were using either PG or Oracle or SQLServer. The
years before that, pre-dot-com-crash, every start-up used Oracle,
presumably because Larry had some Vulcan mind grip over the VCs.

Then Oracle acquired Sun (and MySQL with it), and anyone with a brain and
some imagination figured out where that would lead eventually.

So today everyone I meet is either using PostgreSQL or some web scale
store like Riak, MongoDB, or Cassandra. MySQL is nowhere to be seen. I'm
not sure if that's because folks migrated from MySQL to something else, or
because the MySQL-using companies were the ones that went out of business.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread Chris Travers
On Wed, Sep 5, 2012 at 2:40 AM, Achilleas Mantzios 
ach...@smadev.internal.net wrote:

 On Wed 05 Sep 2012 10:51:49 Ivan Sergio Borgonovo wrote:
  On Tue, 4 Sep 2012 19:14:28 -0700
  Chris Travers chris.trav...@gmail.com wrote:
 
   So people are using PostgreSQL in roles that aren't very visible
   anyway, DBA's are usually coming to PostgreSQL from other RDBMS's,
   and few applications are really distributed for PostgreSQL.
 
  I know a bunch of people working for huge sites that love Postgres but
  use MySQL. The main reason is they build what Postgres is famous for at
  a higher level and in a more specialized way with their own glue.
 

 Postgresql has more meaning in the enterprise than in a web site.
 Web Content is never critical. The world will keep turning even if some
 CSS file or some article gets lost. They are meant to be transient anyway.
 They are not part of anything bigger.


On top of that, you also have to recognize this:  In most content
management areas, data truncation, etc. is perfectly reasonable (and in
fact desirable) as a way of handling data that is too long.  Most of
MySQL's historical gotchas were features built in for light-weight content
management.  If a comment on a blog is too long, why make the application
specify truncation?  Just truncate it and get it over with.

Of course this meant MySQL couldn't move beyond content management safely
until they addressed that, but now they have gone to a different niche
which is again entirely different from ours:  one-app-per-database capable
of customized behavior in order to achieve portability.  However since
every session can set sql_mode, this approach again limits MySQL to
that specific lowest-common-denominator market.  Sure, you can set sql_mode
= 'TRADITIONAL', but you have to cope with the fact that every other
application writing to the tables could set its own sql_mode, which means
you can't count on strict mode to mean anything.
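For the record, the truncation behavior being described looks roughly like this in a MySQL session (a sketch with an invented table; exact warning/error text and the default sql_mode vary by MySQL version):

```sql
-- MySQL, not PostgreSQL.  Hypothetical table for illustration.
CREATE TABLE blog_comment (body VARCHAR(10));

SET sql_mode = '';            -- permissive mode: oversized data is truncated
INSERT INTO blog_comment VALUES (REPEAT('x', 50));
-- stored as 'xxxxxxxxxx' (10 chars), with only a warning

SET sql_mode = 'TRADITIONAL'; -- strict mode: the same insert now fails
INSERT INTO blog_comment VALUES (REPEAT('x', 50));
-- ERROR 1406 (22001): Data too long for column 'body'
```

Nothing stops the next session from flipping sql_mode back, which is exactly the point being made above.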

For historical and software licensing reasons, however, this second
one-app-per-db market is *huge.*



 Postgresql shines whenever data matters. I cannot imagine running our app
 (single master, 80+ slaves in 80+ vessels in the 7 seas (80+ = 80 and
 growing)) in mysql.
 We have not lost a single transaction. We have not had a single integrity
 issue.
 All problems were due to our own fault and never postgresql's.
 Running a variety of 7.4 / 8.3 versions (unfortunately upgrading to 9+ is a
 very hard task to manage)
 (now all are on 8.3) we never had any issues. And the servers run
 unattended,
 in almost military (marine) conditions, with frequent blackouts, hardware
 failures due to vibration,
 disk failures, mother board failures, CPU failures, memory failures.
 Postgresql just delivered.


Now there's a case study.   You should write it up or even just submit what
you wrote above.


 And the thing is that postgresql really has no rivals either: no
 competitor when it comes to a full-featured OSS RDBMS. There are OSS
 RDBMSs (MySQL) and full-featured RDBMSs (DB2/Oracle), but none besides
 pgsql that combines both worlds.

 Also, as far as extensibility is concerned, postgresql is clearly the king.


No kidding there.

Best Wishes,
Chris Travers


Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread Chris Travers
Regarding MySQL vs PostgreSQL:

MySQL is what you get when app developers build a database server.
PostgreSQL is what you get when db developers build a development platform.

There really isn't anything more to say about it.

Best Wishes,
Chris Travers


Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread Damian Carey
On Sat, Sep 1, 2012 at 5:25 AM, Andy Yoder ayo...@airfacts.com wrote:

 Hello all,

 I would like the community's input on a topic.  The words "too far out of
 the mainstream" are from an e-mail we received from one of our clients,
 describing the concern our client's IT group has about our use of
 PostgreSQL in our shop.  The group in question supports multiple different
 databases, including Oracle, MySQL, SQLServer, DB2, and even some
 non-relational databases (think Cobol and file-based storage), each type
 with a variety of applications and support needs.  We are in the running
 for getting a large contract from them and need to address their question:
 "What makes PostgreSQL no more risky than any other database?"

 Thanks in advance for your input.

 Andy Yoder



Hi all,

I really don't want to waste your time too much on this - so please ignore
if so - but I have been watching this group for many years. There are more
than 26K emails in my inbox! You never hear from me because really I'm more
a Java guy (PG hides behind Hibernate here) - and in the end PG just works
perfectly forever on any dodgy customer PC we install on. So on this
esteemed group I'm no more than a novice, although I would claim much dev
and management experience.

FWIW, my comments on this thread are as follows ...
- The issue is one of mind share - MySQL has it, PG deserves it.
- PG desperately needs to have ammunition available for this OP
- PG already convinces highly astute people who have time, knowledge, and
inclination
- PG misses too many people in influential positions who don't have the
above

This mainly calls for ...
(1) to have a visible community
(2) to have endorsements
(3) to be seen regularly

So what to do? ...
- Every PG conference (or gathering, expo trade show etc) should be
expected to submit photos and a brief story of what went on. Who (half
famous or important) was there, what was discussed, issues of the day etc.
- Rock stars within or close enough to PG should be asked to write
endorsements. CEOs, CIOs, gun devs, consultants.
- Key PG people should be rostered to contribute one or two articles per
year to mags, sites, etc
- All the above should be posted on the website under the banner
"Community" or something.
- Anything more than 12-18 months old is trashed.

I'm trying to think of things that take 1-4 hours here and there.

Yip - I know I'm allocating work around where I have no right to do so, but
I think that these soft issues are as important as ACID and replication.

Anyway - nuff said - I'll return to my OutOfMemory exception. At least I
know the data is safe.

Cheers,
-Damian


Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread Edson Richter

On 05/09/2012 23:49, Chris Travers wrote:

Regarding MySQL vs PostgreSQL:

MySQL is what you get when app developers build a database server.
PostgreSQL is what you get when db developers build a development 
platform.


There really isn't anything more to say about it.


This kind of claim is just to feed flame wars. Don't think I need to 
state that a db developer becomes an app developer as soon as he 
starts to develop any database server code, right?


Edson.



Best Wishes,
Chris Travers






Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread Scott Marlowe
On Wed, Sep 5, 2012 at 8:56 PM, Edson Richter edsonrich...@hotmail.com wrote:
 On 05/09/2012 23:49, Chris Travers wrote:

 Regarding MySQL vs PostgreSQL:

 MySQL is what you get when app developers build a database server.
 PostgreSQL is what you get when db developers build a development
 platform.

 There really isn't anything more to say about it.


 This kind of claim is just to feed flame wars. Don't think I need to state
 that a db developer becomes an app developer as soon as he starts to
 develop any database server code, right?

I kind of agree with both of you somewhat.

A lot of developers think of their data in a hierarchical manner. If
so, a simple key-value data store is often your best answer.

MySQL's basic design is that of a simple key-value data store
parading as a relational database.  While it's had a lot done to it to
make it better in the role of relational data manager, it's still got
a lot of baggage from back in the day that means that as you go from
simple data store to complex relational data management, you start to
notice warts, like a planner that's dumb as a stump, and very simple
join methods that make complex queries painfully slow.  For people who
are just storing and retrieving lots of simple data, it's still great.

PostgreSQL's heritage was correctness in SQL and set theory.   The
obvious example is queries that MySQL, or at least older versions of
it, would run that Postgresql would, correctly, throw an error on.
Simple example is:

select a,b,c from sometable group by a;

assuming there's no PK on a, this query SHOULD throw an error because
in that case which values you get for b and c are both undefined, and
the SQL standard says that it should therefore throw an error.
Performance and ease of use were not priorities for most of its early
life, so the MySQL philosophy of just run the query and give me the
wrong answer like I asked wasn't good enough.
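To make that concrete, the two standard-conforming ways to rewrite such a query (table and columns hypothetical) are:

```sql
-- Either aggregate the ungrouped columns ...
select a, max(b) as max_b, count(c) as c_count
from sometable
group by a;

-- ... or group by every selected column:
select a, b, c
from sometable
group by a, b, c;
```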

They started from very different places, and while they've moved
towards each other over the last decade, their heritages mean they
still have very different strengths and weaknesses.

If you write code by grabbing small globs of data from the db, doing
the mangling in the CPU, then stuffing them back out to the db, MySQL
might be your best choice. If you write code by transforming data sets
in the database, then PostgreSQL is likely your best bet.

The problem you run into is when you're only familiar with one db
and you're trying to use it like the other one.  MySQL will dominate
at apps that mostly read a lot of tiny bits of data and occasionally
write chunks of data out.  PostgreSQL will dominate at lots of atomic
updates or large data transformations all taking place in the db
layer, not in app code.




Re: [GENERAL] max_connections

2012-09-05 Thread jam3
According to 

http://www.postgresql.org/docs/8.3/static/kernel-resources.html

the maximum shared memory usage of a connection in bytes is

1800 + 270 * max_locks_per_transaction

max_locks_per_transaction defaults to 64, giving

19080 bytes

or about 0.018 MB per connection,

or

about 1.82 MB at the default 100 connections.

With a gig of physical RAM and shared_buffers set to the usual 25% (256 MB
dedicated to Postgres), the default connection count uses roughly 0.7% of
that for connections.

You can extrapolate this out taking into consideration all your specific
variables (total physical RAM, postgresql.conf settings, etc.), but I wouldn't
run off to use pgpool unless you're in an extremely connection-heavy
environment, as it adds an additional layer within the client-server
connection and is another component to configure and administer.
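The arithmetic above can be reproduced directly in psql (assuming the 8.3 formula and the default max_locks_per_transaction of 64):

```sql
select 1800 + 270 * 64                                as bytes_per_conn,
       round((1800 + 270 * 64) / 1024.0 ^ 2, 3)       as mb_per_conn,
       round(100 * (1800 + 270 * 64) / 1024.0 ^ 2, 2) as mb_at_100_conns;
-- bytes_per_conn | mb_per_conn | mb_at_100_conns
--          19080 |       0.018 |            1.82
```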











Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread Edson Richter

On 06/09/2012 00:39, Scott Marlowe wrote:

On Wed, Sep 5, 2012 at 8:56 PM, Edson Richter edsonrich...@hotmail.com wrote:

On 05/09/2012 23:49, Chris Travers wrote:


Regarding MySQL vs PostgreSQL:

MySQL is what you get when app developers build a database server.
PostgreSQL is what you get when db developers build a development
platform.

There really isn't anything more to say about it.


This kind of claim is just to feed flame wars. Don't think I need to state
that a db developer becomes an app developer as soon as he starts to
develop any database server code, right?

I kind of agree with both of you somewhat.

A lot of developers think of their data in a hierarchical manner. If
so, a simple key-value data store is often your best answer.

MySQL's basic design is that of a simple key-value data store
parading as a relational database.  While it's had a lot done to it to
make it better in the role of relational data manager, it's still got
a lot of baggage from back in the day that means that as you go from
simple data store to complex relational data management, you start to
notice warts, like a planner that's dumb as a stump, and very simple
join methods that make complex queries painfully slow.  For people who
are just storing and retrieving lots of simple data, it's still great.

PostgreSQL's heritage was correctness in SQL and set theory.   The
obvious example is queries that MySQL, or at least older versions of
it, would run that Postgresql would, correctly, throw an error on.
Simple example is:

select a,b,c from sometable group by a;

assuming there's no PK on a, this query SHOULD throw an error because
in that case which values you get for b and c are both undefined, and
the SQL standard says that it should therefore throw an error.
Performance and ease of use were not priorities for most of its early
life, so the MySQL philosophy of just run the query and give me the
wrong answer like I asked wasn't good enough.

They started from very different places, and while they've moved
towards each other over the last decade, their heritages mean they
still have very different strengths and weaknesses.

If you write code by grabbing small globs of data from the db, doing
the mangling in the CPU, then stuffing them back out to the db, MySQL
might be your best choice. If you write code by transforming data sets
in the database, then PostgreSQL is likely your best bet.

The problem you run into is when you're only familiar with one db
and you're trying to use it like the other one.  MySQL will dominate
at apps that mostly read a lot of tiny bits of data and occasionally
write chunks of data out.  PostgreSQL will dominate at lots of atomic
updates or large data transformations all taking place in the db
layer, not in app code.


Yes, I heard from a beginner devel that he likes MySQL because it gives 
fewer errors: PostgreSQL was always bugging his app, complaining about 
some foreign keys.

I just had to step out for a laugh :-)

Nevertheless, I have a large app, and I admit: I tried to run it with 
MySQL+InnoDB. I'll never make the same mistake twice. My data got corrupted 
(foreign keys were ignored, as well as primary keys), and I got lots 
of zombie records in the database.


Nowadays, I just limit myself to adult databases: PostgreSQL (my 
preferred one over the last 5 years because it just works), MS SQL (because I 
worked with it for most of my professional life since the 1990s; and yes, I 
used it when it was just Sybase's core), Oracle (though I think it's 
like a big expensive white elephant) and DB2, which surprised me in its 
last incarnation.


What do I feel is missing in PgSQL? Tools that help me improve performance. 
Every time I need to analyze a query, I miss the MS SQL Server 
performance analyzer tool and the DB2 optimizer :-)


But life is like that, and I got used to it. And PostgreSQL has been 
working very well since 8.4 for me (as of today, all my databases, 
development and production, run 9.1 without any trouble).


Regards,

Edson





Re: [GENERAL] Too far out of the mainstream

2012-09-05 Thread Chris Travers
On Wed, Sep 5, 2012 at 7:56 PM, Edson Richter edsonrich...@hotmail.comwrote:

 On 05/09/2012 23:49, Chris Travers wrote:

  Regarding MySQL vs PostgreSQL:

 MySQL is what you get when app developers build a database server.
 PostgreSQL is what you get when db developers build a development
 platform.

 There really isn't anything more to say about it.


 This kind of claim is just to feed flame wars. Don't think I need to state
 that a db developer becomes an app developer as soon as he starts to
 develop any database server code, right?


I don't mean it that way.

The basic thing is that MySQL's view of data integrity is extremely
application centric.  Even today, applications get to tell the server
whether to throw an error when you try to insert '0000-00-00' into a date
field (this is via the sql_mode setting, and admins can't restrict what an
app can do there).  MySQL makes perfect sense when you are an application
developer looking at the database as a place to store information for your
own private use. In essence, MySQL makes perfect sense when you realize
that "my" = "private" in OO terms.

This isn't necessarily a bad thing if that's what you are using it for, and
because of the way the db market has developed there are a huge number of
developers who are very happy with a lowest common denominator RDBMS where
you can assume one app writing to the db (or at least any given relation),
and possibly other apps reading.  In short, if you want an easy db to port
SQL code that was intended to be portable to, MySQL is the RDBMS for you.
For people who want to avoid putting business logic in the db, and want to
put all the APIs for interoperability and integration in their app logic,
it's a good RDBMS.  In fact, I can't actually think of a better one.  This is
*especially true* if you want to make it dangerous for other apps to write
to the db, perhaps in order to say "this is not supported" and ask people to
purchase more client access licenses.

MySQL behavior that seems incorrect is not necessarily incorrect in
that context.  It's a data store for one app to write to and optionally
other apps to read from.  The app can be trusted not to do crazy things
with sql_mode settings or the like, and if it does, whatever the app tells
the db is the correct behavior the db is supposed to carry out.

PostgreSQL on the other hand has been engineered from the beginning (as I
understand it) with the idea that you have multiple applications writing to
the same relations.  So a lot of the things like sql_mode settings, which
are great for porting applications to MySQL, would be dangerous in a
PostgreSQL context.  The relations are a public API, while in MySQL they
are at least semi-private.  Additionally from the beginning you have had a
very strong emphasis on being able to do advanced data modelling in
PostgreSQL perhaps to an extent even today unparalleled elsewhere.  If you
are going to do db-level programming in PostgreSQL, you shouldn't IMO think
like an application developer but rather like a database developer.

What I am getting at is that if you are an app developer looking at
databases, MySQL looks fine, and the warts more or less match how you would
tend to think a db should act anyway.  If you are a db developer,
PostgreSQL tries hard, where we all agree on correct db behavior, to do the
right thing regardless of what the app might have intended.  On the
other hand, this is mostly a platform for data modelling, and if you are an
app developer a lot of things will seem weird in that context until you get
used to it.

Like it or not, the perspectives are very different.  If all you want is an
information store for your app with reporting capabilities, then you end up
with a different solution than if you want to manage data in a centralized
way.

Best Wishes,
Chris Travers