Hello all,
We are exploring possible strategies for deploying PostgreSQL with an
application that will store a fairly large amount of data (the current
implementation stores around 345 GB, and the database may grow to up to
10 times that size).
Now, not to go into too much detail
On Wed, Apr 10, 2013 at 9:07 AM, Vedran Krivokuca vkrivok...@gmail.com wrote:
1) we can go with multiple instances of the PostgreSQL service, let's say
(purely in theory) 10 of them on the same HA cluster setup. Every
instance would hold, say, 1/10th of that big recordset, and around
3.000
All,
We have to migrate to a new hardware infrastructure, and we want to use
EnterpriseDB's Postgres distribution on the new hardware.
On the old infrastructure, Postgres comes from postgresql.org.
The Postgres version on the old and new infrastructure is the same, 9.1
(old 9.1.5
I'd agree, certainly in my experience.
You need to ensure OS parameters such as the max open files limit
(fs.file-max if we're talking Linux) are set appropriately. Bearing
in mind each user will have an open file on each underlying
datafile for the
Yes, log shipping/streaming replication is possible between these
versions, as the major version is the same, i.e. 9.1.x.
Thanks & Regards,
Prashanth Ranjalkar
Database Consultant & Architect
Skype: prashanth.ranjalkar
www.postgresdba.net
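As a sketch of the pieces involved in a 9.1-era streaming replication setup (hostnames, addresses, and the user are illustrative placeholders, not from the thread):

    # postgresql.conf on the primary
    wal_level = hot_standby
    max_wal_senders = 3
    wal_keep_segments = 64

    # pg_hba.conf on the primary
    host  replication  repuser  192.168.1.0/24  md5

    # recovery.conf on the standby (9.1 still uses recovery.conf)
    standby_mode = 'on'
    primary_conninfo = 'host=primary.example.com port=5432 user=repuser'

The standby is then seeded from a base backup of the primary before being started with this recovery.conf in place.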
On Wed, Apr 10, 2013 at 1:58 PM, Herman Pool wrote:
Hello,
Yes, it's possible, but I recommend you update to the current version
soon.
Regards
On Wed, Apr 10, 2013 at 10:43 AM, Dale Betts dale.be...@hssnet.com wrote:
Every table or index is stored as one or more OS files, therefore max
open files needs to be set appropriately in order to support a larger
table count. There is no hard limit on the number of tables you can
create in PostgreSQL; performance would be the major concern, as the
system catalogs get overburdened.
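On Linux, a quick way to check and raise that kernel-wide limit (the number below is only an example; size it to your table and connection counts):

    # Check the current limit and usage:
    #   cat /proc/sys/fs/file-max
    #   cat /proc/sys/fs/file-nr
    # Persist a higher limit in /etc/sysctl.conf, then apply with `sysctl -p`:
    fs.file-max = 500000

The per-process limit (ulimit -n / nofile in limits.conf) matters too, since each backend holds its own file descriptors.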
Hi all,
Can someone please point me to detailed documentation on how to secure/encrypt
connections between PgBouncer and a PostgreSQL database (version 8.4.3)?
Thanks in advance!
Bhanu M. Gandikota
Cell: (415) 420-7740
AFAIK, you have to use stunnel to do it (which is not hard to set up, but
it almost makes you wonder whether you should go to the trouble of using
pgbouncer at all).
I just went through this and I ended up just testing direct connections
through the tunnel without pgbouncer in the middle. It
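For what it's worth, a minimal two-sided stunnel setup might look roughly like this (hostnames, ports, and the cert path are placeholders, not from the thread):

    ; stunnel.conf on the database host (server mode):
    ; terminates TLS and forwards plaintext to the local Postgres
    cert = /etc/stunnel/stunnel.pem
    [pg-server]
    accept  = 5433
    connect = 127.0.0.1:5432

    ; stunnel.conf on the pgbouncer host (client mode):
    [pg-client]
    client  = yes
    accept  = 127.0.0.1:5434
    connect = dbhost.example.com:5433

pgbouncer's [databases] entry would then point at host=127.0.0.1 port=5434, so traffic between the two hosts crosses the wire encrypted while Postgres itself sees an ordinary plaintext connection.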
Hi,
Could someone please explain what these warnings mean in Postgres?
I see these messages a lot when autovacuum runs.

tm:2013-04-10 11:39:20.074 UTC db: pid:13766 LOG: automatic vacuum
of table "DB1.nic.pvxt": could not (re)acquire exclusive lock for truncate
scan
That just means some other process has DML going on against the table
whose tail vacuum wants to truncate. No lock, no truncate.
HTH,
Bambi.
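If you want to see who is holding the conflicting lock when this fires, something along these lines should work (column names are for 9.2+; on 9.1 and earlier use procpid and current_query instead of pid and query; table name taken from the log above):

    -- sessions currently holding or awaiting locks on the table
    SELECT a.pid, a.query, l.mode, l.granted
    FROM   pg_locks l
    JOIN   pg_stat_activity a ON a.pid = l.pid
    WHERE  l.relation = 'nic.pvxt'::regclass;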
Hi Bambi,
Thank you for the prompt reply.
This table is very volatile; lots of inserts/updates happen on this
table (at least 20-30 inserts/min).
When autovacuum tries to run on this table, I get this warning.
Is there a way to force it to happen, because the table/index statistics
are becoming
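One option (a sketch, using the table name from the log) is to lower the per-table autovacuum thresholds so it fires more often and has more chances to win the lock race, or simply to run a manual VACUUM ANALYZE in a quiet window:

    -- make autovacuum trigger sooner on this hot table
    ALTER TABLE nic.pvxt SET (
        autovacuum_vacuum_scale_factor  = 0.01,
        autovacuum_analyze_scale_factor = 0.01
    );

    -- or refresh the statistics by hand
    VACUUM ANALYZE nic.pvxt;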
On Wed, Apr 10, 2013 at 4:59 PM, Armin Resch resc...@gmail.com wrote:
Not sure this is the right list to vent about this but here you go:
I) select regexp_replace('BEFORE.AFTER','(.*)\..*','\1','g') Substring
II) select regexp_replace('BEFORE.AFTER','(.*)\\..*','\\1','g') Substring
Thx for clarification, Craig. Your Perl snippet comes in handy, too.
-ar
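For comparison, the same regex in Python, where a raw string sidesteps the string-literal escaping question entirely (so it behaves like variant I):

```python
import re

# Greedy (.*) runs up to the last dot, so everything after it is dropped.
# Mirrors: regexp_replace('BEFORE.AFTER', '(.*)\..*', '\1', 'g')
result = re.sub(r'(.*)\..*', r'\1', 'BEFORE.AFTER')
print(result)  # -> BEFORE
```

In the SQL case, variant II's doubled backslashes are only needed when standard_conforming_strings is off (the pre-9.1 default), because then the string literal itself consumes one level of escaping before the regex engine sees the pattern.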