alex1 is a table with 47 million rows and a rule that deletes from another
table whenever an insert is made into alex1. Insertions into alex1 are
causing a seq scan that takes a very long time:
"EXPLAIN insert into alex1 (fileid, archiveset) select 35352974, 10003;"
(EXPLAIN output is at
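The symptom is consistent with the rule's DELETE scanning the other table once per insert. A sketch of the likely shape and the fix — the other table's name and its exact column list are assumptions, not taken from the report:

```sql
-- Hypothetical shape of the rule on alex1: each insert also
-- deletes matching rows from a second table.
CREATE RULE alex1_cleanup AS ON INSERT TO alex1
    DO ALSO DELETE FROM other_table
        WHERE fileid = NEW.fileid AND archiveset = NEW.archiveset;

-- Without an index on the columns the DELETE filters on, every
-- insert pays for a sequential scan of other_table. An index
-- lets the planner use an index scan instead:
CREATE INDEX other_table_fileid_archiveset_idx
    ON other_table (fileid, archiveset);
```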
pgbench initialization has been going on for almost 5 hours now and is still
stuck before vacuum starts .. something is definitely wrong, as I don't
remember it taking this long the first time I created the db. Here are the current
stats now:
*iostat (xbd13-14 are WAL zpool)*
device r/s w/s kr/s
On 09/12/12 10:42 PM, Dann Corbit wrote:
I just let it sit. 3.5 hours later, it completed.
...
I have several versions of PostgreSQL already installed on this
machine. We need to test our products' compatibility with PostgreSQL
database systems.
I tried to install 9.2 64 bit using th
The reboot might have purged the installation log.
On Thu, Sep 13, 2012 at 11:22 AM, Dann Corbit wrote:
> I should mention also that the new PostgreSQL instance is working fine.
> I am looking forward to testing it.
>
>
> *From:* Dann Corbit
> *Sent:* Wednesday, September 12, 2012 1
If possible, can you share the installation log?
(%TEMP%\install-postgresql.log)
On Thu, Sep 13, 2012 at 11:12 AM, Dann Corbit wrote:
> I just let it sit. 3.5 hours later, it completed.
>
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org]
I just let it sit. 3.5 hours later, it completed.
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Dann Corbit
Sent: Wednesday, September 12, 2012 1:24 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] how long to wait on 9.2 bitrock installe
On 09/12/12 10:01 PM, Sébastien Lorion wrote:
pgbench initialization has been going on for almost 5 hours now and is
still stuck before vacuum starts .. something is definitely wrong, as I
don't remember it taking this long the first time I created the db
pgbench initialization with a high scale facto
On Tue, Aug 21, 2012 at 1:18 AM, Vincent Veyron wrote:
> On Tuesday, August 21, 2012 at 01:33 -0400, Sébastien Lorion wrote:
>
> >
> >
> > Since Amazon has added new high I/O instance types and EBS volumes,
> > has anyone done any benchmarks of PostgreSQL on them?
> >
>
> I wonder: is there a reaso
On Wed, Sep 12, 2012 at 8:43 PM, Mike Christensen wrote:
> On Tue, Sep 11, 2012 at 6:49 AM, Adrian Klaver
> wrote:
>> On 09/10/2012 09:43 PM, Mike Christensen wrote:
>>> Is there a place to download pgAdmin 1.16 for openSuse (or a
>>> repository I can add?)
>>>
>>> All I can find is packages for
On Tue, Sep 11, 2012 at 6:49 AM, Adrian Klaver wrote:
> On 09/10/2012 09:43 PM, Mike Christensen wrote:
>> Is there a place to download pgAdmin 1.16 for openSuse (or a
>> repository I can add?)
>>
>> All I can find is packages for 1.14, however this version is unable to
>> connect to Postgres 9.2
Wells Oliver writes:
> It seems like the following is redundant:
> host all all 0.0.0.0/0 md5
> hostnossl replication replicationuser 0.0.0.0/0 md5
> The first one allows either SSL or non SSL to all users for all DBs from
> any address via MD5. Good for e
It seems like the following is redundant:
host all all 0.0.0.0/0 md5
hostnossl replication replicationuser 0.0.0.0/0 md5
The first one allows either SSL or non SSL to all users for all DBs from
any address via MD5. Good for everything, no? Yet if I remove
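Whether the second line is redundant hinges on a subtlety of pg_hba.conf: lines are matched top to bottom, first match wins, and the database keyword `all` does not match replication connections, which only match an explicit `replication` entry. So the two quoted lines cover disjoint cases, as a sketch:

```
# Matches ordinary connections: any user, any database,
# SSL or not, password checked with md5.
host      all          all              0.0.0.0/0  md5

# Still needed: "all" above does NOT cover replication
# connections; they require an explicit "replication" keyword.
hostnossl replication  replicationuser  0.0.0.0/0  md5
```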
On Wed, Sep 12, 2012 at 5:00 PM, Wells Oliver wrote:
> We have a few tables that are updated nightly on the order of deleting and
> inserting 500k records.
Out of how many in total?
> I assumed autovacuum would do its thing and clean up the dead tuples, but in
> looking at pg_stat_user_tables, I
Forgot to say that this is with the new suggested values (see included
postgresql.conf) and the ARC cache size set to 32GB.
Sébastien
On Wed, Sep 12, 2012 at 9:16 PM, Sébastien Lorion
wrote:
> I recreated the DB and WAL pools, and launched pgbench -i -s 1. Here
> are the stats during the load (st
I recreated the DB and WAL pools, and launched pgbench -i -s 1. Here
are the stats during the load (still running):
*iostat (xbd13-14 are WAL zpool)*
device r/s w/s kr/s kw/s qlen svc_t %b
xbd8 0.0 471.5 0.0 14809.3 40 67.9 84
xbd7 0.0 448.1 0.0 14072.6 3
One more question .. I could not set wal_sync_method to anything else but
fsync .. is that expected, or should other choices also be available ? I am
not sure how the SSD cache flushing is handled on EC2, but I hope it is
flushing the whole cache on every sync .. As a side note, I got corrupted
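The set of accepted wal_sync_method values depends on which sync primitives the platform exposes, so on FreeBSD some of the usual options may simply be compiled out. A generic sketch (not from the thread) for inspecting the current value:

```sql
-- Show the active wal_sync_method, its compiled-in default,
-- and where the current value came from (config file, default, ...)
SELECT name, setting, boot_val, source
FROM pg_settings
WHERE name = 'wal_sync_method';
```

The pg_test_fsync utility shipped with PostgreSQL benchmarks each method the platform supports, which also reveals which ones are actually available.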
On 12/09/2012 15:09, Kevin Grittner wrote:
VACUUM FREEZE VERBOSE ANALYZE
Sorry, most of the messages are in Portuguese, but I guess the numbers are
more important, right?
INFO: index "pk_notafiscalarq" now contains 715084 row versions
in 1963 pages
DETAIL: 0 row versions
Is dedicating 2 drives for WAL too much ? Since my whole raid is comprised
of SSD drives, should I just put it in the main pool ?
Sébastien
On Wed, Sep 12, 2012 at 8:28 PM, Sébastien Lorion
wrote:
> Ok, makes sense .. I will update that as well and report back. Thank you
> for your advice.
>
> Sé
Ok, makes sense .. I will update that as well and report back. Thank you for
your advice.
Sébastien
On Wed, Sep 12, 2012 at 8:04 PM, John R Pierce wrote:
> On 09/12/12 4:49 PM, Sébastien Lorion wrote:
>
>> You set shared_buffers way below what is suggested in Greg Smith's book
>> (25% or more of R
The DB back-end of my application has 2 use cases:
- a normalized master DB, sharded by userid (based on their activity, not a
formula such as modulo, because some users can be 1-2 orders of magnitude
more active than others)
- many denormalized read-only slaves, with some different models dependi
On 09/12/12 5:00 PM, Wells Oliver wrote:
We have a few tables that are updated nightly on the order
of deleting and inserting 500k records.
I assumed autovacuum would do its thing and clean up the dead tuples,
but in looking at pg_stat_user_tables, I notice there are lots and
lots of dead tup
On 09/12/12 4:49 PM, Sébastien Lorion wrote:
You set shared_buffers way below what is suggested in Greg Smith's book
(25% or more of RAM) .. what is the rationale behind that rule of
thumb ? Other values are more or less what I set, though I could lower
the effective_cache_size and vfs.zfs.arc_ma
We have a few tables that are updated nightly on the order of deleting and
inserting 500k records.
I assumed autovacuum would do its thing and clean up the dead tuples, but
in looking at pg_stat_user_tables, I notice there are lots and lots of dead
tuples, on the order of 500-600k.
What can I d
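Two things worth checking, sketched below: how many dead tuples the stats views report, and whether autovacuum's default 20% scale factor is simply too lax for a table that churns 500k rows nightly (the table name is a placeholder):

```sql
-- Dead-tuple counts and the last autovacuum run per table
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;

-- Make autovacuum kick in sooner on the high-churn table
-- (hypothetical name); 0.02 = vacuum at ~2% dead tuples.
ALTER TABLE nightly_churn_table
    SET (autovacuum_vacuum_scale_factor = 0.02);
```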
You set shared_buffers way below what is suggested in Greg Smith's book (25%
or more of RAM) .. what is the rationale behind that rule of thumb ? Other
values are more or less what I set, though I could lower the
effective_cache_size and vfs.zfs.arc_max and see how it goes.
Sébastien
On Wed, Sep 12
On 09/12/12 4:03 PM, Sébastien Lorion wrote:
I agree 1GB is a lot, I played around with that value, but it hardly
makes a difference. Is there a plateau in how that value affects query
performance ? On a master DB, I would set it low and raise as
necessary, but what would be a good average valu
I agree 1GB is a lot, I played around with that value, but it hardly makes
a difference. Is there a plateau in how that value affects query
performance ? On a master DB, I would set it low and raise as necessary,
but what would be a good average value on a read-only DB with same spec and
max_connec
On 09/12/12 3:17 PM, François Beausoleil wrote:
What are the iostat / vmstat numbers during the test?
note you need to run iostat with -x and ignore the first
sample as it's the average since reboot. I usually use 5, 10, or 30 second
intervals when analyzing IO performance problems.
on a s
Hi,
On 12 September 2012 16:41, Kenaniah Cerny wrote:
> In the service script that gets installed to /etc/rc.d/init.d/, there is a
> hard-coded value for PGPORT. Would it be possible to have this variable and
> the corresponding -p flag set when calling postgres removed?
My init.d script has the
On 2012-09-12 at 17:08, Sébastien Lorion wrote:
> As you can see, I am nowhere near the results John mentioned for a 10,000
> scale (about 8000 tps) and I am not sure why. My instance setup and
> configuration should be ok, but I am far from an expert (a startup founder
> has to wear many ha
Finally I got time to setup an instance and do some tests.
Instance:
High-Mem 4x large (8 cores, 68 GB)
EBS-Optimized flag set (allow up to 1000 Mbits/s transfer)
10GB standard EBS for OS
8x100GB in RAID10 for data (max 1000 iops)
2x100GB in RAID0 for WAL (max 1000 iops)
FreeBSD 9.0
PostgreSQL
I have several versions of PostgreSQL already installed on this machine. We
need to test our products' compatibility with PostgreSQL database systems.
I tried to install 9.2 64 bit using the one click installer from this location:
http://www.enterprisedb.com/products/pgdownload.do#windows
I
On Wed, Sep 12, 2012 at 09:54:57PM +0200, Henry C. wrote:
>
> Thanks to all who responded - upgrade was successful!
>
> One final note, when using pg_upgrade ... --link, it finally recommends
> use of delete_old_cluster.sh to remove the old data files. I'm tempted,
> but --link re-uses old data
Thanks to all who responded - upgrade was successful!
One final note, when using pg_upgrade ... --link, it finally recommends
use of delete_old_cluster.sh to remove the old data files. I'm tempted,
but --link re-uses old data files,... bit of a contradiction there, if you
follow my meaning?
Is
In an attempt to get a hackfix for
http://pgfoundry.org/tracker/index.php?func=detail&aid=1011203&group_id=1000411&atid=1376,
I'm wondering if it's true that, when looking at pg_locks, the only pid which
will have virtualxid = '1/1' and virtualtransaction = '-1/0' will be the
bgwriter. That seems
"Henry C." writes:
> Anyway, thanks for the feedback. Ok, so it's simply a case of:
> update pg_database set datcollate='C', datctype='C';
> Correct? Mine are all POSIX, so no WHERE clause needed.
Yeah, that should do it.
regards, tom lane
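A quick check that the update took, as a sketch:

```sql
-- Confirm collation/ctype now read "C" for every database
SELECT datname, datcollate, datctype FROM pg_database;
```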
> "C" is the official name of that locale. Not sure how you got it to say
> "POSIX" ... maybe we didn't have normalization of the locale name back
> then?
>
> Anyway, simplest fix seems to be to update the 9.0 installation's
> pg_database to say "C" in those entries.
Never ceases to amaze me wher
"Henry C." writes:
> 2. My 9.0 (old) db is using encoding/Collation/Ctype:
> SQL_ASCII/POSIX/POSIX, and for the life of me I cannot initdb the new
> (9.2) data/ folder to the same to ensure pg_upgrade works (currently keep
> getting "old and new cluster lc_collate values do not match" error).
"
Hi,
On Wed, 2012-09-12 at 20:12 +0200, Henry C. wrote:
>
> 1. I ran into the (usual?) issue with ld libraries conflicting, so
> renamed /etc/ld.so.conf.d/postgresql-9.0-libs.conf to blah, and reran
> ldconfig, then I could initdb the new data/ folder
For the record, I fixed this issue in the
Hi all,
Using centos 5.x
I'm trying to upgrade (without having to dump/restore a 1.5TB db) from 9.0
to 9.2 using pg_upgrade, but am having a few issues.
1. I ran into the (usual?) issue with ld libraries conflicting, so
renamed /etc/ld.so.conf.d/postgresql-9.0-libs.conf to blah, and reran
ldcon
Edson Richter wrote:
> SELECT pg_size_pretty(pg_database_size('mydatabase'));
> pg_size_pretty
>
> 7234 MB
> SELECT nspname || '.' || relname AS "relation",
> pg_size_pretty(pg_relation_size(C.oid)) AS "size"
>FROM pg_class C
>LEFT JOIN pg_namespace N ON (N.oi
On 12/09/2012 09:16, Kevin Grittner wrote:
Edson Richter wrote:
On 12/09/2012 00:37, Edson Richter wrote:
On 11/09/2012 14:59, Kevin Grittner wrote:
Edson Richter wrote:
[biggest relation was a table heap with 29321 pages]
[block size is 8 KB]
So your biggest table is actual
Thanks. I was hoping there was a way to enable individual operators through the
postgresql configuration file or something.
Anyway, I replaced the functionality with $content =~
s/([a-fA-F0-9]{2})/chr(hex($1))/eg; which seems to be doing the same thing as
unpack( "H*", $content ), which is basi
Hi Jeremy,
On Wed, September 12, 2012 6:23 pm, Jeremy Whiting wrote:
> I tried installing using the Direct Download RPMs for two systems I am
> using. The installation using the rpm files did not install in one case
> and the other did not install all files to the target directory I was
> expec
Hi,
On Wed, September 12, 2012 7:09 pm, Scott Marlowe wrote:
> Sorry, I wasn't clear. I meant: does RH support running > 1 cluster of
> the same version with simple commands from the command line. Like
> pg_createcluster does.
Well, no. Except running
cp /etc/init.d/postgresql-9.2 /etc/init.d/se
On Wed, Sep 12, 2012 at 9:59 AM, Devrim GUNDUZ wrote:
>
> Hi,
>
> On Wed, September 12, 2012 6:49 pm, Scott Marlowe wrote:
>
>> Does RH support > 1 cluster of the same version of postgres yet?
>> Looking at the link I can't really tell. Looks like not.
>
> That link exactly tells you how to do t
Hi,
On Wed, September 12, 2012 6:49 pm, Scott Marlowe wrote:
> Does RH support > 1 cluster of the same version of postgres yet?
> Looking at the link I can't really tell. Looks like not.
That link tells you exactly how to do that: running > 1 9.2 clusters in
parallel.
Regards,
On Wed, Sep 12, 2012 at 9:16 AM, Devrim GUNDUZ wrote:
>
> Hi,
>
> On Wed, September 12, 2012 5:54 pm, Scott Marlowe wrote:
>> While RHEL is a solid and reliable OS, it was never built to run > 1
>> version etc of pgsql easily.
>
> I would disagree with this. All you need is this:
>
> http://svn.pg
Hi,
I tried installing using the Direct Download RPMs for two systems I am
using. The installation using the rpm files did not install in one case
and the other did not install all files to the target directory I was
expecting.
1) Upgrading postgresql on a 64 bit Fedora 16 desktop from 9.1.5 to 9
Hi,
On Wed, September 12, 2012 5:54 pm, Scott Marlowe wrote:
> While RHEL is a solid and reliable OS, it was never built to run > 1
> version etc of pgsql easily.
I would disagree with this. All you need is this:
http://svn.pgrpms.org/browser/rpm/redhat/9.2/postgresql/EL-6/README.rpm-dist#L187
On Wed, Sep 12, 2012 at 8:54 AM, Scott Marlowe wrote:
> Your best bet for running > 1 versions and / or > 1 clusters of the
> same version, is to run debian or any debian based distro. You create
> a new cluster like so:
Just a quick note that back when Centos / RHEL was my main db server
OS, I
On Wed, Sep 12, 2012 at 12:41 AM, Kenaniah Cerny wrote:
> Hi all,
>
> I would first like to thank everyone involved for all of the hard work that
> goes into postgres and the RPMs. I have a small request:
>
> In the service script that gets installed to /etc/rc.d/init.d/, there is a
> hard-code
Herouth Maoz writes:
> I created a function that does some heavy string manipulation, so I needed to
> use pl/perl rather than pl/pgsql.
> I'm not experienced in perl, but the function works well when used as an
> independent perl subroutine - it depends only on its arguments. I use the
> Encod
Devrim GÜNDÜZ writes:
> On Tue, 2012-09-11 at 23:41 -0700, Kenaniah Cerny wrote:
>> In the service script that gets installed to /etc/rc.d/init.d/, there is a
>> hard-coded value for PGPORT. Would it be possible to have this variable and
>> the corresponding -p flag set when c
I created a function that does some heavy string manipulation, so I needed to
use pl/perl rather than pl/pgsql.
I'm not experienced in perl, but the function works well when used as an
independent perl subroutine - it depends only on its arguments. I use the
Encode package (in postgresql config
Edson Richter wrote:
> On 12/09/2012 00:37, Edson Richter wrote:
>> On 11/09/2012 14:59, Kevin Grittner wrote:
>>> Edson Richter wrote:
[biggest relation was a table heap with 29321 pages]
[block size is 8 KB]
>>> So your biggest table is actually 229 MB. Something is not add
On 11/09/2012 20:42, Alexander Reichstadt wrote:
> Thanks for the push, I found the information I needed. It works now.
Glad you got sorted. For people looking at the archives, what was the
problem, and how did you fix it?
Ray.
--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie
Hi all!
Why must pg_dump create a fresh new directory every time? I'm running some tests
where I dump a database to a directory, git init and git add --all, then dump
again. When I did that after doing some modifications (specifically creating a
new table and adding a few hundred thousand record
On 12/09/2012 00:37, Edson Richter wrote:
On 11/09/2012 14:59, Kevin Grittner wrote:
Edson Richter wrote:
On 11/09/2012 14:34, Kevin Grittner wrote:
Edson Richter wrote:
Hi,
On Tue, 2012-09-11 at 23:41 -0700, Kenaniah Cerny wrote:
> In the service script that gets installed to /etc/rc.d/init.d/, there is a
> hard-coded value for PGPORT. Would it be possible to have this variable and
> the corresponding -p flag set when calling postgres removed?
>
> Explicitly s
Hi.
I am running streaming replication with PostgreSQL 9.1.5, and using
hot_standby_feedback=on.
Per previous messages, I'm still experiencing query cancellations on
the hot standbys triggered by vacuums on the primary
(http://postgresql.1045698.n5.nabble.com/pg-dump-on-hot-standby-canceled-despi
Hi,
On Wed, 2012-09-12 at 10:58 +0200, Albe Laurenz wrote:
> You should ask the people who roll the RPMs for CentOS,
> they are the ones who created that script.
It is actually Tom and me who are responsible for those init scripts,
CentOS does not roll their own packages.
Regards,
--
Devrim GÜ
Kenaniah Cerny wrote:
> I would first like to thank everyone involved for all of the hard work that
> goes into postgres and
> the RPMs. I have a small request:
>
> In the service script that gets installed to /etc/rc.d/init.d/, there is a
> hard-coded value for
> PGPORT. Would it be possible
I haven't checked more recent versions,
but in 8.2 using
case when new.val is null then 'U' else new.val end
worked a hell of a lot faster than coalesce.
However, just going into the trigger is significant overhead.
Alban's suggestion of using a
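The CASE form quoted above would sit in a trigger roughly like this (a sketch; the table, column, and the default value 'U' are assumptions beyond what the message shows):

```sql
-- BEFORE trigger that forces a default when NULL is inserted;
-- CASE was reported faster than coalesce() here on 8.2.
CREATE FUNCTION force_val_default() RETURNS trigger AS $$
BEGIN
    NEW.val := CASE WHEN NEW.val IS NULL THEN 'U' ELSE NEW.val END;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER force_val_default_trg
    BEFORE INSERT OR UPDATE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE force_val_default();
```

Note that a plain column DEFAULT does not fire when NULL is supplied explicitly, which is the usual reason for reaching for a trigger at all.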
Gustav Potgieter wrote:
> Hope you can assist and that I am posting to the right forum.
Sending the question twice with the same wording does not
make it clearer...
> We currently have multiple Postgresql 9 instances running with warm
> standby, and the replication works wonderfully.
>
> The prob
On 12 September 2012 08:49, Willy-Bas Loos wrote:
> Hi,
>
> I want to force defaults, and wonder about the performance.
> The trigger I use (below) makes the query (also below) take 45% more time.
> The result is the same now, but i do have a use for using the trigger (see
> "background info").
>
Hi all,
I would first like to thank everyone involved for all of the hard work that
goes into postgres and the RPMs. I have a small request:
In the service script that gets installed to /etc/rc.d/init.d/, there is a
hard-coded value for PGPORT. Would it be possible to have this variable and
th