Can you please help me understand what the "blkno" column refers to?
Thanks
Venkat
On Wed, Sep 21, 2011 at 11:08 AM, Venkat Balaji wrote:
> Thank Everyone for your inputs !
>
> Mark,
>
> We are using 9.0, so I should be able to make use of this "freespacemap"
> contrib module and would get back to
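For context on the question above: in the freespacemap contrib module, "blkno" is the zero-based page (block) number within the relation, and "avail" is the free space, in bytes, that the free space map records for that page (with the default 8 kB block size, page N starts at byte offset N * 8192). A minimal sketch, assuming the module is installed (via the contrib SQL script on 9.0, or CREATE EXTENSION on 9.1+) and using a hypothetical table name "history":

```sql
-- One row per page of the relation: blkno is the zero-based block
-- (page) number, avail is the free bytes the FSM tracks for it.
SELECT blkno, avail
FROM pg_freespace('history'::regclass)
WHERE avail > 0
ORDER BY blkno
LIMIT 10;
```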
Thank Everyone for your inputs !
Mark,
We are using 9.0, so I should be able to make use of this "freespacemap"
contrib module, and will get back to you with the results.
I was using the query below (which I found by googling)...
But I was not sure if it's picking up the correct information. I want
Thanks, but... it did not work; I get a different error now, though:
ERROR: relation "history" already exists
ERROR: relation "idx_history_pagegroupid" already exists
ERROR: constraint "cdocumentid" for relation "history" already exists
On Wed, Sep 21, 2011 at 4:16 PM, Derrick Rice wrote:
> You don't need "-t
You don't need "-t history" on the psql part. It doesn't do what you think
it does, and it's reading the next part ("history") as the database name.
try:
pg_dump -h -U postgres -t history DATABASENAME | psql -h
hostname2 -U postgres DATABASENAME > db.sql
Derrick
On Tue, Sep 20, 2011 at 11:57
Hi all,
I am trying to update / refresh only one table (history) from the production
database to my test environment database.
My command is as follows:
pg_dump -h -U postgres -t history DATABASENAME | psql -h
hostname2 -U postgres -t history DATABASENAME > db.sql
but I am getting the following error:
psql: F
Stephen Frost writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> really pretty simple: decide whether to use a custom (parameter-aware)
>> plan or a generic (not-parameter-aware) plan.
> Before I go digging into this, I was wondering, is this going to address
> our current problem of not being
Tom,
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> really pretty simple: decide whether to use a custom (parameter-aware)
> plan or a generic (not-parameter-aware) plan.
Before I go digging into this, I was wondering, is this going to address
our current problem of not being able to use prepared que
* Royce Ausburn (royce...@inomial.com) wrote:
> > Tom just mentioned that 9.1 will be able to re-plan parameterized prepared
> > statements, so this issue will go away. In the mean time you can only
> > really use the standard workaround of setting the prepare threshold to 0 to
> > disable server
Andy Lester writes:
> On Sep 20, 2011, at 7:36 PM, Tom Lane wrote:
>> 9.2, sorry, not 9.1. We could use some motivated people testing that
>> aspect of GIT HEAD, though, since I doubt the policy for when to re-plan
>> is quite ideal yet.
> Is motivation and a box enough? I have motivation, but
On Sep 20, 2011, at 7:36 PM, Tom Lane wrote:
> 9.2, sorry, not 9.1. We could use some motivated people testing that
> aspect of GIT HEAD, though, since I doubt the policy for when to re-plan
> is quite ideal yet.
Is motivation and a box enough? I have motivation, but not knowledge of
interna
Craig Ringer writes:
> On 21/09/2011 7:27 AM, Royce Ausburn wrote:
>> We've been worst hit by this query on an 8.3 site. Another site is
>> running 8.4. Have there been improvements in this area recently?
>> Upgrading to 9.0 might be viable for us.
> Tom just mentioned that 9.1 will be able t
On 21/09/2011, at 9:39 AM, Craig Ringer wrote:
> On 21/09/2011 7:27 AM, Royce Ausburn wrote:
>> Hi all,
>>
>> It looks like I've been hit with this well known issue. I have a
>> complicated query that is intended to run every few minutes, I'm using
>> JDBC's Connection.prepareStatement() most
On 21/09/2011 7:27 AM, Royce Ausburn wrote:
Hi all,
It looks like I've been hit with this well known issue. I have
a complicated query that is intended to run every few minutes, I'm
using JDBC's Connection.prepareStatement() mostly for nice
parameterisation, but postgres produces a suboptima
Hi all,
It looks like I've been hit with this well-known issue. I have a complicated
query that is intended to run every few minutes. I'm using JDBC's
Connection.prepareStatement() mostly for nice parameterisation, but postgres
produces a suboptimal plan due to its lack of information when the
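The JDBC-level behaviour described above can be reproduced directly in SQL with PREPARE/EXECUTE: before 9.2, a prepared statement is planned once, without knowledge of the actual parameter values, so a parameter hitting a skewed value distribution can get a poor generic plan. A sketch with hypothetical table and column names:

```sql
-- Hypothetical skewed table: suppose almost every row has
-- status = 'done' and only a handful have status = 'failed'.
PREPARE jobs_by_status(text) AS
    SELECT * FROM jobs WHERE status = $1;

-- On pre-9.2 servers this shows the single generic plan chosen
-- at PREPARE time; it cannot exploit how rare 'failed' is, so it
-- may choose a seqscan where the literal query would use an index.
EXPLAIN EXECUTE jobs_by_status('failed');

DEALLOCATE jobs_by_status;
```

The prepareThreshold=0 workaround mentioned elsewhere in the thread makes the JDBC driver avoid named server-side prepared statements, so each execution can be planned with its actual parameter values.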
On 21/09/11 10:05, Mark Kirkwood wrote:
...then using the freespacemap contrib module should give very
accurate results:
Sorry, should have said - for 8.4 and later!
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://w
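For a whole-table view rather than page-by-page output, the per-page "avail" values from the freespacemap module can be aggregated; a sketch, again assuming the module is installed and using a hypothetical table name "history":

```sql
-- Total FSM-recorded free space for one table, and the fraction
-- of the table's physical size it represents.
SELECT sum(avail) AS free_bytes,
       pg_relation_size('history') AS table_bytes,
       round(100.0 * sum(avail) / pg_relation_size('history'), 2) AS free_pct
FROM pg_freespace('history'::regclass);
```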
On 21/09/11 06:09, Josh Berkus wrote:
Venkat,
I see a lot of free space or free pages in tables and indexes. But I need to
give an exact calculation of how much space will be reclaimed after VACUUM
FULL and RE-INDEXING.
At present, there is no way to calculate this precisely. You can only
est
Venkat,
> I see a lot of free space or free pages in tables and indexes. But I need to
> give an exact calculation of how much space will be reclaimed after VACUUM
> FULL and RE-INDEXING.
At present, there is no way to calculate this precisely. You can only
estimate, and estimates have significa
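One way to put a number on that estimate is the pgstattuple contrib module, which physically scans the table and reports live, dead, and free byte totals; the gap between the table's current size and its live-tuple bytes is roughly an upper bound on what VACUUM FULL could reclaim. A sketch, assuming pgstattuple is installed and using a hypothetical table name "history" (note it reads the whole table, so it is expensive on large relations):

```sql
SELECT table_len,                        -- physical size in bytes
       tuple_len,                        -- bytes of live tuples
       dead_tuple_len,                   -- bytes held by dead tuples
       free_space,                       -- free bytes within pages
       table_len - tuple_len AS approx_reclaimable_bytes
FROM pgstattuple('history');
```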
On 2011-09-20 18:22, Venkat Balaji wrote:
> Hello Everyone,
>
> I had posted a query in the "GENERAL" category; I am not sure if that was the
> correct category to post in.
>
> Please help me understand how to calculate free space in tables and
> indexes even after vacuuming and analyzing are performed.
Merlin Moncure writes:
> just selects. update test is also very interesting -- the only test I
> did for updates is 'update foo set x=x+1', which was a win for
> btree (20-30% faster typically). perhaps this isn't algorithm induced
> though -- lack of wal logging could actually hurt time to
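The comparison Merlin is quoting can be reconstructed along these lines (the table name, row count, and index names are hypothetical, not the poster's actual script); the "lack of wal logging" remark refers to hash indexes not being WAL-logged in the PostgreSQL versions current at the time:

```sql
-- Hypothetical setup for comparing btree vs. hash on equality
-- lookups and on the quoted 'update foo set x=x+1' test.
CREATE TABLE foo (id integer, x integer);
INSERT INTO foo SELECT g, 0 FROM generate_series(1, 100000) g;

CREATE INDEX foo_id_btree ON foo USING btree (id);
-- CREATE INDEX foo_id_hash ON foo USING hash (id);  -- alternate run

-- Equality probe, the case where hash indexes can help:
SELECT * FROM foo WHERE id = 12345;

-- The update test from the quoted message:
UPDATE foo SET x = x + 1;
```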
Hello Everyone,
I had posted a query in the "GENERAL" category; I am not sure if that was the
correct category to post in.
Please help me understand how to calculate free space in tables and indexes
even after vacuuming and analyzing are performed.
What I understand is that, even if we perform VACUUM ANALY
On Mon, Sep 19, 2011 at 1:53 PM, Claudio Freire wrote:
> On Mon, Sep 19, 2011 at 3:43 PM, Merlin Moncure wrote:
>> To make the test into i/o bound, I change the setrandom from 10 to
>> 1000; this produced some unexpected results. The hash index is
>> pulling about double the tps (~80 vs ~