On Sat, 17 Mar 2012 10:46:00 -0500, dennis jenkins
wrote:
Aleksey, a suggestion: The vast majority of the postgresql wire
protocol compresses well. If your WAN link is not already compressed,
construct a compressed SSH tunnel for the postgresql TCP port over the
WAN link. I've done this when
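A minimal sketch of such a tunnel (host, user, and database names are
placeholders; -C enables SSH compression, -N opens no remote shell):

  # forward local port 5433 through a compressed SSH tunnel to
  # PostgreSQL (port 5432) on the remote master
  ssh -C -N -L 5433:localhost:5432 postgres@master.example.com &

  # point the slave-side connection (or a test psql session) at the tunnel
  psql -h localhost -p 5433 -U slony mydb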
On 17 Mar 2012, at 4:21, Aleksey Tsalolikhin wrote:
> And I still don't know why I have holes in my pages... should I have
> holes
> in my pages?? why only on the slave and not on the master? (the slave has
> 4x the number of pages compared to the master, and there are fewer rows per
> page
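For reference, the page and row counts being compared come from the system
catalog; a sketch, run on both master and slave after ANALYZE (the relation
name is a placeholder):

  -- relpages and reltuples are maintained by VACUUM/ANALYZE
  SELECT relname, relpages, reltuples,
         round(reltuples::numeric / nullif(relpages, 0), 1) AS rows_per_page
  FROM pg_class
  WHERE relname = 'pg_toast_123456';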
On Fri, Mar 16, 2012 at 2:20 PM, Aleksey Tsalolikhin
wrote:
> On Thu, Mar 15, 2012 at 6:43 AM, Aleksey Tsalolikhin
> wrote:
> Our database is about 200 GB. Over the WAN link, the last full sync took 8
> hours; I expect it'll be
> more like 9 or 10 hours this time.
>
Aleksey, a sugges
On Fri, Mar 16, 2012 at 2:03 PM, Steve Crawford
wrote:
>
>
> I *think* you can get away with only sufficient free space to store the
> *new* table and indexes
Yeah; I don't have that much free space. Just 30 GB short. :(
> Depending on your schema and which tables are using space, you might b
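One common way to see which tables are using the space (not necessarily what
Steve had in mind; relkind = 'r' restricts it to ordinary tables):

  -- top ten relations by total size, including indexes and TOAST
  SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
  FROM pg_class
  WHERE relkind = 'r'
  ORDER BY pg_total_relation_size(oid) DESC
  LIMIT 10;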
On Fri, Mar 16, 2012 at 1:20 PM, Aleksey Tsalolikhin
wrote:
> On Thu, Mar 15, 2012 at 6:43 AM, Aleksey Tsalolikhin
> wrote:
>> Is there any way to consolidate the pages on the slave without taking
>> replication offline?
>
> Filip Rembiałkowski suggested: maybe CLUSTER?
>
> Greg Williamson sugg
On 16 Mar 2012, at 20:20, Aleksey Tsalolikhin wrote:
> CLUSTER requires free space at least equal to the sum of the table
> size and the index sizes.
>
> pg_reorg requires space twice the size of the target table and its indexes.
>
> Too bad I can't say "CLUSTER TABLE tablename USING_ARRAY
> /d
I'm coming into this conversation *way* late so forgive me if this has
been hashed out already
On 03/16/2012 12:20 PM, Aleksey Tsalolikhin wrote:
CLUSTER requires free space at least equal to the sum of the table size
and the index sizes.
Although it is not documented in an absolutely clear
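A rough way to estimate the free space CLUSTER needs (the table name is a
placeholder):

  -- heap alone vs. heap plus indexes and TOAST; CLUSTER needs roughly
  -- the total size again as scratch space
  SELECT pg_size_pretty(pg_relation_size('bigtablename'))       AS heap_only,
         pg_size_pretty(pg_total_relation_size('bigtablename')) AS total;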
On Thu, Mar 15, 2012 at 6:43 AM, Aleksey Tsalolikhin
wrote:
> Is there any way to consolidate the pages on the slave without taking
> replication offline?
Filip Rembiałkowski suggested: maybe CLUSTER?
Greg Williamson suggested: pg_reorg
Thank you, Filip and Greg. They would both work IF I h
Filip Rembiałkowski suggested:
>> On Thu, Mar 15, 2012 at 6:43 AM, Aleksey Tsalolikhin
>> wrote:
>>
>>
>> Is there any way to consolidate the pages on the slave without taking
>> replication offline?
>>
>>
> maybe CLUSTER?
>
<...>
>
> Of course events destined to this table will be queued by
On Thu, Mar 15, 2012 at 6:43 AM, Aleksey Tsalolikhin <
atsaloli.t...@gmail.com> wrote:
>
> Is there any way to consolidate the pages on the slave without taking
> replication offline?
>
maybe CLUSTER?
filip@dev=> create table foobar (id serial primary key, load text);
CREATE TABLE
filip@dev=> in
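The session is cut off by the archive; a plausible shape for the rest of the
demo (assumed, not Filip's actual commands) might be:

  -- populate with wide rows, delete most of them, then rewrite the table
  insert into foobar (load) select repeat('x', 1000) from generate_series(1, 100000);
  delete from foobar where id % 10 <> 0;
  cluster foobar using foobar_pkey;  -- the serial PK's index; takes an exclusive lock
  select pg_size_pretty(pg_relation_size('foobar'));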
On Wed, Mar 14, 2012 at 9:57 PM, Scott Marlowe wrote:
> Are you sure you're checking the toast table that goes with whatever
> parent table?
Yep. I find out the relation id of the TOAST table:
"select reltoastrelid from pg_class where relname = 'parent_table_name';"
Find out the relation name
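The two lookups can be combined into one query (the parent table name is a
placeholder):

  -- resolve the TOAST table's name and on-disk size in a single step
  SELECT t.relname AS toast_table,
         pg_size_pretty(pg_relation_size(t.oid)) AS toast_size
  FROM pg_class c
  JOIN pg_class t ON t.oid = c.reltoastrelid
  WHERE c.relname = 'parent_table_name';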
On Wed, Mar 14, 2012 at 10:57 PM, Scott Marlowe wrote:
> If the du -s numbers don't change, or change only a little, then feel free to
> either run a single update while running
>
> watch "df -h /var/lib/where/my/data/dir/lives"
>
> and being ready to hit CTRL-C if you see it running your machine out of
>
On Wed, Mar 14, 2012 at 9:06 PM, Aleksey Tsalolikhin
wrote:
>
> On Wed, Mar 14, 2012 at 7:38 PM, Scott Marlowe
> wrote:
>>
>> The good news is that if the table is
>> bloated, it should be able to just write to the free space in the
>> table that's already there.
>
> Thank you, I got it. The ta
On Wed, Mar 14, 2012 at 8:06 PM, Aleksey Tsalolikhin
wrote:
> Executive summary: Why would the TOAST table on the slave have 4x the
> page count of the master?
Hypothesis: the pages of the TOAST table on the slave have more
unused space in them than the pages of the TOAST table on the master.
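One way to test that hypothesis directly is the pgstattuple contrib module
(available for 8.4), run against the TOAST table on both master and slave;
the relation name is a placeholder:

  -- free_percent is the share of page space holding no tuples at all
  SELECT tuple_percent, dead_tuple_percent, free_percent
  FROM pgstattuple('pg_toast.pg_toast_123456');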
Executive summary: Why would the TOAST table on the slave have 4x the
page count of the master? Is there a way to compact it if I don't
have enough disk space to duplicate the table? How do I prevent this
situation from recurring?
On Wed, Mar 14, 2012 at 7:38 PM, Scott Marlowe wrote:
>
> The g
On Wed, Mar 14, 2012 at 8:24 PM, Aleksey Tsalolikhin
wrote:
> On Tue, Mar 13, 2012 at 5:14 AM, Vick Khera wrote:
>> I'll bet what happened was postgres re-wrote your table for you,
>> effectively doing a compaction. You can get a similar effect by doing
>> an alter table and "changing" an INTEGER
On Tue, Mar 13, 2012 at 5:14 AM, Vick Khera wrote:
> I'll bet what happened was postgres re-wrote your table for you,
> effectively doing a compaction. You can get a similar effect by doing
> an alter table and "changing" an INTEGER field to be INTEGER.
> Postgres does not optimize that to a no-op,
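The rewrite Vick describes looks like this (table and column names are
placeholders); on 8.4 it rewrites the whole table, which compacts it but
takes an exclusive lock and needs room for the new copy:

  -- "changing" a column to the type it already has forces a full rewrite
  ALTER TABLE bigtablename ALTER COLUMN some_int_col TYPE integer;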
On Mon, Mar 12, 2012 at 3:06 AM, Nur Hidayat wrote:
> I once had the same problem. In my case it's because most of my tables use
> the text datatype.
> When I changed the field type to character varying(1000), the database size
> dropped significantly
I'll bet what happened was postgres re-wrote your ta
On 03/12/12 2:28 PM, Nur Hidayat wrote:
If I didn't change the data type to character varying, vacuuming the database
didn't reduce the database size
changing the data type required every tuple to get rewritten. a
vacuum full or a cluster likely would have done the same or better
reduction
Subject: Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave.
How to compact it?
On 12 March 2012 09:20, Nur Hidayat wrote:
> FYI, after I changed the text field into character varying, I vacuumed the whole
> database, resulting in a much smaller database size
What I think happened
> To: John R Pierce;
> Reply-To: hidayat...@gmail.com
> Subject: Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave.
> How to compact it?
>
> Yes, I am aware of that, but that's the fact I'm facing
> Right now I'm happy enough my system ru
To: John R Pierce;
Reply-To: hidayat...@gmail.com
Subject: Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave. How
to compact it?
Yes, I am aware of that, but that's the fact I'm facing
Right now I'm happy enough my system runs well without eating up my drive :)
I'
-----Original Message-----
From: John R Pierce
Sender: pgsql-general-owner@postgresql.org
Date: Mon, 12 Mar 2012 00:39:28
To:
Subject: Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave.
How to compact it?
On 03/12/12 12:06 AM, Nur Hidayat wrote:
>
> I once had the same problem. In
On 03/12/12 12:06 AM, Nur Hidayat wrote:
I once had the same problem. In my case it's because most of my tables
use the text datatype.
When I changed the field type to character varying(1000), the database size
dropped significantly
Unfortunately, I haven't investigated more, but it looks like how
Hi Aleksey,
I once had the same problem. In my case it's because most of my tables
use the text datatype.
When I changed the field type to character varying(1000), the database size
dropped significantly
Unfortunately, I haven't investigated more, but it looks like it's how postgres
stores data
Regards,
Nur
Dear Scott,
When I pg_dump -t bigtablename on the Slony slave, the dump file is
212G in size.
I am unable to perform the same test on the master until I get a
maintenance window, which may not be for a few weeks, as it does
impact our production system when we dump from the master (the web ap
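For reference, the dump-size comparison can also be done without writing a
212 GB file (database and table names are placeholders); similar byte counts
on both sides would point at page-level bloat rather than extra data:

  # dump just the one table in plain format and count bytes
  pg_dump -t bigtablename mydb | wc -c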
On Thu, Mar 8, 2012 at 1:10 PM, Aleksey Tsalolikhin
wrote:
>> On Thu, Mar 8, 2012 at 12:10 PM, Aleksey Tsalolikhin
>> wrote:
>>> On Wed, Mar 7, 2012 at 10:05 AM, Aleksey Tsalolikhin
>>> wrote:
We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x
My biggest table
> On Thu, Mar 8, 2012 at 12:10 PM, Aleksey Tsalolikhin
> wrote:
>> On Wed, Mar 7, 2012 at 10:05 AM, Aleksey Tsalolikhin
>> wrote:
>>> We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x
>>>
>>>
>>> My biggest table measures 154 GB on the origin, and 533 GB on
>>> the slave.
>>>
>>
On Thu, Mar 8, 2012 at 12:10 PM, Aleksey Tsalolikhin
wrote:
> On Wed, Mar 7, 2012 at 10:05 AM, Aleksey Tsalolikhin
> wrote:
>> We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x
>>
>>
>> My biggest table measures 154 GB on the origin, and 533 GB on
>> the slave.
>>
>> Why is my
On Wed, Mar 7, 2012 at 10:05 AM, Aleksey Tsalolikhin
wrote:
> We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x
>
>
> My biggest table measures 154 GB on the origin, and 533 GB on
> the slave.
>
> Why is my slave bigger than my master? How can I compact it, please?
On Wed, Ma
On Wed, Mar 7, 2012 at 10:05 AM, Aleksey Tsalolikhin
wrote:
> We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x
>
> The origin database "data/base" directory is 197 GB in size.
>
> The slave database "data/base" directory is 562 GB in size and is
> over 75% filesystem utilizatio
Dear Joshua,
You wrote:
> Try disabling replication on that table and clustering the table and then
> re-enabling
> replication.
...
> Alternatively you could disable replication on that table, truncate the
> table, and then
> re-enable replication for that table. A concern would be that i
We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x
The origin database "data/base" directory is 197 GB in size.
The slave database "data/base" directory is 562 GB in size and is
over 75% filesystem utilization which has set off the "disk free" siren.
My biggest table* measures
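For reference, the directory sizes quoted here are typically measured at the
filesystem level (the data directory path is a placeholder):

  du -sh /var/lib/pgsql/data/base   # per-cluster database storage
  df -h /var/lib/pgsql/data         # overall filesystem utilization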