In later kernels these have been renamed:
Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-32-generic x86_64)
$ sysctl -a|grep dirty
vm.dirty_background_ratio = 5
vm.dirty_background_bytes = 0
vm.dirty_ratio = 10
vm.dirty_bytes = 0
vm.dirty_writeback_centisecs = 500
vm.dirty_expire_centisecs = 30
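To persist any of these across reboots on Ubuntu, something like this should work (the value 5 is only an example):
$ sudo sysctl -w vm.dirty_background_ratio=5
$ echo 'vm.dirty_background_ratio = 5' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p    # reload /etc/sysctl.conf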
On Wed, Nov 28, 2012 at 8:28 PM, Jeff Davis wrote:
>
> The main problem with a long-running delete or update transaction is
> that the dead tuples (deleted tuples or the old version of an updated
> tuple) can't be removed until the transaction finishes. That can cause
> temporary "bloat", but 1.5M
Hi, I have started to implement your suggestions. I have run into a small error
so far, though. The "vm.dirty_writeback_ratio = 1" command returns:
error: "vm.dirty_writeback_ratio" is an unknown key
I'm on Ubuntu 12.04.
On 28 Nov 2012, at 17:54, Shaun Thomas wrote:
> On 11/28/2012 10:19 AM, Niels Kri
On Tue, 2012-11-27 at 16:04 -0600, Mike Blackwell wrote:
> I need to delete about 1.5 million records from a table and reload it
> in one transaction. The usual advice when loading with inserts seems
> to be group them into transactions of around 1k records. Committing
> at that point would leave
On Tue, 2012-11-27 at 22:16 -0300, Claudio Freire wrote:
> Updates are faster if batched, if your business logic allows it,
> because they create less bloat and more opportunities for HOT
> updates. I don't think it applies to inserts, though, and I
> haven't heard it either.
Huge upd
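For illustration, batching by key range keeps each transaction short; a rough sketch (table and column names are made up):
#!/bin/sh
# Each psql -c runs in its own transaction (autocommit), so every
# batch commits quickly and keeps the bloat window short.
for start in $(seq 0 1000 99000); do
  psql -d mydb -c "UPDATE adverts SET archived = true
                   WHERE id >= $start AND id < $((start + 1000));"
done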
On Wed, Nov 28, 2012 at 12:37 PM, John Lister wrote:
> Hi, I've just been benchmarking a new box I've got and running pgbench
> yields what I thought was a slow tps count. It is difficult to find
> comparisons online of other benchmark results, I'd like to see if I have the
> box set up reasonably
Hi, I've just been benchmarking a new box I've got and running pgbench
yields what I thought was a slow tps count. It is difficult to find
comparisons online of other benchmark results, I'd like to see if I have
the box set up reasonably well.
I know Oracle et al. prohibit benchmark results, bu
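For reference, a typical pgbench session looks something like this (scale factor and client counts are arbitrary examples):
$ createdb pgbench
$ pgbench -i -s 100 pgbench          # initialize with scale factor 100
$ pgbench -c 8 -j 4 -T 60 pgbench    # 8 clients, 4 threads, 60 seconds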
I recommend SymmetricDS - http://www.symmetricds.org
-----Original Message-----
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of suhas.basavaraj12
Sent: Wednesday, November 28, 2012 1:12 PM
To: pgsql-performance@postgresql.org
Subject: [PE
On Wed, Nov 28, 2012 at 3:12 PM, suhas.basavaraj12 wrote:
> We are planning to migrate our production databases to different
> servers. We have around 8 servers with 8 different clusters. We are
> planning to shuffle databases, make them into 7 clusters, and migrate
> to new remote servers.
> We c
Hi,
We are planning to migrate our production databases to different
servers. We have around 8 servers with 8 different clusters. We are
planning to shuffle databases, make them into 7 clusters, and migrate
to new remote servers.
We cannot use streaming replication as we are migrating different
On Wed, Nov 28, 2012 at 4:41 AM, Niels Kristian Schjødt wrote:
>
> So my main concern is actually about the cars table, since this one currently
> has a lot of columns (151 - I expect that's quite a lot?), and a lot of data
> (4 mil. rows, and growing). Now you might start by thinking, this could
On 11/28/2012 11:44 AM, Niels Kristian Schjødt wrote:
Thanks a lot - on the server I already have one additional 250GB SSD
that I don't use for anything at the moment.
Good. An SSD would actually be better for your data, as it follows
more random access patterns, and xlogs are more se
On 11/28/2012 10:19 AM, Niels Kristian Schjødt wrote:
https://rpm.newrelic.com/public/charts/h2dtedghfsv
Doesn't this answer your question?
That iowait is crushing your server into the ground. It's no surprise
updates are taking several seconds. That update you sent us *should*
execute on the
max_connections = 1000 looks bad... why not put a pooler in place?
Cheers
Bèrto
On 28 November 2012 16:19, Niels Kristian Schjødt wrote:
> max_connections = 1000
--
==
If Pac-Man had affected us as kids, we'd all be running around in a
darkened room munching pills and
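As a sketch of the pooler idea, a minimal PgBouncer setup that fronts 1000 clients with a pool of 20 server connections (paths and sizes are illustrative):
$ cat > /etc/pgbouncer/pgbouncer.ini <<'EOF'
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = trust
auth_file = /etc/pgbouncer/userlist.txt  ; one "username" "" entry per line
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
EOF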
Okay guys,
Thanks for all the great help and advice already! Let me just clear a few
things up, to make my question a little easier to answer :-)
Now my site is a search engine for used cars - not just a car shop with a few
hundred cars.
The update query you are looking at is an update that is executed on
Commit records are written to disk after each transaction, so transaction
size has an impact on performance.
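A quick sketch of why (hypothetical table): one transaction means one commit flush instead of one per row.
psql -d mydb <<'SQL'
CREATE TABLE IF NOT EXISTS items (n int);
BEGIN;
INSERT INTO items SELECT g FROM generate_series(1, 10000) g;
COMMIT;  -- a single commit flush covers all 10000 rows
SQL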
Date: Wed, 28 Nov 2012 09:18:20 -0600
Subject: Re: [PERFORM] Savepoints in transactions for speed?
From: mike.blackw...@rrd.com
To: klaussfre...@gmail.com
CC: pgsql-performance@postgresql.org
On Tue, Nov 27, 2012 at 7:16 PM, Claudio Freire wrote:
> Updates are faster if batched, if your business logic allows it,
> because they create less bloat and more opportunities for HOT
> updates. I don't think it applies to inserts, though, and I
> haven't heard it either.
>
Ah. Th
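For anyone skimming the subject line, the savepoint mechanics look like this; note each savepoint adds some overhead of its own (self-contained sketch):
psql -d mydb <<'SQL'
BEGIN;
CREATE TEMP TABLE items (id int PRIMARY KEY);
INSERT INTO items VALUES (1);
SAVEPOINT batch;
INSERT INTO items VALUES (1);   -- fails: duplicate key
ROLLBACK TO SAVEPOINT batch;    -- undoes only the failed insert
INSERT INTO items VALUES (2);
COMMIT;                         -- rows 1 and 2 are kept
SQL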
I assume that SQL databases (banks? telecom?) can handle a used car shop.
No need for an unstructured data tool.
> +1, SQL databases have a limited number of inserts/updates per second.
> Even with high-end hardware you won't have more than XXX operations per
> second. As Thomas said, you shoul
Niels Kristian Schjødt wrote:
> PS. I'm on postgres 9.2 on a server with 32gb ram, 8 cores and
> two 3T disks in a software raid 1 setup.
In addition to the excellent advice from Shaun, I would like to
point out a few other things.
One query runs on one core. In a test of a single query, the othe
Let me be the devil's advocate here :)
First of all, even if you read any basics about normalization, don't take
it to heart :) Think.
Know that each normalization/denormalization step has its pros and cons.
E.g. in the NoSQL world they often don't normalize much.
What's interesting with PostgreSQL is t
Niels Kristian Schjødt wrote:
> So my main concern is actually about the cars table, since this
> one currently has a lot of columns (151 - I expect that's quite a
> lot?),
That's pretty wide, but not outrageous.
> and a lot of data (4 mil. rows, and growing).
That's not a big deal. It's not unu
On 28.11.2012 at 15:07, Shaun Thomas wrote:
> On 11/28/2012 06:57 AM, Niels Kristian Schjødt wrote:
>
> Before I go crazy, here... you really need to tell us what "not enough"
> means. You didn't provide an explain analyze, so we don't know what your
> actual performance is. But I have my suspici
On 11/28/2012 06:57 AM, Niels Kristian Schjødt wrote:
Before I go crazy, here... you really need to tell us what "not enough"
means. You didn't provide an explain analyze, so we don't know what your
actual performance is. But I have my suspicions.
So as you can see, it's already pretty optimize
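For the record, capturing one is a single command; a sketch with a hypothetical stand-in query (the BEGIN/ROLLBACK undoes the update that ANALYZE actually executes):
$ psql -d mydb <<'SQL'
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
    UPDATE adverts SET seen_at = now() WHERE id = 12345;
ROLLBACK;   -- ANALYZE really ran the update, so undo it
SQL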
Hi Kristian,
> " I can't see why it would make sense to put that into a separate table and
> join in the values "
> You don't normalize for performance. People DEnormalize for performance.
Yes. In short, you seem more of a developer than an RDBMS guy. This is
not a personal fault, but it's a *very
In relation to question 2, I read somewhere in the documentation that
because of MVCC, the whole row has to be rewritten even though I just
update one single column in that row. Hence if the table is wider (has
more columns), the update will be slower. Does this match your
understanding?
No.
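Whatever the performance conclusion, the row rewrite itself is easy to observe (self-contained sketch):
psql <<'SQL'
CREATE TEMP TABLE t (id int, note text);
INSERT INTO t VALUES (1, 'original');
SELECT ctid, * FROM t;                  -- e.g. (0,1)
UPDATE t SET note = 'changed' WHERE id = 1;
SELECT ctid, * FROM t;                  -- new ctid: a whole new row version
SQL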
Thanks for the advice.
Currently I see a lot of I/O related to updates/inserts, so I'm trying to
track these down first. In relation to question 2, I read somewhere in the
documentation that because of MVCC, the whole row has to be rewritten even
though I just update one single column in
Niels,
" I can't see why it would make sense to put that into a separate table and
join in the values "
You don't normalize for performance. People DEnormalize for performance.
Questions: (AFAIK)
1) This is a road to disaster. Get yourself a book on RDBMS design from, for
example, Celko. Do
Celko. Do
Hi, I have these update queries that run very often and take too long for
us to reach the throughput we are aiming at. However, the update query is
very simple, and I can't figure out any way to improve the situation.
The query looks like this:
UPDATE "adverts" SET "last_obser
Hi,
I'm on the hunt for some solid knowledge on a theoretical level about the
performance of postgresql. My question is regarding best practices, and how
architectural decisions might influence the performance. First a little
background:
The setup:
I have a database which holds information on
suhas.basavaraj12 wrote:
> Can I delete the content of this folder? I have observed a couple of
> times that this folder got cleaned automatically.
These files are in use and you should not delete them.
If you need them to go right now, cancel the queries that
create temporary files.
If there are any
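A sketch of how to find and stop them; pg_stat_database has temp-file counters as of 9.2, and the pid below is hypothetical:
psql -d postgres <<'SQL'
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
ORDER BY temp_bytes DESC;          -- which database is spilling to disk
SELECT pg_cancel_backend(12345);   -- cancel the offending query by pid
SQL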
Hi,
Can I delete the content of this folder? I have observed a couple of times
that this folder got cleaned automatically.
Which backend process deletes the data from this folder? Any idea?
Regards
Suhas
suhas.basavaraj12 wrote:
> This folder (temporary tablespace) is getting filled and its size
> increases during the day when there are lots of sorting operations. But
> after some time the data in it is deleted automatically. Can anyone
> explain what is going on?
Must be temporary files created by the sor