On 6/30/2010 2:21 PM, Jignesh Shah wrote:
If the underlying WAL disk is SSD then it seems I can get
synchronous_commit=on to work faster than
synchronous_commit=off..
The first explanation that pops to mind is that synchronous_commit is
writing all the time, which doesn't have the same sort
On Tue, Jun 29, 2010 at 9:39 PM, Bruce Momjian wrote:
> Jignesh Shah wrote:
>> On Tue, Jun 29, 2010 at 2:45 PM, Bruce Momjian wrote:
>> > Tom Lane wrote:
>> >> Bruce Momjian writes:
>> >> >>> I asked on IRC and was told it is true, and looking at the C code it
>> >> >>> looks true.  What synchronous_commit
On 6/30/10 9:42 AM, Dave Crooke wrote:
I haven't jumped in yet on this thread, but here goes
If you're really looking for query performance, then any database which
is designed with reliability and ACID consistency in mind is going to
inherently have some mis-fit features.
Some other ideas to consider, depending on your query mix
I haven't jumped in yet on this thread, but here goes
If you're really looking for query performance, then any database which is
designed with reliability and ACID consistency in mind is going to
inherently have some mis-fit features.
Some other ideas to consider, depending on your query mix
Brad Nicholson wrote:
> > > > Ah, very good point.  I have added a C comment to clarify why this is
> > > > the current behavior; attached and applied.
> > > >
> > > > --
> > > >   Bruce Momjian                          http://momjian.us
> > > >   EnterpriseDB                           http://enterprisedb.com
> >
On Tue, 2010-06-29 at 21:39 -0400, Bruce Momjian wrote:
> Jignesh Shah wrote:
> > On Tue, Jun 29, 2010 at 2:45 PM, Bruce Momjian wrote:
> > > Tom Lane wrote:
> > >> Bruce Momjian writes:
> > >> >>> I asked on IRC and was told it is true, and looking at the C code it
> > >> >>> looks true.  What synchronous_commit
Jignesh Shah wrote:
> On Tue, Jun 29, 2010 at 2:45 PM, Bruce Momjian wrote:
> > Tom Lane wrote:
> >> Bruce Momjian writes:
> >> >>> I asked on IRC and was told it is true, and looking at the C code it
> >> >>> looks true.  What synchronous_commit = false does is to delay writing
> >> >>> the wal
On Tue, Jun 29, 2010 at 2:45 PM, Bruce Momjian wrote:
> Tom Lane wrote:
>> Bruce Momjian writes:
>> >>> I asked on IRC and was told it is true, and looking at the C code it
>> >>> looks true.  What synchronous_commit = false does is to delay writing
>> >>> the wal buffers to disk and fsyncing them
Tom Lane wrote:
> Bruce Momjian writes:
> >>> I asked on IRC and was told it is true, and looking at the C code it
> >>> looks true.  What synchronous_commit = false does is to delay writing
> >>> the wal buffers to disk and fsyncing them, not just fsync, which is
> >>> where the commit loss due to db process crash
Bruce Momjian writes:
>>> I asked on IRC and was told it is true, and looking at the C code it
>>> looks true.  What synchronous_commit = false does is to delay writing
>>> the wal buffers to disk and fsyncing them, not just fsync, which is
>>> where the commit loss due to db process crash comes from
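The distinction Bruce describes can be sketched with a toy model. The `ToyWAL` class below is a hypothetical illustration, not PostgreSQL code: with synchronous_commit off, the commit record stays in in-process WAL buffers until the background WAL writer flushes it, so a crash of the database process alone (no OS crash needed) can discard transactions that were already reported as committed.

```python
# Toy model (not PostgreSQL internals): why synchronous_commit = off can
# lose commits on a *database process* crash, not only an OS crash.
class ToyWAL:
    def __init__(self, synchronous_commit):
        self.synchronous_commit = synchronous_commit
        self.wal_buffers = []   # commit records still in process memory
        self.on_disk = []       # records that were written and fsync()ed

    def commit(self, xid):
        self.wal_buffers.append(xid)
        if self.synchronous_commit:
            self.flush()        # flush before reporting success to the client

    def flush(self):
        # roughly what the WAL writer does in the background every ~200 ms
        self.on_disk.extend(self.wal_buffers)
        del self.wal_buffers[:]

    def process_crash(self):
        # process memory vanishes; anything only in wal_buffers is gone
        lost, self.wal_buffers = list(self.wal_buffers), []
        return lost

wal = ToyWAL(synchronous_commit=False)
wal.commit(1)
wal.commit(2)
print(wal.process_crash())   # -> [1, 2]: both "committed" transactions lost
```

With synchronous_commit on, `commit()` flushes before returning, so `process_crash()` finds nothing to lose; fsync=off, by contrast, would only remove the fsync step, leaving the write() to the OS, which is why it loses data only on an OS crash.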
Robert Haas wrote:
> On Tue, Jun 29, 2010 at 9:32 AM, Bruce Momjian wrote:
> > Robert Haas wrote:
> >> On Mon, Jun 28, 2010 at 5:57 PM, Bruce Momjian wrote:
> >> >> The patch also documents that synchronous_commit = false has
> >> >> potential committed transaction loss from a database crash (as
On Tue, Jun 29, 2010 at 9:32 AM, Bruce Momjian wrote:
> Robert Haas wrote:
>> On Mon, Jun 28, 2010 at 5:57 PM, Bruce Momjian wrote:
>> >> The patch also documents that synchronous_commit = false has
>> >> potential committed transaction loss from a database crash (as well as
>> >> an OS crash).
>
Bruce Momjian wrote:
> What synchronous_commit = false does is to delay writing
> the wal buffers to disk and fsyncing them, not just fsync
Ah, that answers the question Josh Berkus asked here:
http://archives.postgresql.org/pgsql-performance/2010-06/msg00285.php
(which is something I was
Robert Haas wrote:
> On Mon, Jun 28, 2010 at 5:57 PM, Bruce Momjian wrote:
> >> The patch also documents that synchronous_commit = false has
> >> potential committed transaction loss from a database crash (as well as
> >> an OS crash).
>
> Is this actually true?
I asked on IRC and was told it is
On Mon, Jun 28, 2010 at 5:57 PM, Bruce Momjian wrote:
>> The patch also documents that synchronous_commit = false has
>> potential committed transaction loss from a database crash (as well as
>> an OS crash).
Is this actually true?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
Bruce Momjian wrote:
> Tom Lane wrote:
> > Dimitri Fontaine writes:
> > > Josh Berkus writes:
> > >> a) Eliminate WAL logging entirely
> > >> b) Eliminate checkpointing
> > >> c) Turn off the background writer
> > >> d) Have PostgreSQL refuse to restart after a crash and instead call an
> > >> external script (for reprovisioning)
2010/6/24 Josh Berkus :
>
>> this is similar to MySQL's memory tables. Personally, I don't see any
>> practical sense in doing the same work on PostgreSQL now, when memcached exists.
>
> Thing is, if you only have one table (say, a sessions table) which you
> don't want logged, you don't necessarily want to fire up a 2nd software
> application
> this is similar to MySQL's memory tables. Personally, I don't see any
> practical sense in doing the same work on PostgreSQL now, when memcached exists.
Thing is, if you only have one table (say, a sessions table) which you
don't want logged, you don't necessarily want to fire up a 2nd software
application
2010/6/24 A.M. :
>
> On Jun 24, 2010, at 4:01 PM, Pavel Stehule wrote:
>
>> 2010/6/24 Joshua D. Drake :
>>> On Thu, 2010-06-24 at 21:14 +0200, Pavel Stehule wrote:
2010/6/24 Josh Berkus :
>
>> And I'm also planning to implement unlogged tables, which have the
>> same contents for all sessions
On Jun 24, 2010, at 4:01 PM, Pavel Stehule wrote:
> 2010/6/24 Joshua D. Drake :
>> On Thu, 2010-06-24 at 21:14 +0200, Pavel Stehule wrote:
>>> 2010/6/24 Josh Berkus :
> And I'm also planning to implement unlogged tables, which have the
> same contents for all sessions but are not WAL-logged
2010/6/24 Joshua D. Drake :
> On Thu, 2010-06-24 at 21:14 +0200, Pavel Stehule wrote:
>> 2010/6/24 Josh Berkus :
>> >
>> >> And I'm also planning to implement unlogged tables, which have the
>> >> same contents for all sessions but are not WAL-logged (and are
>> >> truncated on startup).
>>
>> this
On Thu, 2010-06-24 at 21:14 +0200, Pavel Stehule wrote:
> 2010/6/24 Josh Berkus :
> >
> >> And I'm also planning to implement unlogged tables, which have the
> >> same contents for all sessions but are not WAL-logged (and are
> >> truncated on startup).
>
> this is similar to MySQL's memory tables. Personally
2010/6/24 Josh Berkus :
>
>> And I'm also planning to implement unlogged tables, which have the
>> same contents for all sessions but are not WAL-logged (and are
>> truncated on startup).
this is similar to MySQL's memory tables. Personally, I don't see any
practical sense in doing the same work on PostgreSQL now, when memcached exists.
> And I'm also planning to implement unlogged tables, which have the
> same contents for all sessions but are not WAL-logged (and are
> truncated on startup).
Yep. And it's quite possible that this will be adequate for most users.
And it's also possible that the extra CPU which Robert isn't get
On Thu, Jun 24, 2010 at 4:40 AM, Rob Wultsch wrote:
> On Fri, Jun 18, 2010 at 1:55 PM, Josh Berkus wrote:
>>
>>> It must be a setting, not a version.
>>>
>>> For instance suppose you have a session table for your website and a
>>> users table.
>>>
>>> - Having ACID on the users table is of course a must ;
On Fri, Jun 18, 2010 at 1:55 PM, Josh Berkus wrote:
>
>> It must be a setting, not a version.
>>
>> For instance suppose you have a session table for your website and a
>> users table.
>>
>> - Having ACID on the users table is of course a must ;
>> - for the sessions table you can drop the "D"
>
>
Tom Lane writes:
> The problem with a system-wide no-WAL setting is it means you can't
> trust the system catalogs after a crash. Which means you are forced to
> use initdb to recover from any crash, in return for not a lot of savings
> (for typical usages where there's not really much churn in t
Tom Lane wrote:
> Dave Page writes:
> > On Wed, Jun 23, 2010 at 9:25 PM, Robert Haas wrote:
> >> I don't think we need a system-wide setting for that.  I believe that
> >> the unlogged tables I'm working on will handle that case.
>
> > Aren't they going to be truncated at startup? If the entire
Dave Page writes:
> On Wed, Jun 23, 2010 at 9:25 PM, Robert Haas wrote:
>> I don't think we need a system-wide setting for that. I believe that
>> the unlogged tables I'm working on will handle that case.
> Aren't they going to be truncated at startup? If the entire system is
> running without
Robert Haas wrote:
> On Wed, Jun 23, 2010 at 3:37 PM, Bruce Momjian wrote:
> > Tom Lane wrote:
> >> Dimitri Fontaine writes:
> >> > Josh Berkus writes:
> >> >> a) Eliminate WAL logging entirely
> >
> > If we eliminate WAL logging, that means a reinstall is required for even
> > a postmaster crash
On Wed, Jun 23, 2010 at 9:25 PM, Robert Haas wrote:
> On Wed, Jun 23, 2010 at 3:37 PM, Bruce Momjian wrote:
>> Tom Lane wrote:
>>> Dimitri Fontaine writes:
>>> > Josh Berkus writes:
>>> >> a) Eliminate WAL logging entirely
>>
>> If we eliminate WAL logging, that means a reinstall is required for
On Wed, Jun 23, 2010 at 3:37 PM, Bruce Momjian wrote:
> Tom Lane wrote:
>> Dimitri Fontaine writes:
>> > Josh Berkus writes:
>> >> a) Eliminate WAL logging entirely
>
> If we eliminate WAL logging, that means a reinstall is required for even
> a postmaster crash, which is a new non-durable behavior.
Pavel Stehule wrote:
> 2010/6/23 Bruce Momjian :
> > Tom Lane wrote:
> >> Dimitri Fontaine writes:
> >> > Josh Berkus writes:
> >> >> a) Eliminate WAL logging entirely
> >
> > If we eliminate WAL logging, that means a reinstall is required for even
> > a postmaster crash, which is a new non-durable behavior.
2010/6/23 Bruce Momjian :
> Tom Lane wrote:
>> Dimitri Fontaine writes:
>> > Josh Berkus writes:
>> >> a) Eliminate WAL logging entirely
>
> If we eliminate WAL logging, that means a reinstall is required for even
> a postmaster crash, which is a new non-durable behavior.
>
> Also, we just added wal_level = minimal
Tom Lane wrote:
> Dimitri Fontaine writes:
> > Josh Berkus writes:
> >> a) Eliminate WAL logging entirely
> >> b) Eliminate checkpointing
> >> c) Turn off the background writer
> >> d) Have PostgreSQL refuse to restart after a crash and instead call an
> >> external script (for reprovisioning)
>
Tom Lane wrote:
> Dimitri Fontaine writes:
> > Josh Berkus writes:
> >> a) Eliminate WAL logging entirely
If we eliminate WAL logging, that means a reinstall is required for even
a postmaster crash, which is a new non-durable behavior.
Also, we just added wal_level = minimal, which might end up
On Thu, Jun 17, 2010 at 1:29 PM, Josh Berkus wrote:
> a) Eliminate WAL logging entirely
In addition to global temporary tables, I am also planning to
implement unlogged tables, which are, precisely, tables for which no
WAL is written. On restart, any such tables will be truncated. That
should g
On 6/18/10 2:15 AM, Matthew Wakeling wrote:
> I'd like to point out the costs involved in having a whole separate
> "version" of Postgres that has all this safety switched off. Package
> managers will not thank anyone for having to distribute another version
> of the system, and woe betide the user
> It must be a setting, not a version.
>
> For instance suppose you have a session table for your website and a
> users table.
>
> - Having ACID on the users table is of course a must ;
> - for the sessions table you can drop the "D"
You're trying to solve a different use-case than the one I am
I'd like to point out the costs involved in having a whole separate
"version"
It must be a setting, not a version.
For instance suppose you have a session table for your website and a users
table.
- Having ACID on the users table is of course a must ;
- for the sessions table you can dro
Dimitri Fontaine wrote:
Well I guess I'd prefer a per-transaction setting
Not possible, as many others have said. As soon as you make an unsafe
transaction, all the other transactions have nothing to rely on.
On Thu, 17 Jun 2010, Pierre C wrote:
A per-table (or per-index) setting makes more
Josh Berkus writes:
>> (a) and (d) are probably simple, if by "reprovisioning" you mean
>> "rm -rf $PGDATA; initdb".
> Exactly. Followed by "scp database_image". Or heck, just replacing the
> whole VM.
Right, that would work. I don't think you really need to implement that
inside Postgres. I
> Well I guess I'd prefer a per-transaction setting, allowing to bypass
> WAL logging and checkpointing.
Not even conceivable. For this to work, we're talking about the whole
database installation. This is only a set of settings for a database
*server* which is considered disposable and replaceable
Well I guess I'd prefer a per-transaction setting, allowing one to bypass
WAL logging and checkpointing. I'm not sure that forcing the backend to
take care of writing the data itself is a good thing, but if you say so.
Well if the transaction touches a system catalog it better be WAL-logged...
A per-table (or per-index) setting
Josh Berkus wrote:
a) Eliminate WAL logging entirely
c) Turn off the background writer
Note that if you turn off full_page_writes and set
bgwriter_lru_maxpages=0, you'd get a substantial move in both these
directions without touching any code. Would help prove those as useful
directions to
Dimitri Fontaine writes:
> Josh Berkus writes:
>> a) Eliminate WAL logging entirely
>> b) Eliminate checkpointing
>> c) Turn off the background writer
>> d) Have PostgreSQL refuse to restart after a crash and instead call an
>> external script (for reprovisioning)
> Well I guess I'd prefer a per-transaction setting
Hi,
Josh Berkus writes:
> a) Eliminate WAL logging entirely
> b) Eliminate checkpointing
> c) Turn off the background writer
> d) Have PostgreSQL refuse to restart after a crash and instead call an
> external script (for reprovisioning)
Well I guess I'd prefer a per-transaction setting, allowing one to bypass
WAL logging and checkpointing
Especially as, in repeated tests, PostgreSQL with persistence turned off
is just as fast as the fastest nondurable NoSQL database. And it has a
LOT more features.
An option to completely disable WAL for such use cases would make it a lot
faster, especially in the case of heavy concurrent
All,
So, I've been discussing this because using PostgreSQL on the caching
layer has become more common than I think most people realize. Jonathan
is one of 4 companies I know of who are doing this, and with the growth
of Hadoop and other large-scale data-processing technologies, I think
demand
On Jun 14, 7:14 pm, "jgard...@jonathangardner.net"
wrote:
> We have a fairly unique need for a local, in-memory cache. This will
> store data aggregated from other sources. Generating the data only
> takes a few minutes, and it is updated often. There will be some
> fairly expensive queries of arbitrary complexity
We have a fairly unique need for a local, in-memory cache. This will
store data aggregated from other sources. Generating the data only
takes a few minutes, and it is updated often. There will be some
fairly expensive queries of arbitrary complexity run at a fairly high
rate. We're looking for high
I'm not surprised that Python add is so slow, but I am surprised that
I didn't remember it was... ;-)
it's not the add(), it's the time.time()...
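The point above, that the apparent slowness of the Python add() was really the cost of calling time.time() around it, is easy to check with a sketch like this (the function names and iteration count here are illustrative, and the printed numbers are machine-dependent):

```python
from time import time

def per_call_cost(fn, n=200000):
    """Average seconds per call of fn over n calls."""
    t0 = time()
    for _ in range(n):
        fn()
    return (time() - t0) / n

counter = [0]
def add():
    counter[0] += 1   # the "work" being measured

cost_add = per_call_cost(add)      # cost of the increment itself
cost_clock = per_call_cost(time)   # cost of the timestamp call used to measure it
print("add: %.0f ns/call, time(): %.0f ns/call"
      % (cost_add * 1e9, cost_clock * 1e9))
```

Any benchmark that calls time.time() once per iteration inflates the measured per-operation cost by roughly cost_clock, which is why timing whole batches gives more honest numbers.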
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mail
On 6/16/10 12:00 PM, Josh Berkus wrote:
* fsync=off => 5,100
* fsync=off and synchronous_commit=off => 5,500
Now, this *is* interesting ... why should synch_commit make a difference
if fsync is off?
Anyone have any ideas?
I found that pgbench has "noise" of about 20% (I posted about this
On Wed, Jun 16, 2010 at 12:51 AM, Pierre C wrote:
>
> Have you tried connecting using a UNIX socket instead of a TCP socket on
> localhost ? On such very short queries, the TCP overhead is significant.
>
Unfortunately, this isn't an option for my use case. Carbonado only
supports TCP connections.
controlled by synchronous_commit parameter. guessing here...
> Date: Wed, 16 Jun 2010 12:19:20 -0700
> Subject: Re: [PERFORM] PostgreSQL as a local in-memory cache
> From: jgard...@jonathangardner.net
> To: j...@agliodbs.com
> CC: pgsql-performance@postgresql.org
>
> On
On Wed, Jun 16, 2010 at 12:00 PM, Josh Berkus wrote:
>
>> * fsync=off => 5,100
>> * fsync=off and synchronous_commit=off => 5,500
>
> Now, this *is* interesting ... why should synch_commit make a difference
> if fsync is off?
>
> Anyone have any ideas?
>
I may have stumbled upon this by my ignorance
On Wed, Jun 16, 2010 at 4:22 AM, Pierre C wrote:
>
> import psycopg2
> from time import time
> conn = psycopg2.connect(database='peufeu')
> cursor = conn.cursor()
> cursor.execute("CREATE TEMPORARY TABLE test (data int not null)")
> conn.commit()
> cursor.execute("PREPARE ins AS INSERT INTO test VALUES ($1)")
On Wed, Jun 16, 2010 at 1:27 AM, Greg Smith wrote:
>
> I normally just write little performance test cases in the pgbench scripting
> language, then I get multiple clients and (in 9.0) multiple driver threads
> all for free.
>
See, this is why I love these mailing lists. I totally forgot about
pgbench
> * fsync=off => 5,100
> * fsync=off and synchronous_commit=off => 5,500
Now, this *is* interesting ... why should synch_commit make a difference
if fsync is off?
Anyone have any ideas?
> tmpfs, WAL on same tmpfs:
> * Default config: 5,200
> * full_page_writes=off => 5,200
> * fsync=off => 5,25
Excerpts from jgard...@jonathangardner.net's message of Wed Jun 16 02:30:30
-0400 2010:
> NOTE: If I do one giant commit instead of lots of littler ones, I get
> much better speeds for the slower cases, but I never exceed 5,500
> which appears to be some kind of wall I can't break through.
>
> I
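The batching effect quoted above ("one giant commit" beating "lots of littler ones") follows from simple arithmetic: if each operation costs c and each commit adds a fixed overhead k, committing per operation pays k every time while one big commit pays it once. The per-operation and per-commit costs below are made-up numbers for illustration, not measurements from this thread:

```python
# Illustrative arithmetic only (assumed costs): why one big commit beats
# many small ones when each commit carries a fixed overhead.
def ops_per_second(n_ops, per_op_s, per_commit_s, ops_per_commit):
    commits = n_ops / float(ops_per_commit)
    total = n_ops * per_op_s + commits * per_commit_s
    return n_ops / total

per_op = 50e-6       # 50 us of real work per insert (assumed)
per_commit = 150e-6  # 150 us fixed cost per commit (assumed)

small = ops_per_second(10000, per_op, per_commit, ops_per_commit=1)
big = ops_per_second(10000, per_op, per_commit, ops_per_commit=10000)
print(int(small), int(big))   # per-insert commits vs one giant commit
```

With these assumed costs the per-insert-commit rate is capped near 1/(c+k) regardless of how fast the storage is, which is consistent with hitting a hard throughput "wall" that batching, but not faster disks, can move.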
FYI I've tweaked this program a bit :
import psycopg2
from time import time
conn = psycopg2.connect(database='peufeu')
cursor = conn.cursor()
cursor.execute("CREATE TEMPORARY TABLE test (data int not null)")
conn.commit()
cursor.execute("PREPARE ins AS INSERT INTO test VALUES ($1)")
# continuation reconstructed as a sketch: time N prepared inserts
# (the loop count and output format are assumptions)
N = 10000
t = time()
for i in range(N):
    cursor.execute("EXECUTE ins(%s)", (i,))
conn.commit()
print("%d inserts/s" % (N / (time() - t)))
jgard...@jonathangardner.net wrote:
NOTE: If I do one giant commit instead of lots of littler ones, I get
much better speeds for the slower cases, but I never exceed 5,500
which appears to be some kind of wall I can't break through.
That's usually about where I run into the upper limit on how
Have you tried connecting using a UNIX socket instead of a TCP socket on
localhost ? On such very short queries, the TCP overhead is significant.
Actually UNIX sockets are the default for psycopg2, had forgotten that.
I get 7400 using UNIX sockets and 3000 using TCP (host="localhost")
Have you tried connecting using a UNIX socket instead of a TCP socket on
localhost ? On such very short queries, the TCP overhead is significant.
On 16/06/10 18:30, jgard...@jonathangardner.net wrote:
On Jun 15, 4:18 pm, j...@agliodbs.com (Josh Berkus) wrote:
On 6/15/10 10:37 AM, Chris Browne wrote:
I'd like to see some figures about WAL on RAMfs vs. simply turning off
fsync and full_page_writes. Per Gavin's tests, PostgreSQL is already
close to TokyoCabinet/MongoDB performance
On Jun 15, 4:18 pm, j...@agliodbs.com (Josh Berkus) wrote:
> On 6/15/10 10:37 AM, Chris Browne wrote:
>
> I'd like to see some figures about WAL on RAMfs vs. simply turning off
> fsync and full_page_writes. Per Gavin's tests, PostgreSQL is already
> close to TokyoCabinet/MongoDB performance just w
On 6/15/10 10:37 AM, Chris Browne wrote:
> swamp...@noao.edu (Steve Wampler) writes:
>> Or does losing WAL files mandate a new initdb?
>
> Losing WAL would mandate initdb, so I'd think this all fits into the
> set of stuff worth putting onto ramfs/tmpfs. Certainly it'll all be
> significant to the performance focus.
On Tue, Jun 15, 2010 at 12:37 PM, Chris Browne wrote:
> swamp...@noao.edu (Steve Wampler) writes:
>> Or does losing WAL files mandate a new initdb?
>
> Losing WAL would mandate initdb, so I'd think this all fits into the
> set of stuff worth putting onto ramfs/tmpfs. Certainly it'll all be
> significant to the performance focus.
swamp...@noao.edu (Steve Wampler) writes:
> Or does losing WAL files mandate a new initdb?
Losing WAL would mandate initdb, so I'd think this all fits into the
set of stuff worth putting onto ramfs/tmpfs. Certainly it'll all be
significant to the performance focus.
--
select 'cbbrowne' || '@' ||
On Jun 15, 8:47 am, Chris Browne wrote:
> "jgard...@jonathangardner.net" writes:
> > My question is how can I configure the database to run as quickly as
> > possible if I don't care about data consistency or durability? That
> > is, the data is updated so often and it can be reproduced fairly
>
[oops, didn't hit "reply to list" first time, resending...]
On 6/15/10 9:02 AM, Steve Wampler wrote:
Chris Browne wrote:
"jgard...@jonathangardner.net" writes:
My question is how can I configure the database to run as quickly as
possible if I don't care about data consistency or durability? That
Chris Browne wrote:
"jgard...@jonathangardner.net" writes:
My question is how can I configure the database to run as quickly as
possible if I don't care about data consistency or durability? That
is, the data is updated so often and it can be reproduced fairly
rapidly so that if there is a server crash
"jgard...@jonathangardner.net" writes:
> My question is how can I configure the database to run as quickly as
> possible if I don't care about data consistency or durability? That
> is, the data is updated so often and it can be reproduced fairly
> rapidly so that if there is a server crash or ran