On Jun 14, 7:14 pm, "jgard...@jonathangardner.net" wrote:
> We have a fairly unique need for a local, in-memory cache. This will
> store data aggregated from other sources. Generating the data only
> takes a few minutes, and it is updated often. There will be some
> fairly expensive queries of arbitrary complexity run at a fairly high rate.
Balkrishna Sharma wrote:
> I wish to do performance testing of 1000 simultaneous read/write
> to the database.
You should definitely be using a connection pool of some sort. Both
your throughput and response time will be better that way. You'll
want to test with different pool sizes, but I'v
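A minimal sketch of what pooled access could look like from Python, using psycopg2's built-in pool (the database name, pool sizes, and helper function are illustrative, not anything specified in this thread):
import psycopg2.pool

# Illustrative pool: 5 idle connections kept warm, at most 50 open at once.
pool = psycopg2.pool.ThreadedConnectionPool(5, 50, database="testdb")

def run_query(sql, params=None):
    conn = pool.getconn()              # borrow a connection from the pool
    try:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            rows = cur.fetchall()
        conn.commit()                  # close the transaction before returning it
        return rows
    finally:
        pool.putconn(conn)             # always hand the connection back

print(run_query("SELECT 1"))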
Thanks. I will try with a more sensible value of wal_buffers. I was
hoping to keep more in memory and therefore reduce the frequency of disk
I/Os.
Any suggestions for good monitoring software for Linux?
On 15/06/2010 00:08, Robert Haas wrote:
On Mon, Jun 14, 2010 at 2:53 PM, Tom Wilcox wrote:
We have a fairly unique need for a local, in-memory cache. This will
store data aggregated from other sources. Generating the data only
takes a few minutes, and it is updated often. There will be some
fairly expensive queries of arbitrary complexity run at a fairly high
rate. We're looking for high
Excerpts from Tom Lane's message of Mon Jun 14 23:57:11 -0400 2010:
> Scott Carey writes:
> > Great points. There is one other option that is decent for the WAL:
> > If splitting out a volume is not acceptable for the OS and WAL --
> > absolutely split those two out into their own partitions. I
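For what it's worth, the usual way to move WAL onto its own partition after the fact is to relocate the directory and leave a symlink behind; a rough sketch in Python, purely as an illustration (the paths are made up, the server must be stopped first, and the directory is pg_xlog in releases before 10):
import os
import shutil

pgdata = "/var/lib/pgsql/data"          # example data directory
wal_target = "/mnt/wal_disk/pg_xlog"    # directory on the dedicated partition

old_wal = os.path.join(pgdata, "pg_xlog")
shutil.move(old_wal, wal_target)        # move the existing WAL segments
os.symlink(wal_target, old_wal)         # point the old location at the new one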
Hello, I will have a web application with Postgres 8.4+ as the backend. At any
given time there will be a maximum of 1000 parallel web users interacting with
the database (read/write). I wish to do performance testing of 1000
simultaneous reads/writes to the database.
I can do a simple unix script on the p
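One rough way to approximate that many concurrent sessions from Python is a thread per simulated client; the table name, query mix, and counts below are placeholders, and a purpose-built tool such as pgbench (mentioned later in the thread) is usually a better fit for real numbers:
import threading
import psycopg2

N_CLIENTS = 100       # scale toward 1000 once the basics work
ITERATIONS = 100

def worker(client_id):
    # Assumes a table: CREATE TABLE load_test (client int, n int)
    conn = psycopg2.connect(database="testdb")
    cur = conn.cursor()
    for i in range(ITERATIONS):
        cur.execute("INSERT INTO load_test (client, n) VALUES (%s, %s)",
                    (client_id, i))
        conn.commit()
        cur.execute("SELECT count(*) FROM load_test WHERE client = %s",
                    (client_id,))
        cur.fetchone()
    conn.close()

threads = [threading.Thread(target=worker, args=(c,)) for c in range(N_CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()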
David Jarvis writes:
>> Fair enough. How about something like make_timestamp? It's at least
>> shorter and easier than construct :-)
> Agreed.
No objection here either.
regards, tom lane
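For a sense of how a constructor with that name ends up being used, a small sketch (assuming the make_timestamp(year, month, day, hour, minute, seconds) signature present in current PostgreSQL releases; the database name is a placeholder):
import psycopg2

conn = psycopg2.connect(database="testdb")
cur = conn.cursor()
cur.execute("SELECT make_timestamp(2010, 6, 16, 12, 0, 0.0)")
print(cur.fetchone()[0])   # 2010-06-16 12:00:00
conn.close()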
I'm not surprised that Python add is so slow, but I am surprised that
I didn't remember it was... ;-)
It's not the add(), it's the time.time()...
On 6/16/10 12:00 PM, Josh Berkus wrote:
>> * fsync=off => 5,100
>> * fsync=off and synchronous_commit=off => 5,500
>
> Now, this *is* interesting ... why should synch_commit make a difference
> if fsync is off?
>
> Anyone have any ideas?
I found that pgbench has "noise" of about 20% (I posted about this
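One way to isolate the synchronous_commit effect is to flip it per session, which, unlike fsync, does not require a server-wide change; a sketch (database name and row count are placeholders):
import psycopg2
from time import time

conn = psycopg2.connect(database="testdb")
cur = conn.cursor()
cur.execute("SET synchronous_commit TO off")   # affects this session only
cur.execute("CREATE TEMPORARY TABLE t (n int not null)")
conn.commit()

start = time()
for i in range(10000):
    cur.execute("INSERT INTO t VALUES (%s)", (i,))
    conn.commit()                              # commit no longer waits for the WAL flush
print("%.0f commits/s" % (10000 / (time() - start)))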
On Wed, Jun 16, 2010 at 12:51 AM, Pierre C wrote:
>
> Have you tried connecting using a UNIX socket instead of a TCP socket on
> localhost? On such very short queries, the TCP overhead is significant.
>
Unfortunately, this isn't an option for my use case. Carbonado only
supports TCP connections.
On Wed, Jun 16, 2010 at 7:46 AM, Juan Pablo Sandoval Rivera wrote:
> Good morning, list.
>
> In relation to the process of tuning the PostgreSQL database engine,
> especially version 7.3.7, which is currently in use, I would appreciate it
> if you could clarify a procedure.
I concur with the other poster on keeping s
http://www.postgresql.org/docs/current/static/wal-async-commit.html
"the server waits for the transaction's WAL records to be flushed to permanent
storage before returning a success indication to the client."
I think with fsync=off, whether WAL gets written to disk or not is still
controlled by
Juan Pablo Sandoval Rivera wrote:
> In relation to the process of tuning the PostgreSQL database engine,
> especially version 7.3.7, which is currently in use,
Do you really mean PostgreSQL version 7.3.7 (not 8.3.7)? If so, you
should really consider upgrading. Performance is going to be m
On Wed, Jun 16, 2010 at 12:00 PM, Josh Berkus wrote:
>
>> * fsync=off => 5,100
>> * fsync=off and synchronous_commit=off => 5,500
>
> Now, this *is* interesting ... why should synch_commit make a difference
> if fsync is off?
>
> Anyone have any ideas?
>
I may have stumbled upon this by my ignorance
On Wed, Jun 16, 2010 at 4:22 AM, Pierre C wrote:
>
> import psycopg2
> from time import time
> conn = psycopg2.connect(database='peufeu')
> cursor = conn.cursor()
> cursor.execute("CREATE TEMPORARY TABLE test (data int not null)")
> conn.commit()
> cursor.execute("PREPARE ins AS INSERT INTO test V
On Wed, Jun 16, 2010 at 1:27 AM, Greg Smith wrote:
>
> I normally just write little performance test cases in the pgbench scripting
> language, then I get multiple clients and (in 9.0) multiple driver threads
> all for free.
>
See, this is why I love these mailing lists. I totally forgot about
pgbench.
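A sketch of what that looks like in practice, driven from Python only for convenience; the script body, database name, client counts, and the assumption that a plain table test(data int) exists are all illustrative:
import subprocess

# Trivial custom script: one INSERT per transaction.
with open("insert_test.sql", "w") as f:
    f.write("INSERT INTO test (data) VALUES (1);\n")

# -n skips vacuuming pgbench's own tables, -f runs the custom script,
# -c/-j/-T set client count, worker threads (9.0+), and duration in seconds.
subprocess.run(
    ["pgbench", "-n", "-f", "insert_test.sql",
     "-c", "16", "-j", "4", "-T", "60", "testdb"],
    check=True,
)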
Good morning, list.
In relation to the process of tuning the PostgreSQL database engine, especially
version 7.3.7, which is currently in use, I would appreciate it if you could
clarify a procedure.
If I have a server with 2 GB of RAM, it is said that the shared memory segment
for the database engine on a d
> * fsync=off => 5,100
> * fsync=off and synchronous_commit=off => 5,500
Now, this *is* interesting ... why should synch_commit make a difference
if fsync is off?
Anyone have any ideas?
> tmpfs, WAL on same tmpfs:
> * Default config: 5,200
> * full_page_writes=off => 5,200
> * fsync=off => 5,25
Excerpts from jgard...@jonathangardner.net's message of Wed Jun 16 02:30:30
-0400 2010:
> NOTE: If I do one giant commit instead of lots of littler ones, I get
> much better speeds for the slower cases, but I never exceed 5,500
> which appears to be some kind of wall I can't break through.
>
> I
FYI I've tweaked this program a bit:
import psycopg2
from time import time

conn = psycopg2.connect(database='peufeu')
cursor = conn.cursor()
cursor.execute("CREATE TEMPORARY TABLE test (data int not null)")
conn.commit()
cursor.execute("PREPARE ins AS INSERT INTO test VALUES ($1)")
# The rest of the message was truncated; presumably it timed the prepared
# insert, roughly along these lines:
start = time()
for i in range(10000):
    cursor.execute("EXECUTE ins(%s)", (i,))
conn.commit()
print("%.0f inserts/s" % (10000 / (time() - start)))
> Fair enough. How about something like make_timestamp? It's at least
> shorter and easier than construct :-)
>
Agreed.
Dave
jgard...@jonathangardner.net wrote:
> NOTE: If I do one giant commit instead of lots of littler ones, I get
> much better speeds for the slower cases, but I never exceed 5,500
> which appears to be some kind of wall I can't break through.
That's usually about where I run into the upper limit on ho
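A small sketch of the difference being described, comparing commit-per-row with one large commit (database name and row count are placeholders); each per-row commit pays a separate WAL flush, which is typically where that wall comes from:
import psycopg2
from time import time

conn = psycopg2.connect(database="testdb")
cur = conn.cursor()
cur.execute("CREATE TEMPORARY TABLE t (n int not null)")
conn.commit()

def timed(label, commit_every):
    start = time()
    for i in range(10000):
        cur.execute("INSERT INTO t VALUES (%s)", (i,))
        if (i + 1) % commit_every == 0:
            conn.commit()
    conn.commit()                       # flush whatever is left
    print("%s: %.0f rows/s" % (label, 10000 / (time() - start)))

timed("commit per row", 1)
timed("one big commit", 10000)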
> Have you tried connecting using a UNIX socket instead of a TCP socket on
> localhost? On such very short queries, the TCP overhead is significant.
Actually UNIX sockets are the default for psycopg2, had forgotten that.
I get 7400 using UNIX sockets and 3000 using TCP (host="localhost").
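A quick way to see that difference from Python: omitting host makes psycopg2 use the local UNIX socket, while host="localhost" forces TCP (database name and query count are placeholders):
import psycopg2
from time import time

def bench(**connect_kwargs):
    conn = psycopg2.connect(database="testdb", **connect_kwargs)
    cur = conn.cursor()
    start = time()
    for _ in range(10000):
        cur.execute("SELECT 1")
        cur.fetchone()
    conn.close()
    return 10000 / (time() - start)

print("unix socket:   %.0f queries/s" % bench())
print("tcp localhost: %.0f queries/s" % bench(host="localhost"))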
Have you tried connecting using a UNIX socket instead of a TCP socket on
localhost? On such very short queries, the TCP overhead is significant.
On 16/06/10 18:30, jgard...@jonathangardner.net wrote:
On Jun 15, 4:18 pm, j...@agliodbs.com (Josh Berkus) wrote:
On 6/15/10 10:37 AM, Chris Browne wrote:
I'd like to see some figures about WAL on RAMfs vs. simply turning off
fsync and full_page_writes. Per Gavin's tests, PostgreSQL is already
close to TokyoCabinet/MongoDB performance
On Jun 15, 4:18 pm, j...@agliodbs.com (Josh Berkus) wrote:
> On 6/15/10 10:37 AM, Chris Browne wrote:
>
> I'd like to see some figures about WAL on RAMfs vs. simply turning off
> fsync and full_page_writes. Per Gavin's tests, PostgreSQL is already
> close to TokyoCabinet/MongoDB performance just w