te than we expect ;)
... and they know just when they can happen despite all the odds to
maximise the pain and chaos caused.
--
Craig Ringer
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
probably in the order of a few hundred bytes.
Compressing PDF documents is generally a waste of time.
--
Craig Ringer
when
you otherwise want the PDFs to remain compressed.
Anyway, if you're going for extreme compression, these days 7zip is
often a better option than bzip2.
--
Craig Ringer
les that may need VACUUM FULL + REINDEX to
properly clean up.
It's probably better to fix your fsm/autovac settings and then CLUSTER
the table so it doesn't happen again, though.
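As a sketch of that fix (table and index names here are hypothetical, and the per-table autovacuum syntax assumes 8.4 or later):

```sql
-- Vacuum this table more aggressively than the global defaults.
ALTER TABLE bloated_table SET (
    autovacuum_vacuum_scale_factor = 0.05,
    autovacuum_analyze_scale_factor = 0.05
);

-- Rewrite the table in index order, discarding dead space.
-- Note: CLUSTER takes an exclusive lock for the duration.
CLUSTER bloated_table USING bloated_table_pkey;
ANALYZE bloated_table;
```

Run the CLUSTER during a quiet period, since nothing else can touch the table while it rewrites.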
--
Craig Ringer
Anything in `dmesg' (command) or /var/log/syslog ?
--
Craig Ringer
Brian Cox wrote:
> On 04/05/2010 09:53 PM, Craig Ringer [cr...@postnewspapers.com.au] wrote:
>> Anything in `dmesg' (command) or /var/log/syslog ?
> nothing out of the ordinary. Brian
I'm wondering if the issue is with strace rather than Pg. That is to
say, that strace is tr
On 7/04/2010 12:24 AM, Brian Cox wrote:
On 04/06/2010 01:18 AM, Craig Ringer [cr...@postnewspapers.com.au] wrote:
I'm wondering if the issue is with strace rather than Pg. That is to
say, that strace is trying to print:
Thanks, Craig: I do think that this is a strace issue.
As for what
more generic plan.
Again only IIRC there's a configurable threshold for prepared statement
switch-over. I thought all this was in the PgJDBC documentation and/or
javadoc - if it's not, it needs to be.
--
Craig Ringer
, so it can be accessed vastly quicker.
--
Craig Ringer
Tech-related writing: http://soapyfrogs.blogspot.com/
eter. The short answer is "work
around it by using EXECUTE ... USING to invoke your query dynamically".
( Oddly, this FAQ doesn't seem to be on the FAQ list at
http://wiki.postgresql.org/wiki/FAQ )
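For reference, a minimal sketch of that workaround (function and table names are made up; EXECUTE ... USING needs 8.4 or later):

```sql
-- Dynamic SQL is planned per execution, so the planner sees the
-- actual parameter value instead of reusing one generic plan.
CREATE OR REPLACE FUNCTION recent_rows(p_cutoff timestamptz)
RETURNS SETOF some_table AS $$
BEGIN
    RETURN QUERY EXECUTE
        'SELECT * FROM some_table WHERE created_at > $1'
        USING p_cutoff;
END;
$$ LANGUAGE plpgsql;
```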
--
Craig Ringer
On 27/05/2010 11:33 PM, Craig Ringer wrote:
On 21/05/2010 9:54 PM, Tyler Hildebrandt wrote:
We're using a function that when run as a select statement outside of the
function takes roughly 1.5s to complete whereas running an identical
query within a function is taking around 55s to com
an be vastly
quicker.
--
Craig Ringer
ip_proto,SUM("bytes") FROM "acct_2010_25" WHERE
> "stamp_inserted">='2010-06-20 10:10' AND "stamp_inserted"<'2010-06-21
> 10:10' GROUP BY ip_src, port_src, ip_dst, port_dst, tcp_flags, ip_proto
> LIMIT 50 OFFSET 0;
Same deal
p is very much the way to go.
In case it is important, there are 2000 clients involved, so
that would be 2000 databases if I followed my current FoxPro related
structure.
Nonono! Definitely use different schemas if you need to separate things
this way.
--
Craig Ringer
hes out. On Windows, most of those garbage tools that
claim to "free" memory do this - it's about the only time you'd ever
want to use one, since they do such horrid things to performance.
--
Craig Ringer
gresql data files. It also forces out things like caches of running
binaries. Things will grind to an absolute crawl for a minute or two
before resuming normal speed, because *everything* has to come back from
disk at once. The same is true of using /proc/sys/vm/drop_caches to drop
all caches.
I guess
On 02/07/10 08:46, Eliot Gable wrote:
> So, the bottom line is, I need a faster way to do this sorting.
You haven't shown us how you're doing it at the moment, so it's awfully
hard to comment usefully on possible approaches.
--
Craig Ringer
Tech-rel
ed, but it has a much faster
approach to interpretation than PL/PgSQL.
Really, the right choice depends on exactly what the OP is doing and
how, which they're not saying.
Where's the code?
--
Craig Ringer
entries in a table - using count(...) - is
actually a rather expensive operation, and a poor choice if you just
want to see if the server is responsive.
SELECT id FROM tablename LIMIT 1;
where "id" is the primary key of the table would be a better option.
--
Craig Ringer
equently or don't have to be up to the minute accurate, so they're
candidates for caching. Memcached is an incredibly handy tool for taking
load off your database.
--
Craig Ringer
and lots of connections you should use a connection pool to save the
main database the overhead of managing that.
--
Craig Ringer
On 09/07/10 12:42, Tom Lane wrote:
> Samuel Gendler writes:
>> On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer
>> wrote:
>>> If you're not using a connection pool, start using one.
>
>> I see this issue and subsequent advice cross this list awfully
>> f
tence.
Sorry for the reply-to-self, I just realized my post could've been taken
as a whine about Pg's architecture and some kind of demand that someone
do something about it. That couldn't be further from my intent.
--
Craig Ringer
y of users.
That said, I don't think it follows that (a) cannot be solved in-core.
How much architectural change would be required to do it efficiently
enough, though...
--
Craig Ringer
on top. Josh Berkus has made some good points on why this
isn't as easy as it looks, though:
http://it.toolbox.com/blogs/database-soup/admission-control-and-its-discontents-39895
--
Craig Ringer
et you enable the write cache on a drive... hopefully, not many.
AFAIK, disk drive caches can be safe to leave in write-back mode (ie
write cache enabled) *IF* the OS uses write barriers (properly) and the
drive understands them.
Big if.
--
Craig Ringer
afely on the root file system. The OS would have to
come up, init software raid, and find the caches before it'd be safe to
read or write volumes with s/w raid write caching enabled. It's not the
sort of thing that'd be practical to implement in GRUB's raid support.
--
Craig Ringer
es.
Also, consider attaching the EXPLAIN ANALYZE output as a text
attachment, as your mail client is "helpfully" rewrapping it into
unintelligible gibberish.
--
Craig Ringer
dly)
alternative to url twiddling when searches reveal docs for an old
version, and might help push the /current/ pages up in search rank too.
--
Craig Ringer
the
credentials of the new role. That'd be *awesome* for application server
connection pools. )
--
Craig Ringer
t, so the client
couldn't change pooled_client_ip itself by accident or through malice.
But even without that, it'd be awfully handy.
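Something along those lines can already be faked with a custom GUC (the `pooler.client_ip` name here is hypothetical, and older versions need it declared in custom_variable_classes):

```sql
-- The pooler records the real client address on the pooled connection:
SELECT set_config('pooler.client_ip', '10.1.2.3', false);

-- Server-side code, e.g. an audit trigger, can then read it back:
SELECT current_setting('pooler.client_ip');
```

Without server enforcement the client can overwrite it, which is exactly the weakness described above.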
--
Craig Ringer
ee what it's up to while this query runs. I'd also be
using wireshark to look at network activity to see if there were any
clues there. I'd be using "top", "vmstat" and "iostat" to examine
system-level load if it was practical to leave the syst
On 26/07/10 17:25, Vitalii Tymchyshyn wrote:
> 26.07.10 12:15, Craig Ringer написав(ла):
>> On 26/07/10 16:35, Piotr Gasidło wrote:
>>
>>> Hello,
>>>
>>> I've found strange problem in my database (8.4.4, but also 9.0beta3,
>>>
anyway), reset GUCs, etc, but how much else is there to do?
That way you can maintain per-database pools of idle workers (apache
prefork style) with ageing-out of backends that're unused. Wouldn't this
do the vast majority of what most pools are needed for anyway? And
wouldn'
've expected that
proper VACUUMing would address any resulting index bloat, but
Any idea of where the 20M record estimate is coming from? Isn't the
size of the partial index taken into account in the estimate?
It'd really help to have EXPLAIN ANALYZE output here.
--
Craig Ringer
Tec
occurs.
See:
http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Windows
By the way: You don't have the old 8.3 data directory on the system PATH
do you? You can check the value of the system path in the System control
panel, though exactly where varies from Wind
d
to set reasonable work_mem.
"Reasonable" is hard to work out for work_mem, because Pg's work_mem
limit is per-sort (etc) not per-query or per-backend. I understand that
making it per-query is way, way harder than it sounds at face value,
though, so we must make do.
--
Craig Ringer
uch every modern OS does it. Pg is
reliant on the operating system's disk cache, and has some minimal
knowledge of it (see effective_cache_size) .
I don't know how shared_buffers management works, but certainly at the
OS cache level that's what already happens.
--
Craig Ringer
orage-toast.html
http://www.postgresql.org/docs/current/static/sql-altertable.html
While (I think) PLAIN storage could be used, the inability to span rows
over blocks means you wouldn't get over 8k anyway.
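For completeness, the per-column storage strategy is set like this (hypothetical names; EXTERNAL keeps the value out-of-line but uncompressed, while the default EXTENDED also compresses):

```sql
-- Skip TOAST compression for this column; substring access on large
-- uncompressed values is faster because no decompression is needed.
ALTER TABLE documents ALTER COLUMN body SET STORAGE EXTERNAL;
```

In psql, `\d+ documents` shows the current storage setting per column.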
--
Craig Ringer
ts contents into the RDBMS for analysis and
reporting. This doesn't have to be every hour - every minute is usually
pretty reasonable, and still gives your database a much easier time
without forcing you to modify your app to batch inserts into
transactions or anything like that.
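A sketch of such a periodic batch load, using a staging table and COPY (all names and the file path are hypothetical; COPY FROM a server-side file needs superuser rights):

```sql
BEGIN;
-- Bulk-load the last minute's worth of log lines in one go.
COPY log_staging (ts, level, message)
    FROM '/var/spool/app/current_batch.csv' CSV;
-- Move them into the real table within the same transaction.
INSERT INTO app_log SELECT * FROM log_staging;
TRUNCATE log_staging;
COMMIT;
```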
--
Cra
turn the amount
of rows before the LIMIT and OFFSET is applied as well?
It'd force the server to fully execute the query. Then again, it sounds
like you're doing that anyway.
--
Craig Ringer
Tech-related writing at http://soapyfrogs.blogspot.com/
o suspect
that "not that hard to implement" is probably a little ...
over-optimistic. Sure, it's not that hard to implement in a new program
with no wired-in architectural and design choices; that doesn't mean
it's easy to retrofit onto existing code, especially a bunch o
On 10/10/2010 2:55 PM, Mladen Gogala wrote:
On 10/10/2010 2:43 AM, Craig Ringer wrote:
Some of the other flavours of non-SQL databases, both those that've been
around forever (PICK/UniVerse/etc, Berkeley DB, Cache, etc) and those
that're new and fashionable (Cassandra, CouchDB, et
ig engineering challenge. I expect that
in current Pg versions a trivial PL/PgSQL function could be used to
slurp and discard unwanted results, but a better in-server option to
count the results from a cursor query would certainly be nice.
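The trivial PL/pgSQL version of that slurp-and-count might look like this (a sketch only; the function name is made up, and interpolating query text this way is only safe for trusted input):

```sql
CREATE OR REPLACE FUNCTION count_query_rows(q text) RETURNS bigint AS $$
DECLARE
    n bigint;
BEGIN
    -- Run the caller's query as a subquery and count its rows,
    -- discarding the actual result set.
    EXECUTE 'SELECT count(*) FROM (' || q || ') AS sub' INTO n;
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```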
--
Craig Ringer
Tech-related writing at http://so
on from someone who's done the testing.
If you still claim that it wouldn't make the difference,
the onus to prove it is on you.
I didn't mean to claim that it would make no difference. If I sounded
like it, sorry.
I just want to know how _much_, or more accurately how gre
m on a crappy laptop disk...
I'm guessing this is the origin of the OP's focus on I/O chunk sizes.
Anyway, for the single-seqscan case, I see little evidence here that
using a bigger read chunk size would help PostgreSQL reduce overheads or
improve performance.
OP: Is your Oracl
're not using. I can't help but
wonder if a bundled "quick and dirty benchmark" tool for Pg would be
beneficial in helping to determine appropriate values for these settings
and for effective io concurrency.
--
Craig Ringer
Tech-related writing: http://soapyfrogs.blogspot.com/
ith every insert/delete/update.
Perhaps for some users that'd be worth having, but it seems to me like
it'd have pretty narrow utility. I'm not sure that's the answer.
--
Craig Ringer
e in the visibility map, or in indexes.
That's why PostgreSQL has to hit the heap to find it.
* current transaction should really be "current snapshot". The snapshot
is taken at the start of the whole transaction for SERIALIZABLE
isolation, and at the start of each statement fo
my hand-waving can provide.
--
Craig Ringer
Tech-related writing at http://soapyfrogs.blogspot.com/
things down rather than speed them up, depending
on the file system in use and all sorts of other factors.
If Pg was to use posix_fallocate, it'd probably need control over it on
a per-tablespace basis.
--
Craig Ringer
Tech-related writing at http://soapyfrogs.blogspot.com/
uch discussion of the issue here - I
get the impression Pg doesn't see heavy use in sharded environments.
--
Craig Ringer
Tech-related writing at http://soapyfrogs.blogspot.com/
ts of triggers on the table? Or foreign key relationships
that're DEFERRABLE ?
--
Craig Ringer
iate parameters ("running with scissors").
I'd like to add something else to the discussion: have you looked at
memcached yet? Or pgpool? If you haven't, start there.
--
Craig Ringer
Tech-related writing: http://soapyfrogs.blogspot.com/
On 11/05/2010 07:32 PM, A B wrote:
> The server will just boot, load data, run, hopefully not crash but if
> it would, just start over with load and run.
Have you looked at VoltDB? It's designed for fast in-memory use.
--
Craig Ringer
hen, the WAL is useful for all sorts of replication options, and the
use of linear WALs means that write ordering in the tables doesn't need
to be as strict, which has performance advantages.
--
Craig Ringer
On 11/14/2010 02:38 AM, Mladen Gogala wrote:
Craig Ringer wrote:
It sounds like you're describing Oracle-style MVCC, using redo logs.
http://blogs.sybase.com/database/2009/04/mvcc-dispelling-some-oracle-fudunderstanding-the-cost/
Craig, this is an interesting blog page, making some
o be effective. OTOH, I'm not sure what the answer is.
--
Craig Ringer
en populated with rows? The former is
MUCH faster. Are they full of NUMERIC fields? Those seem to be
incredibly slow compared to int/float/etc, which is hardly surprising
given their storage and how they work.
--
Craig Ringer
culprit is I/O contention from other guests on the same
host, possibly combined with I/O queuing policies on the host that
favour throughput over request latency.
Checkpoints might also be a factor.
--
Craig Ringer
econd try to explain what they were *trying* to say? Was it just a joke
- 'cos if so, it was kinda flat.
--
Craig Ringer
18.|-- ns1.gunduz.org  20.0%  10  483.5  482.3  476.7  485.0  2.7
--
Craig Ringer
On 01/15/2011 12:54 AM, Mladen Gogala wrote:
Craig Ringer wrote:
On 01/12/2011 10:16 PM, Guillaume Cottenceau wrote:
What's your point and in what is it related to that ML?
Given the package names, I suspect this is a poorly-expressed
complaint about the performance of downloads fro
On 18/01/11 18:56, Lars wrote:
> Hi,
>
> We are in the process of moving a web based application from a MySql to
> Postgresql database.
> Our main reason for moving to Postgresql is problems with MySql (MyISAM)
> table locking.
> We will buy a new set of servers to run the Postgresql databases.
databases.
"sharding" or "shards" is pretty much the standard way that setup is
described. It doesn't come up on the Pg list a lot as most people doing
web-oriented horizontally scaled apps use MySQL or fashionable non-SQL
databases, but it's pretty well known in w
On 03/02/11 07:15, Dan Birken wrote:
> However, they can't guarantee that any particular RAID controller would
> be in stock when they are building the machine, so basically I would
> like to know if any of these cards are sufficiently better or worse than
> the others that I should either a) wait
On 07/02/11 09:52, felix wrote:
> So is it normal for postgres to report that it failed to shut down,
> operate for an hour and then go ahead and restart itself ?
That's pretty wacky. Did you shut it down via pg_ctl or using an init
script / "service" command in your OS?
It shouldn't matter, bu
On 02/07/2011 06:30 PM, Marti Raudsepp wrote:
On Mon, Feb 7, 2011 at 05:03, Craig Ringer wrote:
What would possibly help would be if Pg could fall back to lower
shared_buffers automatically, screaming about it in the logs but still
launching. OTOH, many people don't check the logs, so t
also have to send notices to the client.
Problem is, many clients don't process notices/warnings, so particularly
slack admins won't see that either.
I'm not particularly excited about the idea.
--
Craig Ringer
"booking_size_type_fkey" FOREIGN KEY (size_type) REFERENCES
> booking_size_type(id)
> Triggers:
> booking_after_insert_update AFTER INSERT ON booking FOR EACH ROW EXECUTE
> PROCEDURE booking_after_trigger()
> booking_audit AFTER UPDATE ON booking FOR EACH ROW EXECUTE PROCEDURE
> booking_audit_trigger()
> booking_before_insert BEFORE INSERT ON booking FOR EACH ROW EXECUTE
> PROCEDURE booking_before_trigger()
--
Craig Ringer
ll, maybe the code will be useful to somebody anyway.
--
Craig Ringer
--
-- This file defines functions to (hopefully) reasonably safely enable and
-- disable enforcement of a foreign key constraint, written by Craig Ringer.
-- They're free for any use you care to make of them.
--
-- T
though I mostly work in C++ so the result
probably won't be too pretty initially.
--
Craig Ringer
that that was indeed the issue, and coding around
the begin block dramatically cuts the runtimes of commands executed
after the big import function.
Thanks again!
--
Craig Ringer
ng the use of an exception block per record generated
worked around the performance issues, so it's clearly something to do
with the vast numbers of subtransactions - as Heikki Linnakangas
suggested and tested.
--
Craig Ringer
Tom Lane wrote:
Craig Ringer <[EMAIL PROTECTED]> writes:
It seems to happen with every statement run in the same transaction as,
and after, the procedure with all the subtransactions. As soon as a
COMMIT is executed, operations return to normal speed.
Ah. I misread your p
must the order of the checks.
- You can use multi-column indexes for single-column filters under some
circumstances, but it's slow.
> Shall i have more then one partial index for same field,
Maybe, it depends on your query and what the distribution of the data in
the field is like.
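As an illustration of the partial-index idea (hypothetical table; useful when queries only ever filter on a small, skewed subset of values):

```sql
-- Index only the rows the hot query cares about; the index stays
-- small and cheap to maintain for everything else.
CREATE INDEX orders_pending_idx ON orders (created_at)
    WHERE status = 'pending';

-- A query can use it only if the planner can prove its WHERE
-- clause implies the index predicate:
SELECT * FROM orders WHERE status = 'pending' ORDER BY created_at;
```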
bugs, and there have
been a fair few fixes and improvements since prior versions.
--
Craig Ringer
> machines where it is having DEBIAN ETCH STABLE, i don't want to face
> any problems... so only...
> am i right ???
Personally I'd use 8.3 from backports.org, but it's of course up to you.
--
Craig Ringer
ase, I was suggesting that *I* might've been asking a silly
question by asking you why you wanted 8.1. You've covered that (you want
to use the version packaged in your distro) so that's easy enough.
--
Craig Ringer
release notes. See:
http://www.postgresql.org/docs/current/static/release.html
particularly:
http://www.postgresql.org/docs/current/static/release-8-0.html
http://www.postgresql.org/docs/current/static/release-8-1.html
--
Craig Ringer
able and/or ugly.
Is there any way to get postgresql to detect such repeated query parts
and evaluate them only once?
--
Craig Ringer
ssion available to all parts of the expression. If the SQL:2003
WITH expression is anything like that it'd be very handy indeed.
--
Craig Ringer
uestion. Are you asking which audio
compression codec and audio container file type (like "mp3", "aac", etc)
you should use? If so, this is really not the right place to ask that.
Do you mean which file /system/ ?
--
Craig Ringer
ith the xfs file system.
Once the file system is formatted you can mount it manually with the
mount command, eg:
mkdir /mnt/tmp
mount -t xfs /dev/sda1 /mnt/tmp
... or have it mounted on boot using an fstab entry like:
/dev/sda1 /path/to/desired/mountpoint xfs defaults 0 0
--
Craig Ringer
lly likely to encounter a
double disk failure (such as during the load imposed by rebuild onto a
spare) ?
I guess if you have good backups - as you must - it's not that big a
deal, but I'd be pretty nervous with anything less than RAID 6 or RAID 10.
--
Craig Ringer
which do a surface scan). Just using SMART - say,
the basic health check - really isn't enough.
--
Craig Ringer
controller adding a second disk's
worth of duplicate parity information when striping a four or more disk
RAID 5 array, but I thought that's basically what RAID 6 was.
--
Craig Ringer
nnection. Backups are only possible
once a day when all users are logged off. It's not an application where
losing half a day of data is fun. On top of all that it runs on SCO
OpenServer 5.0.5 (which has among other things the most broken C
toolchain I've ever seen).
So ... hooray for up-to
st fortnightly, and short tests at least
weekly. Being able to plan ahead to swap a dying disk is very nice indeed.
--
Craig Ringer
ight
still be other costs.
I can't help wondering why you have all those useless columns in the
first place, and why you have so many identically structured tables.
--
Craig Ringer
opg.connect("dbname=craig")', 'import
psycopg').timeit(number=100);
... and this is still with an interpreted language. I wouldn't be too
surprised if much better again could be achieved with the C/C++ APIs,
though I don't currently feel the desire to write a test for t
Craig Ringer wrote:
[EMAIL PROTECTED] wrote:
It takes more then 0.05s :(
Only this function reduce server speed max to 20request per second.
If you need that sort of frequent database access, you might want to
look into:
- Doing more work in each connection and reducing the number of
here so you should search the archives for information.
--
Craig Ringer
and tablespaces (this requires
considerable understanding of postgresql to use successfully). You
probably want to avoid this unless you really need it, and I doubt it
will help much for in-memory databases anyway.
- Buy a faster computer
--
Craig Ringer
list archives. This was suggested to you very early on in the
discussion.
If you have problems with other queries, how about showing EXPLAIN
ANALYZE for the other queries you're having problems with?
--
Craig Ringer
of updating each of the three
indexes isn't equal.
It might also be worth looking into using partial indexes if some of
your data is "hotter" than others and perhaps more worth the index
update cost.
--
Craig Ringer
?i=2480
http://www.gigabyte.com.tw/Products/Storage/Products_Overview.aspx?ProductID=2180
--
Craig Ringer
s a multiprocess model I imagine individual backends can make
use of a large amount of RAM (as work_mem etc), though the address space
consumed by the shared memory will limit how much it can use.
There's a decent, if Microsoft-specific, article about PAE here:
http://www.microsoft.com/
blocks creating large
numbers of subtransactions behind the scenes and slowing everything to a
crawl.
--
Craig Ringer
u post the full EXPLAIN ANALYZE from the query? This snippet
doesn't even show how records are being looked up.
What about a \d of the table from psql, or at least a summary of the
involved column data types and associated indexes?
--
Craig Ringer