It doesn't make too much difference whether it's a "bug" or not.
Performance problems sometimes end up as bugs and sometimes end up being
treated more like an enhancement; but most of the progress is made on
pgsql-performance regardless.
Regards,
Jeff Davis
--
It's not really
worth the effort to batch if that is the size of the update.
Regards,
Jeff Davis
--
If you use subtransactions, make sure to release them as quickly as
you create them (don't just use ROLLBACK TO, that still leaves the
savepoint there); having 1500 open subtransactions might cause
performance problems elsewhere.
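A minimal sketch of the release-as-you-go pattern (table and savepoint
names are just for illustration):

    BEGIN;
    SAVEPOINT sp;
    UPDATE t SET val = val + 1 WHERE id = 1;  -- step that might fail
    -- on error: ROLLBACK TO SAVEPOINT sp;    -- sp itself still exists
    RELEASE SAVEPOINT sp;                     -- discard it once done
    COMMIT;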
Regards,
Jeff Davis
--
separate
transactions, but only up to a point, after which it levels off. I'm not
sure exactly when that point is, but after that, the downsides of
keeping a transaction open (like inability to remove the previous
version of an updated tuple) take over.
Regards,
Jeff Davis
--
reasonable for your application.
Regards,
Jeff Davis
--
PostgreSQL 9.2 includes important improvements in GiST build times for cases
where the index doesn't fit in memory. Mark, can you please try your
experiments on the 9.2beta and tell us whether that helps you?
Regards,
Jeff Davis
--
You could try pg_resetxlog (but make sure you have a safe backup
first!):
http://www.postgresql.org/docs/9.1/static/app-pgresetxlog.html
And try setting the current transaction ID to just before the delete
ran. Then you may be able to use pg_dump or otherwise export the deleted
rows.
Regards,
Jeff Davis
--
"SELECT 1" might not send the
stats messages right away, and they might end up getting lost.
Regards,
Jeff Davis
--
Can't you get what you
need using a SELECT (or make it a view for convenience)? Fundamentally,
these records aren't changing, you are just trying to interpret them in
the context of the current day. That should be done using a SELECT, not
an UPDATE.
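Schematically (the table and column names here are invented):

    CREATE VIEW orders_with_age AS
        SELECT o.*, (current_date - o.created_on) AS age_in_days
        FROM orders o;

The age is computed at SELECT time, so it's always current without ever
rewriting the rows.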
Regards,
Jeff Davis
--
>
> SELECT * FROM y WHERE EXISTS (SELECT 1 FROM z WHERE z.y_id = y.id)
>
> ... or vice versa depending on which it thought would be faster.
Although those two queries are semantically the same (I think), a lot of
very similar pairs of queries are not equivalent. For instance, if it
Seeing as it's around a couple ms at minimum, it's probably some kind of
IO latency. You could see that by wrapping the statements in a big
transaction (BEGIN/END block) -- I bet the inserts go very quickly and
the final commit takes longer.
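Something along these lines, using psql's \timing (table name invented):

    \timing
    BEGIN;
    INSERT INTO t VALUES (1);
    INSERT INTO t VALUES (2);
    -- ... the rest of the inserts ...
    COMMIT;  -- expect most of the elapsed time to show up here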
Regards,
Jeff Davis
--
> Round_Action is the parent table and has no record in the tables, all
> the records are lying in child tables.
Run EXPLAIN ANALYZE on each of those queries, and post the results.
See http://wiki.postgresql.org/wiki/SlowQueryQuestions for a guide on
how to give the necessary information for others to help you.
That would be an interesting thing to test.
Regards,
Jeff Davis
--
y as a whole.
However, a series of proposals for individual hacks for specific purposes
will probably be rejected. I am in no way implying that you are
approaching it this way -- I am just trying to characterize an approach
that won't make progress.
Regards,
Jeff Davis
--
So the postmaster process is eating 99% CPU. Is there a
chance that it's flooded with connection attempts?
Usually the work is done by backend processes, not the postmaster. The
postmaster just does some management like accepting connections and
starting new processes.
Regards,
Jeff Davis
--
present in the
archive; it must return nonzero when so asked."
So, it is safe to ignore those errors.
Personally, I would use a restore_command that is silent when the file
doesn't exist so that it doesn't pollute your logs. I'm not sure why the
documentation suggests
order; and any total order is allowed.
Radix sort only works for some data types and some total orders.
However, it would be very nice to use radix sorting where it does work.
That would require some extensions to the type system, but it could be
done.
The GPU issue is orthogonal.
Regards,
Jeff Davis
risking 256K data loss per outage".
Regards,
Jeff Davis
--
be less remote than we expect ;)
Regards,
Jeff Davis
--
(again, not intended as scheduler
benchmarks). The server was modified to record a log message every N
page accesses.
Regards,
Jeff Davis
--
What's your opinion on the practical
performance impact? If it doesn't need to be fsync'd, the kernel
probably shouldn't have written it to the disk yet anyway, right (I'm
assuming here that the OS buffer cache is much larger than the disk
write cache)?
Regards,
Jeff Davis
naturally
compact after the rows at the end are removed.
Regards,
Jeff Davis
--
Unless I'm missing something.
Regards,
Jeff Davis
--
The real problem is that there's a difference between these
cases at all? I don't see any reason why the first should be more
expensive than the second.
Regards,
Jeff Davis
--
In the second plan you only see
a single index scan with two quals:
Index Cond: ((ftsbody_body_fts @@ to_tsquery('commonterm'::text)) AND
             (ftsbody_body_fts @@ to_tsquery('spellerror'::text)))
So it's entirely up to GIN how to execute that.
Regards,
Jeff Davis
There might
be a lot of common words you'd like to track.
Perhaps ANALYZE can automatically add the common words above some
frequency threshold to the list?
Regards,
Jeff Davis
--
The only disadvantage is that it's more metadata to manage -- all of the
existing data like dictionaries and stop words, plus this new "common
words". Also, it would mean that almost every match requires RECHECK. It
would be interesting to know how common a word needs to be before i
major
problems.
Personally, I'd like to see the GIT feature finished as well. When I
have time, I was planning to take a look into it.
Regards,
Jeff Davis
--
can look for
opportunities to set hint bits or freeze tuples.
Regards,
Jeff Davis
--
the "ANALYZE" and just do
"EXPLAIN ...".
Then post those results to the list. These tell us what plans PostgreSQL
is choosing and what it estimates the costs to be. If it's the output of
EXPLAIN ANALYZE, it also runs the query and tells us what the costs
really are.
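For example (substitute your own slow query):

    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

The first shows the plan and estimated costs only; the second also
executes the query and reports the actual times.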
On Fri, 2008-09-19 at 11:25 -0700, Jeff Davis wrote:
> What's the n_distinct for start_time?
Actually, I take that back. Apparently, PostgreSQL can't change "x
BETWEEN y AND y" into "x=y", so PostgreSQL can't use n_distinct at all.
That's your problem.
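Schematically, these two are logically equivalent, but only the second
form lets the planner use n_distinct for its estimate:

    SELECT ... WHERE start_time BETWEEN '2008-09-19' AND '2008-09-19';
    SELECT ... WHERE start_time = '2008-09-19';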
index scan with the filter (311ms per loop), the
"player" condition must be very selective, but PostgreSQL doesn't care
because it already thinks that the date range is selective.
Regards,
Jeff Davis
--
memory. And there is already a
system-wide limit on shared memory. So what's the point of such a bad
design?
Regards,
Jeff Davis
--
Killing a process doesn't free shared memory, so it's
just flat out broken.
Regards,
Jeff Davis
--
I'm not sure how much of a gain this is, because right now that could
be accomplished by increasing the statistics for that column (and
therefore all of your distinct values would fit in the MCV list). Also
the statistics aren't guaranteed to be perfectly up-to-date, so an
estimate of zero might
he index?
>
> No, what makes you think that? The index won't change at all in the
> above example. The major problem is, as Scott says, that DROP INDEX
> takes exclusive lock on the table so any other sessions will be locked
> out of it for the duration of your test query.
On Wed, 2008-02-20 at 18:18 +, Matthew wrote:
> On Wed, 20 Feb 2008, Jeff Davis wrote:
> > However, building indexes in parallel would allow better CPU
> > utilization.
>
> We have a process here that dumps a large quantity of data into an empty
> database, much l
> more multi-processed, then
> we can possibly add more parallelism to the restore process.
I like this approach more. I think that pg_restore is the right place to
do this, if we can make the options reasonably simple enough to use.
See:
http://archives.postgresql.org/pgsql-hackers/
a lot of indexes on localized
text using only one core at a time.
Regards,
Jeff Davis
your cores. Build your primary key/unique indexes first,
and then after those are built you can start using the database while
the rest of the indexes are building (use "CREATE INDEX CONCURRENTLY").
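For example (index and table names invented):

    CREATE INDEX CONCURRENTLY orders_customer_idx ON orders (customer_id);

It doesn't block writes to the table while the index builds, unlike a
plain CREATE INDEX.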
Regards,
Jeff Davis
s it). But they can take a lot of processor time to build up
again, especially with localized text.
Regards,
Jeff Davis
bin
> very well at all. On top of that, the database scale should be bigger
He was referring to the CFQ I/O scheduler. I don't think that will
affect pgbench itself, because it doesn't read/write to disk, right?
Regards,
Jeff Davis
Where CFQ may be bad in your case is if you have many backends
commit many transactions asynchronously, and then the WAL writer tries
to make those transactions durable, CFQ might think that the WAL writer
is "unfairly" using a lot of I/O. This is just speculation though.
Regards,
Jeff Davis
On Fri, 2007-12-14 at 19:04 -0500, Tom Lane wrote:
> Jeff Davis <[EMAIL PROTECTED]> writes:
> > Where am I going wrong? Are many of these lseeks no-ops or something?
>
> They're not supposed to be, but if you only tracked seeks and not
> reads or writes, it's hard
000 lseeks in 10 seconds, would it?
Where am I going wrong? Are many of these lseeks no-ops or something?
Regards,
Jeff Davis
On Fri, 2007-12-14 at 11:29 -0800, Jeff Davis wrote:
> "bigtable" has about 60M records, about 2M of which are dead at the time
> of VACUUM. Shared_buffers are about 1GB, and the machine has 4GB of
> memory.
Forgot to mention: version 8.2.4
Regards,
difference.
There's still the hit of performing a CLUSTER, however.
Another option, if you have a relatively small number of topic_ids, is
to break it into separate tables, one for each topic_id.
Regards,
Jeff Davis
It may have something to do with the GROUP BYs, but we won't know
until you post EXPLAIN ANALYZE results.
Regards,
Jeff Davis
> issues about it?
> thanks
> I'm working with postgres 8.0.1,
For the query in question, what would be faster than a seqscan? It
doesn't read the whole table, it only reads until it satisfies the limit
clause.
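You can see that in the plan; it looks roughly like (schematic):

    EXPLAIN SELECT * FROM big_table LIMIT 10;
    -- Limit
    --   ->  Seq Scan on big_table

The scan stops as soon as the limit is satisfied.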
Regards,
Jeff Davis
t can return a requested block. That could mitigate random I/O, if
the I/O is asynchronous and something intelligent (OS or controller) can
schedule it.
Regards,
Jeff Davis
> replicate our dbase
> by using slony?? and why slony??
>
This question should be asked on the slony1-general list, you'll get
more responses there.
The benefit of using slony is that you can read from many servers rather
than just one.
Regards,
Jeff Davis
There's also FAILOVER, which can set a new origin even if the old origin
is completely gone, however you will lose the transactions that haven't
been replicated yet.
To have a new node join the cluster, you SUBSCRIBE SET, and you can MOVE
SET to it later if you want that to b
requested from the OS. The OS
does its own caching, and so many of those reads might come from the OS
buffer cache, and not the disk itself.
Also, if you're concerned with the number since the last server restart,
make sure you have stats_reset_on_server_start set appropriately.
Regards,
Jeff Davis
one customer's DB and execute the query there?
Have you already considered using views with specific privileges to
separate your customers?
Regards,
Jeff Davis
The machine perhaps just can't handle that many FDs in all of those
processes at once.
That is a lot of tables. Maybe a different OS will handle it better?
Maybe there's some way that you can use fewer connections and then the
OS could still handle it?
Regards,
Jeff Davis
g their own statistics? What's the easiest
way to take statistical samples of the data in a table without reading
the entire thing?
Regards,
Jeff Davis
of the last 6 elements
and see if postgresql will use a BitmapAnd to combine them?
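A quick way to check (column and index names invented):

    EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 2;
    -- a combined plan would look roughly like:
    --   BitmapAnd
    --     ->  Bitmap Index Scan on t_a_idx
    --     ->  Bitmap Index Scan on t_b_idx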
Regards,
Jeff Davis
intermediate
results. Do those all have an id? If not, why not? How do you join a
virtual relation to a physical relation if the virtual relation has no
internal id? Is the id field still unique in the result of a join or
Cartesian product?
Regards,
Jeff Davis
manual
process (i.e. open up the database files in a hex editor and look for
your data), but there would be no guarantee.
Regards,
Jeff Davis
disk pages you need to fetch enough to make up for the extra
cost of random I/O.
Regards,
Jeff Davis
huge
amounts of money on NVRAM (or something) to store your data. So identify
the highest-risk scenarios and prevent those first.
Also keep in mind what the cost of failure is: a few hundred bucks more
on a better RAID controller is probably a good value if it prevents a
day of chaos and un
On Tue, 2007-02-27 at 01:11 +0100, Peter Kovacs wrote:
> On 2/26/07, Jeff Davis <[EMAIL PROTECTED]> wrote:
> > On Sun, 2007-02-25 at 23:11 +0100, Peter Kovacs wrote:
> > > A related question:
> > > Is it sufficient to disable write cache only on the disk where pg
the
checkpoint have successfully made it to disk.
If the write cache holds those data pages, and then loses them, there's
no way for PostgreSQL to recover. So use a battery backed cache or turn
off the write cache.
Regards,
Jeff Davis
indexes to return search results very quickly. As someone already
mentioned, it also has ranking features.
Regards,
Jeff Davis
nt, that would answer a lot of my questions. I did some brief
searching and nothing turned up. Do you have a link to the discussion or
the patch?
Regards,
Jeff Davis
Checkpoints cause extra disk I/O. Is there a good way to
measure how much extra I/O (and WAL volume) is caused by the
checkpoints? Also, it would be good to know how much total I/O is caused
by a checkpoint so that I know if bgwriter is doing its job.
Regards,
Jeff Davis
You'll need to provide queries, and also define "slower". Set
log_min_duration_statement to some positive value (I often start with
1000) to try to catch the slow statements in the logs. Once you have
found the slow statements, do an EXPLAIN and an EXPLAIN ANALYZE on those
statements.
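For example, to log anything over one second (the value is in
milliseconds; it can also go in postgresql.conf):

    SET log_min_duration_statement = 1000;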
"Hash tables
are used in hash joins, hash-based aggregation, and hash-based
processing of IN subqueries."
Regards,
Jeff Davis
uses 6 bytes. That means you could set it to 10 times the number
you currently have, and it would still be insignificant.
Regards,
Jeff Davis
on what kind of performance
degradation you're experiencing.
Am I missing something?
Regards,
Jeff Davis
On Wed, 2006-10-18 at 17:35 -0500, Jim C. Nasby wrote:
> On Wed, Oct 18, 2006 at 03:32:15PM -0700, Jeff Davis wrote:
> > On Wed, 2006-10-18 at 17:10 -0500, Jim C. Nasby wrote:
> > > Sorry, don't have the earlier part of this thread, but what about...
> > >
>
a view of a union.
If PostgreSQL could sort the result of a union by merging the results of
two index scans, I think the problem would be solved. Is there something
preventing this, or is it just something that needs to be added to the
planner?
Regards,
Jeff Davis
Inheritance won't help him.
The problem is that he has two indexes, and he needs to find the max
between both of them. PostgreSQL isn't smart enough to recognize that it
can use two indexes, find the max in each one, and find the max of those
two values.
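A manual workaround sketch (table and column names invented): let each
inner max() use its own index, then combine the two values yourself:

    SELECT greatest((SELECT max(ts) FROM table_a),
                    (SELECT max(ts) FROM table_b));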
Regards,
Jeff Davis
UNION. At least, I can't get PostgreSQL to sort over two UNIONed tables
using an index; perhaps I'm missing it.
Regards,
Jeff Davis
On Thu, 2006-10-12 at 14:34 -0500, Jim C. Nasby wrote:
> On Thu, Oct 12, 2006 at 09:42:55AM -0700, Jeff Davis wrote:
> > On Thu, 2006-10-12 at 10:14 -0500, Jim C. Nasby wrote:
> > > The syntax these hints is something arbitrary. I'm borrowing Oracle's
> > > ide
On Thu, 2006-10-12 at 17:28 -0400, Tom Lane wrote:
> [ trying once again to push this thread over to -hackers where it belongs ]
>
> Arjen van der Meijden <[EMAIL PROTECTED]> writes:
> > On 12-10-2006 21:07 Jeff Davis wrote:
> >> On Thu, 2006-10-12 at 19:15 +0
> and augmented with the possibility of
> correlation hinting, or table level hinting.
Well, I wouldn't say "evil". Query hints are certainly against the
principles of a relational database, which separate the logical query
from the physical storage.
Regards,
Jeff Davis
--
or force) the
planner in this way."
I'm not suggesting that we do that, but it seems better than embedding
the hints in the queries themselves.
Regards,
Jeff Davis
(a) Your application could probably make better use of a relational
database; or
(b) The benchmark doesn't represent your application's needs.
Regards,
Jeff Davis
Also, I'll point out that what you want to do is very similar to using
typical relational constructs. Consider whether sub-selects or
aggregates in conjunction with set-returning functions can achieve what
you want. PostgreSQL is smart enough to only read t
On Thu, 2006-09-14 at 21:04 -0400, Michael Stone wrote:
> On Thu, Sep 14, 2006 at 05:52:02PM -0700, Jeff Davis wrote:
> >Any long-running system will have very little "free" memory. Free memory
> >is wasted memory, so the OS finds some use for it.
>
> The important
help a lot.
> It would be most helpfull to have something on the docs to specify what each
> setting affects most such as reads, writes, updates, inserts, etc..
I agree that they could be improved. It gets complicated quickly though,
and it's hard to generalize the effect that a perf
> likely that the insert plans will change.
There aren't multiple INSERT plans (however, there could be a subselect
or something, which would be planned separately). INSERT is INSERT. That
means effective_cache_size will have zero effect on INSERT.
Regards,
Jeff Davis
"kern.ipc.shm_use_phys: 1" on FreeBSD), then that amount of physical
memory will never be available to processes other than postgres. At 2GB,
that still leaves 6GB for the other process, so you should be fine.
Regards,
Jeff Davis
Autovacuum checks whether the number of dead tuples exceeds (reltuples *
autovacuum_vacuum_scale_factor + autovacuum_vacuum_threshold), and if
so, it runs VACUUM. If not, it sleeps. It works the same way for
ANALYZE.
So, in a large table, the scale_factor is the dominant term. In a small
table, the threshold is the dominant term. But both are taken into
account.
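Spelled out, the VACUUM test is roughly:

    dead tuples > reltuples * autovacuum_vacuum_scale_factor
                  + autovacuum_vacuum_threshold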
Regards,
Jeff Davis
--
on all the
tables, because it does not have to read the entire table and doesn't
interfere with the rest of the operations.
Regards,
Jeff Davis
> > It doesn't make sense at all to set it below 1.0, since that
> > is saying it's cheaper to get a random page than a sequential one.
> >
> > What was your original random_page_cost, and what is the new value you
> > set it to?
> >
> > Regards,
> >Jeff Davis
It doesn't make sense at all to set it below 1.0, since that
is saying it's cheaper to get a random page than a sequential one.
What was your original random_page_cost, and what is the new value you
set it to?
Regards,
Jeff Davis
the new?
Also, I suggest that you upgrade to 8.1. 7.4 is quite old, and many
improvements have been made since then.
Regards,
Jeff Davis
If the database will be more of an archive, the requirements might be quite
reasonable. However, if or when you do need to search through that data,
expect it to take a long time.
Regards,
Jeff Davis
etc., to add up to more than 2GB on a system with 4GB of RAM.
Is there a general consensus on the matter, or is it highly application-
dependent? I am not doing any huge amount of 64-bit arithmetic.
I am using Woodcrest, not Opteron.
Regards,
Jeff Davis
> which we are using is 7.2.4
>
I highly recommend upgrading if at all possible. That's quite an old
version.
Hope this helps,
Jeff Davis
On Thu, 2006-08-24 at 09:21 -0400, Merlin Moncure wrote:
> On 8/22/06, Jeff Davis <[EMAIL PROTECTED]> wrote:
> > On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:
> > Very interesting. I always hear that people avoid RAID 5 on database
> > servers, but I suppos
On Wed, 2006-08-23 at 21:50 -0400, Michael Stone wrote:
> On Wed, Aug 23, 2006 at 03:23:03PM -0700, Jeff Davis wrote:
> >Also, do ext2 or UFS without soft updates run the risk of losing or
> >corrupting my data?
>
> I suggest you check the list archives; there
determined that
JFS was a good choice. However, I assume things have changed somewhat
since then. Does anyone have a pointer to some newer results?
Regards,
Jeff Davis
> try to run the same on the 2950. (The 2950 is a 2-cpu dual core 3.0
> ghz box, 8GB ram with 6 disks, running FreeBSD 6.1 amd64 RELEASE if
> you're interested in picking a "fair" opteron equivalent ;)
>
Based on your results, I think the Intels should be fine. Does
> not migrated from 7.x or anything
> like that).
Usually this is caused by either:
(1) You're not vacuuming as a superuser, so it's not able to vacuum
everything.
(2) You have a long-running transaction that never completed for some
strange reason.
Hope this helps,
or are Opterons significantly better?
I may go for more storage as well (i.e. getting 300GB disks), but I am
still determining the potential need for storage. I can get more RAM at
a later date if necessary also.
Regards,
Jeff Davis
Any other suggestions are appreciated. Is there a common place to look
for hardware suggestions (like a postgresql hardware FAQ)?
Regards,
Jeff Davis
for WAL on 3Par, ext3
Does your disk controller have battery-backed writeback cache? How much?
> With PostgreSql 8.1.4
>
> We don't have i/o bottle neck.
>
Well, chances are PostgreSQL is waiting for fsync, which means you do
have an I/O bottleneck (however, you're not