incorporate some
perhaps-configurable amount of risk aversion in its choices.
regards, tom lane
PS: please do not top-post, and do not quote the entire darn thread
in each message.
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make ch
g/wiki/Slow_Query_Questions
regards, tom lane
Justin Pryzby writes:
> On Fri, Nov 10, 2017 at 04:19:41PM -0500, Tom Lane wrote:
>> One idea is to say that relpages = reltuples = 0 is only the state that
>> prevails for a freshly-created table, and that VACUUM or ANALYZE should
>> always set relpages to at least 1 even if
it's not like that's going
to be a noticeable percentage increase in the row width ...
> But is there a better way (I don't consider adding a row of junk to be a
> significant improvement).
Not ATM.
regards, tom lane
two. If so,
turning on log_lock_waits might provide some useful info.
regards, tom lane
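As a minimal sketch of the settings mentioned (values illustrative; log_lock_waits needs superuser to change per-session):

```sql
-- Log a message whenever a lock wait exceeds deadlock_timeout.
-- Both can go in postgresql.conf; shown here as session-level SETs.
SET log_lock_waits = on;
SET deadlock_timeout = '1s';  -- the wait threshold; 1s is the default
```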
me doesn't seem very bright either.
Changing this in back branches might be too much of a behavioral change,
but it seems like we oughta change HEAD to apply standard selectivity
estimation to the HAVING clause.
regards, tom lane
hit due to a bad plan.
An alternative you might consider, if simplifying the input queries
is useful, is to put the fixed conditions into a view and query the
view instead. That way there's not an enforced evaluation order.
regards, tom lane
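The view suggestion might look like this (table, column, and condition names invented for illustration):

```sql
-- Fold the fixed conditions into a view:
CREATE VIEW recent_orders AS
    SELECT * FROM orders
    WHERE order_date > now() - interval '30 days';

-- Queries then add their own conditions; the planner flattens the view
-- into the outer query, so no evaluation order is enforced:
SELECT customer_id, total FROM recent_orders WHERE customer_id = 42;
```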
.
Whether that's got anything directly to do with your original problem is
hard to say. Joins to subqueries, which we normally lack any stats for,
tend to produce pretty bogus selectivity numbers in themselves; so the
original problem might've been more of that nature.
Jim Nasby writes:
> On 10/8/17 2:34 PM, Tom Lane wrote:
>> Why has this indexscan's cost estimate changed so much?
> Great question... the only thing that sticks out is the coalesce(). Let
> me see if an analyze with a higher stats target changes anything. FWIW,
> the
>> rows=508 loops=1)
I think the reason it's discarding the preferable plan is that, with this
huge increment in the estimated cost getting added to both alternatives,
the two nestloop plans have fuzzily the same total cost, and it's picking
the one you don't want on the basis o
er_Id) ss
WHERE Ma.User_Id = ss.User_Id AND
Ma.Bb_Open_Date = ss.max
GROUP BY Ma.User_Id
HAVING COUNT(*) > 1;
This is still not going to be instantaneous, but it might be better.
It's possible that an index on (User_Id, Bb_Open_Date) would help,
but I'm not sure.
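The suggested index, as a sketch (the underlying table name isn't shown in the excerpt, so a placeholder is used):

```sql
-- "ma_table" is a placeholder for the table aliased as Ma above:
CREATE INDEX ON ma_table (User_Id, Bb_Open_Date);
ANALYZE ma_table;  -- refresh stats so the planner considers the index
```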
o the right, so that the original
upper-level key splits would become impossibly unbalanced. This isn't
all that unusual a situation; consider timestamp keys for instance,
in a table where old data gets flushed regularly.
regards, tom lane
ut I really doubt
you want the side-effects of that.
regards, tom lane
ut it
didn't get done for v10.
If we do look at that as a substitute for "make an expression index just
so you get some stats", it would be good to have a way to specify that you
only want the standard ANALYZE stats on that value and not the extended
ones.
JSON columns are great for storing random unstructured data, but they are
less great when you want to do relational-ish things on subfields.
regards, tom lane
transaction.
regards, tom lane
Neto pr writes:
> I need to know the height of a B-tree index (level of the leaf node
> farthest from the root).
pageinspect's bt_metap() will give you that --- it's the "level"
field, I believe.
regards, tom lane
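For example (index name hypothetical):

```sql
CREATE EXTENSION IF NOT EXISTS pageinspect;
-- "level" is the height of the B-tree; leaf pages are at level 0.
SELECT level FROM bt_metap('my_btree_index');
```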
u about that. If this is under a Gather node,
I believe that the numbers include time expended in all processes.
So if you had three or more workers these results would make sense.
regards, tom lane
s and you're using C locale on the faster machine but
some non-C locale on the slower. strcoll() is pretty darn expensive
compared to strcmp() :-(
regards, tom lane
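If switching the whole database to C locale isn't an option, one way to get strcmp()-speed comparisons on a hot column is a C-collation index (names invented for illustration):

```sql
-- Comparisons through this index use byte-wise C-locale semantics:
CREATE INDEX ON items (name COLLATE "C");
-- Queries must request the same collation to use it:
SELECT * FROM items ORDER BY name COLLATE "C";
```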
e is it took the system an unusually long time to
notice that it needed to cancel the autovacuum to avoid a deadlock
with the CREATE INDEX. Was either process consuming a noticeable
amount of CPU during that interval? Do you have deadlock_timeout
set higher than the default 1s?
ot
> null;
> (I was actually expecting that commented out index to exists, but for some
> reason it didn't)
It would've done the job if you'd had it, I believe.
regards, tom lane
to UNION in general is
difficult because of the possibility of duplicates. I wouldn't
recommend holding your breath waiting for the planner to do this
for you.
regards, tom lane
The circa-tenth-of-a-second savings on the server side
is getting swamped by client-side processing.
It's possible that pgAdmin4 has improved matters in this area.
regards, tom lane
hat patterns you're looking for, it's possible that a
trigram index (contrib/pg_trgm) would work better.
regards, tom lane
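A trigram index, sketched with invented table/column names:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;
-- GIN trigram index; unlike a btree, it can help LIKE/ILIKE patterns
-- with leading wildcards:
CREATE INDEX ON docs USING gin (body gin_trgm_ops);
SELECT * FROM docs WHERE body LIKE '%some pattern%';
```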
Index Cond: ((f1 >= 'dr7g'::text) AND (f1 < 'dr7h'::text))
-> Bitmap Index Scan on loc_f1_key (cost=0.00..4.22 rows=7 width=0)
Index Cond: ((f1 >= 'dr7e'::text) AND (f1 < 'dr7f'::text))
(8 rows)
Whether this is worth the trouble depends a lot on your data distribution,
but any of them are probably better than the seqscan you're no doubt
getting right now.
regards, tom lane
erating
with its stupid cap on. Usually people also increase from_collapse_limit
if they have to touch either, but I think for this specific query syntax
only the former matters.
regards, tom lane
xing something
that's unrelated to the predicate condition, but is also needed by the
query you want to optimize.
regards, tom lane
p one eye firmly fixed on whether it slows planning down even in cases
where no benefit ensues. In the meantime, I'm not sure that there are
any quick-hack ways of materially improving the situation :-(
regards, tom lane
s; so that
would require inserting them manually into the DO text, with all the
attendant hazards of getting-it-wrong.
We've speculated before about letting DO grow some parameter handling,
but it's not gotten to the top of anyone's to-do list.
regards, tom lane
icantly
more efficient than one-use functions. Even disregarding the
pg_proc update traffic, plpgsql isn't going to shine in that usage
because it's optimized for repeated execution of functions.
regards, tom lane
avoid bad bloat in pg_proc.
If you're intending that these functions be use-once, it's fairly unclear
to me why you bother, as opposed to just issuing the underlying SQL
statements.
regards, tom lane
sions involving columns of the table. So the
first clause loses because it's got variables on both sides, and the
second loses because the LHS expression is not what the index is on.
You could build an additional index on that expression, if this shape
of query is important enough to you to
t be usable
right away. Telling whether your own transaction can use it is harder
from SQL level, but if you're in the same transaction that made the
index then the answer is probably always "no" :-(
regards, tom lane
s, what you need to do is create
the gin index before you start populating the table. Fortunately, that
shouldn't create a really horrid performance penalty, because gin index
build isn't optimized all that much anyway compared to just inserting
the data serially.
regards,
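The recommended load order, as a sketch with invented names:

```sql
CREATE TABLE docs (payload jsonb);
CREATE INDEX ON docs USING gin (payload);  -- index exists before the load
COPY docs FROM '/path/to/data.copy';       -- rows maintain the index as they arrive
```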
fe.
Looks like a round-tuit-shortage issue rather than anything fundamental.
regards, tom lane
uld improve with
a better estimate. Maybe you need to increase the stats target for
that table ... or maybe it just hasn't been ANALYZEd lately?
regards, tom lane
slowdown?
regards, tom lane
rrectly.) But you do have gin_pending_list_limit, so see
what that does for you. Note you can set it either globally or per-index.
regards, tom lane
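For instance (index name hypothetical, sizes illustrative):

```sql
-- Globally, in postgresql.conf or per-session:
SET gin_pending_list_limit = '4MB';
-- Or per-index, as a storage parameter (value in kB):
ALTER INDEX my_gin_index SET (gin_pending_list_limit = 4096);
```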
lly I'd try tweaking gin_pending_list_limit first, if you have
a version that has that ... but YMMV.
regards, tom lane
ely certain that that has exactly the same
semantics (-ENOCAFFEINE), and it might still be none too quick.
regards, tom lane
r this case,
but it doesn't look much like a typical use-case to me.
regards, tom lane
t
excited enough about it to do that.
regards, tom lane
with no disk buffer, as hoped
Seems odd. Is your cursor just on "SELECT * FROM table", or is there
some processing in there you're not mentioning? Maybe it's a cursor
WITH HOLD and you're exiting the source transaction?
regards, tom lane
the whole database
can be expected to stay RAM-resident at all times, it'd be a good idea
to reduce random_page_cost to reflect that. The default planner cost
settings are meant for data that's mostly on spinning rust.
regards, tom lane
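E.g. (the exact value is a judgment call; something near 1 is common advice for RAM- or SSD-resident data):

```sql
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();  -- make the new setting take effect
```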
e other hand,
select * from generate_series(1,1);
does dump the data into a temp file, something we ought to work on
improving.
regards, tom lane
tructure corresponds to a good join order.
regards, tom lane
> 2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling authentication due
> to timeout
So ... what authentication method are you using?
regards, tom lane
plicit sort has to be inserted,
reducing the amount of data passing through the sort would be worth doing;
but in the general case that's unproven.
regards, tom lane
ything in plpgsql is a prepared query).
It's a trick that's likely to bite you eventually though.
regards, tom lane
xplaining when the optimizations noted in that paragraph cannot occur -
> and probably examples of both as well since it's not clear when it can occur.
If you want an exact definition of when things will happen or not happen,
start reading the source code. I'm loath to document small optimiz
an uncorrelated sub-query, which gets evaluated
just once per run. But the overhead associated with that mechanism is
high enough that forcing it automatically for every stable function would
be a loser. I'd recommend doing it only where it *really* matters.
regar
maybe it's s/390?). And I can't find any
documentation suggesting that glibc supports turning off gradual
underflow, either.
Perhaps you're using some extension that fools around with the
hardware floating-point options?
regards, tom lane
arison (char = char operator).
No type conversion step needed, so it's faster.
regards, tom lane
you don't need to use
SELECT DISTINCT? The sort/unique steps needed to do DISTINCT are
eating a large part of the runtime, and they also form an optimization
fence IIRC.
regards, tom lane
Tomas Vondra writes:
> On 12/10/2016 12:51 AM, Tom Lane wrote:
>> I tried to duplicate this behavior, without success. Are you running
>> with nondefault planner parameters?
> My guess is this is a case of LIMIT the matching rows are uniformly
> distributed in the in
Eric Jiang writes:
> We aren't using any special planner settings - all enable_* options are "on".
No, I'm asking about the cost settings (random_page_cost etc). The cost
estimates you're showing seem impossible with the default settings.
Eric Jiang writes:
> I have a query that I *think* should use a multicolumn index, but
> sometimes isn't, resulting in slow queries.
I tried to duplicate this behavior, without success. Are you running
with nondefault planner parameters?
regards, tom lane
www.postgresql.org/docs/current/static/indexes.html
particularly 11.3 - 11.5.
regards, tom lane
of their
selectivity estimation routines.
regards, tom lane
g as long as the group counts are similar,
so maybe you could post a script that generates junk test data that
causes this, rather than needing 27M rows of real data.
regards, tom lane
of this is actually on pgbench changes not the server. But in
the end, what you're measuring here is mostly contention, and you'd need
to alter the test parameters to make it not so. The "Good Practices"
section at the bottom of the pgbench reference page has some tips about
tha
(cost=0.00..18.80 rows=880 width=64) (actual time=0.014..0.016 rows=4
> loops=1)
regards, tom lane
o get mutated into a semijoin, but in this
example that couldn't happen anyway, so it's not much of an objection.
regards, tom lane
ndices to work either for
> Array columns with Like. Am I wrong?
Plain GIN index, probably not. A pg_trgm index could help with LIKE
searches, but I don't think we have a variant of that for array columns.
Have you considered renormalizing the data so that you don't have
arrays
e?
There were several different changes in the planner's number-of-distinct-
values estimation code in 9.6, so maybe the cause of the difference is
somewhere around there.
regards, tom lane
blem is "new server won't use hashagg", I'd wonder whether
the work_mem setting is the same, or whether maybe you need to bump
it up some (the planner's estimate of how big the hashtable would be
might have changed a bit).
regards, tom lane
selectivity of the conditions on "echo_tango('seven_november'::text,
four_charlie)". Reformulating that, or maybe making an index on it just
so that ANALYZE will gather stats about it, could help.
regards, tom lane
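A sketch of the index idea (the identifiers are the obfuscated ones from the post; "t" stands in for the real table, and the function must be IMMUTABLE to be indexable):

```sql
-- Note the extra parentheses required around an indexed expression:
CREATE INDEX ON t ((echo_tango('seven_november'::text, four_charlie)));
ANALYZE t;  -- ANALYZE now gathers statistics on the indexed expression
```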
in the table, I'm not real sure why you'd need an MCV
list. Could we see the actual problem query (and the other table
schemas), rather than diving into the code first?
regards, tom lane
index on it.
regards, tom lane
a plain
seqscan. That's a pretty silly plan, which in most cases you would
not get if you hadn't forced it.
regards, tom lane
could be forgiven for wondering if
these were really against the same data.
regards, tom lane
y overcommit - would that also explain the
> index issues we were seeing before we were seeing the crashes?
Unlikely. I'm guessing that there's some sort of race condition involved
in parallel restore with -c, but it's not very clear what.
regards,
lled
This is probably the dreaded Linux OOM killer. Fix by reconfiguring your
system to disallow memory overcommit, or at least make it not apply to
Postgres, cf
https://www.postgresql.org/docs/9.5/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT
regards, tom lane
it seems to perform fine when I force
it to use that index anyway", the answer may be that you need to
adjust random_page_cost. The default value is OK for tables that
are mostly sitting on spinning rust, but if your database is
RAM-resident or SSD-resident you probably want a value closer t
size anytime the hash tables got too big.
regards, tom lane
yway.
Maybe it did, but threw it away on some bogus cost estimate. If you could
produce a self-contained test case, I'd be willing to take a look.
regards, tom lane
ther
that would help your real application as opposed to this test case.
regards, tom lane
, everything since 9.0 seems to be willing
to consider the type of plan you're expecting.
regards, tom lane
Postgres bug. Unlike the
situation with data files, it's very hard to see how PG could be holding
onto a reference to an unused log file. It only ever writes to one log
file at a time.
regards, tom lane
essing you can, please use
"lsof" or similar tool to see which Postgres process is holding open
references to lots of no-longer-there files.
regards, tom lane
if your script doesn't want
to wait around then an extra ANALYZE is the ticket.
regards, tom lane
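I.e., something like (names invented):

```sql
COPY big_table FROM '/path/to/data.copy';
ANALYZE big_table;  -- collect stats now instead of waiting for autovacuum
```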
rs in the abstract.
If you want to make useful engineering tradeoffs you have to talk about
specific data sets and available hardware.
regards, tom lane
able is not a good thing - IndexScan is touching 10x more
> pages and in a typical situation those are cold.
In that case you've got random_page_cost too far down. Values less than
the default of 4 are generally only appropriate if the bulk of your
database stays in RAM.
s, that will just
add cost.
regards, tom lane
s if your table is bigger
than RAM, which it apparently is.
regards, tom lane
ed more performance, look
into SSDs.
(If you have storage kit for which you'd expect better performance than
this, you should start by explaining what it is.)
regards, tom lane
ces 78k rows not 1, it'd likely do something
smarter at the outer antijoin.
I have no idea why that estimate's so far off though. What PG version is
this? Stats all up to date on these two tables? Are the rows excluded
by the filter condition on "creditnote" significantly
_part(split_part((s.attvalue)::text, ' '::text, 1), '.'::text, 1)
since it's the join of that to e.name that seems to be actually selective.
(The planner doesn't appear to realize that it is, but ANALYZE'ing after
creating the index should fix that.)
good reason.
regards, tom lane
Jim Nasby writes:
> On 7/19/16 3:10 PM, Tom Lane wrote:
>> It's not so much that people don't care, as that it's not apparent how to
>> improve this without breaking desirable system properties --- in this
>> case, that functions are black boxes so far as ca
this
case, that functions are black boxes so far as callers are concerned.
regards, tom lane
age for each row retrieved says
that the data you need is pretty badly scattered, so constructing an index
that concentrates everything you need into one range of the index might
be the ticket.
Either of these additional-index ideas is going to penalize table
insertions/updates, so keep an eye on that e
unds very much like a timeout expiring someplace, and I have
no idea where.
regards, tom lane
w if it would be practical for you at all, but if you could
attach to a process that's stuck like this with a debugger and get a stack
trace, that would probably be very informative.
https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend
rega
ich one is reasonable?
The lower number sounds a lot more plausible for laptop-grade hardware.
If you weren't using an SSD I wouldn't believe that one was doing
persistent commits either.
regards, tom lane
st place.
Thanks for taking the trouble to check this!
regards, tom lane
esql.org/message-id/15245.1466031608%40sss.pgh.pa.us
I wonder though whether the rewrite will fix your example. Could you
either make some test data available, or try HEAD + aforesaid patch
to see if it behaves sanely on your data?
regards, tom lane
Adam Brusselback writes:
> Gah, hit send too soon...
Hm, definitely a lot of foreign keys in there. Do the estimates get
better (or at least closer to 9.5) if you do
"set enable_fkey_estimates = off"?
regards, tom lane
s?
If it's not that, I wonder whether the misestimates are connected to the
foreign-key-based estimation feature. Are there any FKs on the tables
involved? May we see the table schemas?
regards, tom lane
. If you can't persuade the app to
label the comparison value as bpchar not text, the easiest fix would be
to create an additional index on "guid::text".
regards, tom lane
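That workaround, sketched ("t" stands in for the real table):

```sql
-- An expression index on the cast; comparisons written as guid::text = '...'
-- can then use it:
CREATE INDEX ON t ((guid::text));
```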