For JDBC there are certain prerequisites for setFetchSize to work, e.g.
using forward-only result sets and running inside a transaction (autocommit off).
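A minimal sketch of what this looks like with the PostgreSQL JDBC driver (the URL, credentials and table name are placeholders; this needs a live server to run):

```java
import java.sql.*;

// With autocommit on, or a scrollable result set, or fetch size 0, the
// driver buffers the entire result set in memory instead of using a cursor.
public class CursorFetchDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret")) {
            conn.setAutoCommit(false); // cursors only live inside a transaction
            try (Statement st = conn.createStatement(
                    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                st.setFetchSize(1000); // rows per round trip, not all at once
                try (ResultSet rs = st.executeQuery("SELECT * FROM big_table")) {
                    while (rs.next()) {
                        // process the row here
                    }
                }
            }
            conn.commit();
        }
    }
}
```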
Tue, Feb 21, 2017 at 09:06 John Gorman wrote:
> My experience with cursors in PostgreSQL with Java has been to stay away
> from them. We support 2 databases with our product, Pos
ny useful information .
>
> Thanks,
>
> Br,
> Vuko
> ----------
> *From: *"Vitalii Tymchyshyn"
> *To: *"Vucomir Ianculov" , "Tom Lane" <
> t...@sss.pgh.pa.us>
> *Cc: *pgsql-performance@postgresql.org
> *Sent: *Wednes
Just a wild guess, but did you check your random source? We had similar
problems in Oracle and had to switch to /dev/urandom. It can be done with a
system variable setting.
On Wed, Feb 1, 2017, 7:52 AM Vucomir Ianculov wrote:
> can anyone help me with my problem?
> i'm really don't know from whe
Hi.
In SQL, "null = any value" evaluates to unknown (never true), so the optimizer
can safely skip nulls from either side of an inner join.
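A minimal illustration (table and values invented):

```sql
CREATE TABLE a (x int);
CREATE TABLE b (x int);
INSERT INTO a VALUES (1), (NULL);
INSERT INTO b VALUES (1), (NULL);

-- NULL = anything is unknown, never true, so the NULL rows on both
-- sides can never satisfy the join condition and may be skipped:
SELECT a.x, b.x FROM a JOIN b ON a.x = b.x;  -- returns only (1, 1)
```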
Best regards, Vitalii Tymchyshyn
NULL is still a value that may be paired with a NULL in a.a
>
> The only optimization I could see is if the a.a col
>
> Have you considered having many databases (e.g. 100) and possibly many
postgresql servers (e.g. 10) started on different ports?
This would give you 1000x fewer tables per db.
>
>>
Well, as I can see it was just a few phrases, unless I miss something. Maybe
it's worth bringing it to -hackers for a wider discussion?
Best regards, Vitalii Tymchyshyn
Sat, Jan 30, 2016 12:31 David Rowley wrote:
> On 31 January 2016 at 06:14, Vitalii Tymchyshyn wrote:
> > It m
to all queries right away".
Another good option is to threshold against cumulative query time, e.g. for a
query that runs many times at 0.5 ms each, it may be beneficial to spend a few
extra ms of planning to get it to 0.2 ms per run.
Best regards, Vitalii Tymchyshyn
Sat, Jan 30, 2016 10:57 David Rowley wrote:
> On 3
, Vitalii Tymchyshyn
Thu, Jan 21, 2016 19:30 Phil S wrote:
> I am running Postgresql on a Windows Server 2008 server. I have noticed
> that queries have very high planning times now and then. Planning times go
> down for the same query immediately after the query runs the first time,
> bu
BTW: maybe it would be feasible in the future to perform partition exclusion
during execution? This would be a very neat feature.
Regards, Vitalii Tymchyshyn
Tue, Oct 27, 2015 15:03 David G. Johnston wrote:
> On Tue, Oct 27, 2015 at 2:29 PM, GMail wrote:
>
>> I have partitioned a la
Hi.
How would a BBU cache help you if it lies about fsync? I suppose any RAID
controller removes data from the BBU cache after it was fsynced by the drive.
As far as I know, there is no other "magic command" for the drive to tell the
controller that the data is safe now and can be removed from the BBU cache.
Tue, Jul 7, 2
It may be even easier. AFAIR, it's possible to just tell the OS the expected
allocation without actually doing it. This way nothing changes for general
code; it's only needed to specify the expected file size on creation.
Please see FILE_ALLOCATION_INFO:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa36421
It looks pretty much like partitioning. You should check partitioning
recipes.
Thu, May 21, 2015 06:41 Florian Lohoff wrote:
> Hi,
> i stumbled over something i cant seem to find a workaround. I create a
> view like
>
> create view v_test as
> select a,b
> from big_table
>
You can set it for the db user or use stored proc.
Best regards, Vitalii Tymchyshyn
Wed, Mar 18, 2015 14:48 Vivekanand Joshi wrote:
> The issue here is that the queries are running inside a Jasper Reports. So
> we cannot set this only for a one single query.
>
> We are accessing our
s on one image with another
> 999 images overlapping it means 10000 *1000^2).
>
>
>
> Best regards,
>
>
>
> Jiri Nadvornik
>
>
>
> *From:* tiv...@gmail.com [mailto:tiv...@gmail.com] *On Behalf Of *Vitalii
> Tymchyshyn
> *Sent:* Sunday, July 27, 2014 8:06 A
each axis and group the results.
I think you will have most observations grouped this way and can then use your
regular algorithm to combine the results.
Best regards, Vitalii Tymchyshyn
(current_user or special_value) or (read_by = special_value and not
exists()). Note that the optimizer may have problems with such a complex
expression and you may need to use "union all" instead of "or". Partial
index(es) for the null/special value may help.
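A sketch of the "union all" rewrite referred to here (table, column and index names are invented):

```sql
-- The single OR condition may defeat index usage:
SELECT * FROM messages
 WHERE read_by = current_user OR read_by IS NULL;

-- Equivalent form where each branch can use its own index
-- (the branches are mutually exclusive, so UNION ALL is safe):
SELECT * FROM messages WHERE read_by = current_user
UNION ALL
SELECT * FROM messages WHERE read_by IS NULL;

-- Partial index serving the special-value branch cheaply:
CREATE INDEX messages_unread_idx ON messages (id) WHERE read_by IS NULL;
```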
Best regards, Vitalii Tymchyshyn
Can it be a hardware problem with I/O? Try finding out which file the stuck
table is in and do a simple fs copy. Or simply copy the whole pg data
directory.
Sep 15, 2013 04:54, "Craig James" wrote:
> On Sat, Sep 14, 2013 at 11:36 AM, bricklen wrote:
>
>> On Sat, Sep 14, 2013 at 11:28 AM, Craig
rticle for, say, dzone. The
problem can be quite tricky in an MVCC database and the choice must be made wisely.
Best regards, Vitalii Tymchyshyn
2013/9/2 Andrew Dunstan
>
> On 09/02/2013 05:53 AM, Łukasz Walkowski wrote:
>
>> On 1 wrz 2013, at 05:10, Vitalii Tymchyshyn wrote:
>&g
t's harder to mess with the mapping and values are still
more or less readable with a simple select. But it can be less efficient than
(a).
c) Do a mixed approach with a mapping table, loaded at startup into application
memory. This would be an optimization in case you run into optimizer
troubles.
Best regards, Vitalii Tymchyshyn
I'd try to check why the discounts are different. A join with 'or' should work.
Build (one query) EXCEPT ALL (another query) and check some rows from the
result.
Jul 13, 2013 01:28, "Brian Fehrle" wrote:
> On 07/11/2013 06:46 PM, Josh Berkus wrote:
>
>> Brian,
>>
>> 3. I'm trying to eliminate the union,
Analyze your temp tables after filling and before using them!
May 17, 2013 17:27, "Sékine Coulibaly" wrote:
> Oh, sorry, overlooked that part.
> Maybe refreshing stats with VACUUM FULL ?
>
>
> 2013/5/17 Robert Emery
>
>> Hi Sékine,
>>
>> Unfortunately I'm not trying to empty the table completely, ju
Well, could you write a trigger that would do what you need? AFAIR analyze
data is stored regardless of transaction boundaries. You could store some
counters in session vars and issue an explicit analyze when enough rows
have been added.
May 7, 2013 08:33, "Mark Kirkwood" wrote:
> On 07/05/13 18:10, Simo
order by p.PERSON_ID asc;
>
> So my question is this: Is there a way to get the Postgres optimizer
> "rewrite" the query execution plan to use the equivalent, but much more
> efficient latter form?
>
> And before you ask; yes, there are better ways of writing this query. But
> we're dealing with Java developers and Hibernate here. It's a legacy
> system, and the policy is to avoid hand-written SQL, so for the moment
> let's not go down that rabbit hole, and focus on the issue of what the
> optimizer can and cannot do.
>
--
Best regards,
Vitalii Tymchyshyn
I'd say you either have an overloaded application (try increasing the timeout),
a too-small pool (increase the pool) or connection leaks (find and fix them).
Apr 18, 2013 23:45, "itishree sukla" wrote:
> Dear All,
>
> Can any one please help me to fix this issue, i am getting this error from
> our application, cu
2013/2/22 jackrg
> Tuning Postgre is not an option, as the instance
> is provided by Heroku and as far as I know cannot be tuned by me.
>
> Most tuning parameters can be set on a per-query basis, so you can issue
alter database set param=value
to have the same effect as if it was set through postgresql.conf
Since a CTE is already an optimization fence, you can go further and make it a
temporary table.
Create table; analyze; select should make the optimizer's work much easier.
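A sketch of that pattern (all names invented):

```sql
-- Instead of: WITH tmp AS (SELECT ...) SELECT ... FROM tmp ...
CREATE TEMP TABLE tmp AS
  SELECT customer_id, sum(amount) AS total
    FROM orders
   GROUP BY customer_id;

ANALYZE tmp;  -- gives the planner real row counts and statistics

SELECT * FROM tmp WHERE total > 1000;
```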
Feb 18, 2013 18:45, "John Lumby" wrote:
>
> On 2012-10-09 23:09:21
> Tom Lane wrote:
> >
>
>
> > re subject Why am I getting great/ter
to quotes_2012_09_10 select * from new where
> cast(new.received_time
> > as date) = '2012-09-10' ;
> > insert into quotes_2012_09_11 select * from new where
> cast(new.received_time
> > as date) = '2012-09-11' ;
> > ...
>
> It has only one problem - PostgreSQL has not relations NEW and OLD for
> statements triggers.
>
> Regards
>
> Pavel
>
--
Best regards,
Vitalii Tymchyshyn
lvalue=rvalue2 then out2 ...
end" then this can be optimized and this would benefit many users, not only
ones that use partitioning.
2012/12/28 Stephen Frost
> Vitalii,
>
> * Vitalii Tymchyshyn (tiv...@gmail.com) wrote:
> > There is switch-like sql case:
> [...]
> > I
BTW: if "select count(*) from new" is fast, you can even choose the
strategy in the trigger depending on the insert size.
2012/12/28 Vitalii Tymchyshyn
> There is switch-like sql case:
> 39.6.2.4. Simple CASE
>
> CASE search-expression
> WHEN expression [,
nditions required before
> it'd actually be more efficient to use, though perhaps getting rid of
> the expression evaluation (if that'd be possible) would make up for it.
>
> Thanks,
>
> Stephen
>
--
Best regards,
Vitalii Tymchyshyn
Well, you don't need to put anything down. Most settings that change
planner decisions can be tuned on a per-query basis by issuing set commands in
the given session. This should not affect other queries more than is needed
to run the query in the way the planner chooses.
Best regards, Vitalii Tymchyshyn
Well, it seems that my data may be outdated, sorry for that. I've just
checked performance numbers on Tom's hardware and it seems that the best SSDs
really do 500 MB/s. Some others do 100. So, I'd say one must choose wisely
(as always :-) ).
Best regards,
Vitalii Tymchyshyn
Dec 1, 20
ve operations on SSD. For the majority of databases,
> these are the logfiles. But you should investigate where the optimum is for
> your situation.
>
>
--
Best regards,
Vitalii Tymchyshyn
Oh, yes. I don't imagine a DB server without RAID+BBU :)
When there is no BBU, an SSD can be handy.
But you know, SSDs are worse at linear read/write than HDDs.
Best regards, Vitalii Tymchyshyn
2012/11/30 Mark Kirkwood
> Most modern SSD are much faster for fsync type operations than a spinnin
Actually, what's the point in putting logs on SSD? SSDs are good for random
access and logs are accessed sequentially. I'd put table spaces on SSD and
leave logs on HDD.
Nov 30, 2012 04:33, "Niels Kristian Schjødt" wrote:
> Hmm I'm getting suspicious here. Maybe my new great setup with the SSD's
ith others, that you should check. But
remember, joining two tables with millions of records is never cheap :)
Best regards, Vitalii Tymchyshyn
2012/11/28 Niels Kristian Schjødt
> Hi,
>
> I'm on the hunt for some solid knowledge on a theoretical level about the
> performance of
I'd also add ANALYZED/NOT ANALYZED. This should force it to behave like
'create table, analyze, select' with statistics used in the second query plan.
P.S. Defaults can be configurable.
Nov 20, 2012 02:22, "Gavin Flower" wrote:
> On 15/11/12 15:03, Peter Geoghegan wrote:
>
>> On 15 November 2012 01:46
Hello.
This may be a wrong parameter type, like using setObject(param, value)
instead of setObject(param, value, type), especially if the value passed is a
string object. AFAIR the index may be skipped in this case. You can check by
changing the statement to "delete from xxx where xxx_pk=?::bigint". If it
server side.
Best regards, Vitalii Tymchyshyn
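For illustration, a hedged sketch of the typed bind (the helper name is invented; "xxx"/"xxx_pk" follow the statement above; this needs a live connection to run):

```java
import java.sql.*;

// Binding the parameter with an explicit SQL type so the driver sends a
// bigint rather than text, keeping the primary-key index usable.
class TypedDelete {
    static int deleteByPk(Connection conn, Object pk) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("delete from xxx where xxx_pk = ?")) {
            ps.setObject(1, pk, Types.BIGINT); // not just setObject(1, pk)
            return ps.executeUpdate();
        }
    }
}
```

The alternative is to keep the untyped bind and cast on the SQL side, as in the `?::bigint` form mentioned above.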
06.06.12 15:05, Konstantin Mikhailov wrote:
I'm faced with a problem running postgres 9.1.3 which seems to
nobody else see before. Tried to search and only one relevant
post fond (about millions of files in pgsql_tmp).
Sympthoms:
Some
ql server may replan such
cases each time.
Best regards, Vitalii Tymchyshyn
01.06.12 02:34, Trevor Campbell wrote:
Thanks Craig, that certainly leads down the right path.
The following is all done in pgAdmin3:
Using an actual value we I get the plan I expect
explain analyze sele
Check for next messages in your log:
LOG: checkpoints are occurring too frequently (ZZZ seconds apart)
HINT: Consider increasing the configuration parameter "checkpoint_segments".
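If those messages do appear, the parameter from the hint can be raised in postgresql.conf (pre-9.5 naming, matching the hint; the values below are only a common starting point, not a recommendation):

```
checkpoint_segments = 32            # the default of 3 is tiny for write-heavy loads
checkpoint_completion_target = 0.9  # spread checkpoint I/O over more time
```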
Best regards, Vitalii Tymchyshyn
22.03.12 09:27, Gnanakumar wrote:
Hi,
We're runn
27.01.12 11:06, Marti Raudsepp wrote:
On Fri, Jan 27, 2012 at 06:31, sridhar bamandlapally
wrote:
--
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
,unnest($1))
union all
(select 2,unnest($2))
) as r (arr, elements)
group by 1
having min(arr)=max(arr)
))
$$ language sql strict immutable;
Best regards, Vitalii Tymchyshyn
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to
o "old set". Any
operation on a currently converting basket will block until the conversion is
done.
P.S. Sorry for a lot of possibly dumb thoughts, I don't know why I've
got such a thought stream on this topic :)
Best regards, Vitalii Tymchyshyn.
this case coupled with
rather ugly queries (see above). Native support would be nice.
Best regards, Vitalii Tymchyshyn.
What I would like to have is to force some relations to be in memory by
giving them a fixed part of shared buffers and to tell postgresql they are
in memory (lowering page costs) to get fixed optimal plans.
Best regards, Vitalii Tymchyshyn.
19.09.11 14:57, Cédric Villemain wrote:
2011/9/19 Vi
tables/indexes be in memory, no matter what
other load the DB has.
Complementary features are:
1) Relation preloading at startup - ensure these relations are in memory.
2) Per buffer pool (or relation) page costs - tell it that these
indexes/tables ARE in memory.
Best regards, Vitalii Tymchyshyn
umn with range criteria.
You may still try to change the order of columns in your index if this gives
the best selectivity on the first column.
Another option is multiple single-column indexes - postgres may merge
such indexes at runtime (I don't remember since which version this
feature i
Best regards, Vitalii Tymchyshyn
um in background.
Yes, we get a dual index search, but the new zone will be hot, so this won't
make it twice as costly.
Best regards, Vitalii Tymchyshyn
then create indexes".
On my database it takes twice the time for int8 than for int4 to insert
data.
Also it takes ~twice the time (2 hours) to add 200K rows to 200M
rows as it does to build an index over 200M rows (1 hour).
Best regards, Vitalii Tymchyshyn
it's
page (and for an index random writes can be vital). Unfortunately, such a
change requires a partition reformat and I can't afford it for now.
Best regards, Vitalii Tymchyshyn
.
You can check this by dropping the PK and creating it as a regular
non-unique index.
Best regards, Vitalii Tymchyshyn
in transaction and don't have to worry about
deadlocks.
Best regards, Vitalii Tymchyshyn
s that fools the prefetch
logic. "dd if=table_file of=/dev/null bs=8m" helps me a lot. You can see
if it helps if CPU time goes up.
Best regards, Vitalii Tymchyshyn
in Oracle may be "hot" in caches?
Also, a sequential scan is not too bad a thing. It may be cheap enough to
read millions of records if they are not too wide. Please show "select
pg_size_pretty(pg_relation_size('openactivity'));" Have you tried to
explain analyze the second
Best regards, Vitalii Tymchyshyn
ave the support :(
It has AIO, but does not have the call needed to enable this setting.
Best regards, Vitalii Tymchyshyn
populate bufferpools with fast sequential
read, but this can be easily emulated with a statement.
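For example, a plain sequential scan is the usual statement-level emulation (table name illustrative):

```sql
SELECT count(*) FROM big_table;  -- crude cache warm-up via a full sequential read
```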
Best regards, Vitalii Tymchyshyn
ked that you are really doing fetch and processing in
parallel? Dunno about C#, but under Java you have to make specific
settings (e.g. setFetchSize) or the driver will fetch all the data when the
query runs. Check the time needed to fetch the first row from the query.
Best regards, Vitalii Tymchyshyn
25.03.11 16:12, Tom Lane wrote:
Vitalii Tymchyshyn writes:
Why so? I simply change cost estimation functions. This won't change
the number of paths.
If you have multiple figures of merit, that means you have to keep more
paths, with a consequent slowdown when it comes to choosing which paths are
to be considered for any non-trivial query.
Why so? I simply change cost estimation functions. This won't change
the number of paths.
Best regards, Vitalii Tymchyshyn.
853083
loops=1)
Total runtime: 56687.660 ms
(7 rows)
But is there any option to tune it further? And one more thing - the
output rows vary from 6 to 7.
You need an index on source_id to prevent a seq scan, like the following:
CREATE INDEX idx_clause2_source_id
ON clause2 (source_id);
Best regards, Vitalii Tymchyshyn
Slow query operates with data mostly on disk, while the fast one
with data in memory.
Best regards, Vitalii Tymchyshyn
23.03.11 12:19, Adarsh Sharma wrote:
Vitalii Tymchyshyn wrote:
23.03.11 12:10, Adarsh Sharma wrote:
I just want to retrieve that id 's from page_content which do not
have any entry in clause2 table.
Then
select distinct(p.crawled_page_id) from page_content p
where NOT E
;
is the correct query.
Best regards, Vitalii Tymchyshyn.
Is it
select distinct(b) from t1 where b > (select min(d) from t2) or b <
(select max(d) from t2)
?
Can you explain in words, not SQL, what you expect to retrieve?
Best regards, Vitalii Tymchyshyn
more than one source_id. This is because the DB will be able to
find a clause with source_id different from crawled_page_id. You need to
use "not exists" or "not in".
Best regards, Vitalii Tymchyshyn.
dding partial
indexes for different attributes. Note that in this case parameterized
statements may prevent index usage, so check also with the attribute id inlined.
Best regards, Vitalii Tymchyshyn
arge query will look ugly. And I am not sure if
the optimizer is smart enough not to fetch 25*N rows.
Best regards, Vitalii Tymchyshyn
val1::narrow > val2::wide as (val1::narrow + 1 > val2::narrow and
val1::wide > val2::wide)
Of course it should use the additional check only if this allows an index to
be used.
Surely, this is not an easy thing to implement, but similar
questions are raised quite often on this list.
Best regards, Vitalii Tymchyshyn
benefit
from it.
Best regards, Vitalii Tymchyshyn
d by since you may
lose significant information (like minimal values).
Best regards, Vitalii Tymchyshyn
not used tuples
3) Make vacuum truncate the file if it has unused tuples at the end.
The only (relatively) tricky thing I can see is synchronizing truncation
with a parallel ongoing scan.
Best regards, Vitalii Tymchyshyn
f thresholds)
4) Like most other settings, this threshold could be changed on up to a
per-query basis.
P.S. I would also like to have index analyze as part of any create index
process.
Best regards, Vitalii Tymchyshyn
quite good as for me.
Best regards, Vitalii Tymchyshyn
idual deduplications are
streaming and are sorted in one way - you can simply do a merge on top.
Best regards, Vitalii Tymchyshyn.
ere, is how many resources a backend that is
waiting for a lock can take. Is locking done at query start? Or may it go
into a wait after consuming much of its work_mem? In the second case, the limit
won't be a work_mem limit, but will still prevent much contention.
Best regards, Vitalii Tymchyshyn
eric value!
Have you considered splitting the customer record into two tables, one with
mostly read-only data and one with data that is updated often? Such a 1-1
relationship can make a huge difference to performance in your case. You
can even try to simulate the old schema by using an updatable view.
Best regard
12.11.10 12:56, Cédric Villemain wrote:
I supposed it was an answer to my mail but not sure... please keep
CC'ed people, it is easier to follow threads (at least for me)
OK
2010/11/12 Vitalii Tymchyshyn:
I'd say there are two Qs here:
1) Modify costs based on informat
's hard for the
optimizer to decide if it will be enough to scan only this percentage for a
given query.
Best regards, Vitalii Tymchyshyn
rmat?
The spec is clear and we could clearly write our own, but figured I
would ask. ;)
The JDBC driver has some COPY support, but I don't remember the details. You'd
better ask on the JDBC list.
Best regards, Vitalii Tymchyshyn
t bits to the whole table. The second scan
may be way faster.
Best regards, Vitalii Tymchyshyn
or the page.
IMHO such indexes would make materialized views/triggers/high-level
caches unneeded in most cases.
Best regards, Vitalii Tymchyshyn
ve to count all index tuples and recheck the ones that are
uncertain. Does it work this way? This can help a lot for wide tuples
in the table, but with a narrow index and mostly read-only data.
Best regards, Vitalii Tymchyshyn
12.10.10 11:14, Craig Ringer wrote:
On 10/12/2010 03:56 PM, Vitalii Tymchyshyn wrote:
BTW: There is a lot of talk about MVCC, but is the next solution possible:
1) Create a page information map that for each page in the table will
tell you how many rows are within and if any write (either
after freezing all the tuples in the page (am
I using terminology correctly?).
In this case all read-only (archive) data will have this bit off and
index/count(*) will be really fast.
Am I missing something?
Best regards, Vitalii Tymchyshyn.
itself.
Best regards, Vitalii Tymchyshyn
itself has much overhead) and limit
max_active_queries to get maximum performance, and won't use external
pooling. Of course this won't help if the numbers are really high, but it
could cover the most common cases.
Best regards, Vitalii Tymchyshyn
cute
time to check if this is planning time.
Best regards, Vitalii Tymchyshyn
)
) a
ORDER
BY rank LIMIT small_number) b;
So, you take small_number records from each table, then select the small_number
best records from the resulting set, then do the calculation.
Best regards, Vitalii Tymchyshyn