to shove in per day? If you need to insert it,
and index it, and run queries, and deal with maintenance of the older
partitions, then you will need a lot of spare capacity, relative to
just inserting, to do all of those things. Do you have windows where
there is less insert activity in whic
On Wed, Dec 21, 2011 at 6:30 PM, Benjamin Johnson
wrote:
> Jeff,
>
> Sorry for the delayed response. Please see (some) answers inline.
>
> On 12/1/2011 9:06 AM, Jeff Janes wrote:
>> On Wed, Nov 30, 2011 at 7:27 AM, Benjamin Johnson
>> Why shove it in as fast as yo
ur EXPLAIN ANALYZE with BUFFERS as
well). But before trying to fix this by tweaking settings, will the
real case always be like your test case? If the data stops being all
in memory, either because the problem size increases or because you
have to compete for buffer space with other
> I'd prefer to do something
> more robust and based on config tweaks that leverage existing resource
> management in PG.
Why not just always throttle them? If you slam the data in as fast as
possible during brief bursts, you are probably just setting yourself
up for this type of issue. (T
ntrol over that.
>
> Hi Andy: As of now, there are no triggers in the table.
What about indexes?
Cheers,
Jeff

fix it. That can't be done with just the information
you provide.
Cheers,
Jeff
On Mon, Jan 30, 2012 at 9:46 AM, Saurabh wrote:
> Thank you for the information.
>
> Schema of table is:
>
> ID bigint
> company_name text
> data_set text
> time timestamp
> Date date
>
> Length of company_name is
I'm sure this behavior
depends greatly on your IO subsystem.
Cheers,
Jeff
(a bit over an hour).
> Total number of records: approx 227 million, comprising 16GB of storage.
>
> Why the huge discrepancy?
Maintaining indices when rows are inserted in a random order generates
a huge amount of scattered I/O.
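One common way around that (a sketch only, not from this thread; table, file,
and index names are hypothetical) is to drop the index, bulk-load, and rebuild,
so the index is written in one sequential pass instead of page by page:

-- hypothetical names; adjust to your schema
DROP INDEX IF EXISTS t_key_idx;
COPY t FROM '/tmp/batch.csv' WITH (FORMAT csv);  -- load without index maintenance
CREATE INDEX t_key_idx ON t (key);               -- one sequential index build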
Cheers,
Jeff
o give
you results which are meaningful to the actual production situation.
Cheers,
Jeff
.
Maybe with the timing = off feature, it might make sense to just
preemptively analyze everything.
Cheers,
Jeff
But how long should each setting be tested for? If a different
setting causes certain indexes to start being used, then performance
would go down until those indexes get cached and then increase from
there. But how long is long enough to allow this to happen?
Thanks,
Jeff
ely, when we implemented this simple delete, it executed slower
> than molasses, taking about 9 hours to do its thing.
Is this 9 hours run time for deleting one day worth of data, or for
deleting the entire accumulation of cruft that filled up the hard
drive in the first place (which would be 170
re buffers on
the free-list. That way a sequential scan would populate
shared_buffers after a restart. But it wouldn't help you get the
indexes into cache.
Cheers,
Jeff
On Sun, Feb 26, 2012 at 2:56 AM, Stefan Keller wrote:
> Hi Jeff and Wales,
>
> 2012/2/26 Jeff Janes wrote:
>>> The problem is that the initial queries are too slow - and there is no
>>> second chance. I do have to trash the buffer every night. There is
>>>
isk, the
now-clean data generally remains in cache (although I've seen nfs
implementations where that was not the case).
It is hard to figure out what problem you are facing. Is your data
not getting loaded into cache, or is it not staying there?
Cheers,
Jeff
e vacuum will just scan the tables and indices, I'm hoping doing
> nothing since the vacuum full will have cleaned everything already,
> but loading everything both in the OS cache and into shared_buffers.
Doesn't it use a ring buffer strategy, so it would load to OS, but
pro
uld be the ones just copied into PG, but the kernel is not
aware of that. So the whole thing is rather sub-optimal.
Cheers,
Jeff
On Wed, Feb 29, 2012 at 7:28 AM, Stefan Keller wrote:
> 2012/2/29 Stefan Keller :
>> 2012/2/29 Jeff Janes :
>>>> It's quite possible the vacuum full is thrashing your disk cache due
>>>> to maintenance_work_mem. You can overcome this issue with the tar
>
synchronous_commit=off to simulate a BBU I
> have no trouble hitting 11k tps on a single SATA disk.
fsync=off might be a better way to simulate a BBU.
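That is, in postgresql.conf (a sketch for a disposable benchmark instance
only; never use fsync=off where you care about the data):

# throwaway benchmark instance only -- fsync=off can corrupt data on a crash
fsync = off                # makes every flush appear instant, as a BBU would
#synchronous_commit = off  # the weaker variant: commits merely become async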
Cheers,
Jeff
selects, with the same where clause. If
doing that makes it much faster, TCP must not have been your
bottleneck.
Cheers,
Jeff
map scan may
have done the hard work of reading all the needed buffers into cache,
and the index scan then got to enjoy that cache.
Cheers,
Jeff
On Fri, Apr 6, 2012 at 3:09 PM, Kim Hansen wrote:
> Hi all
>
> On Fri, Apr 6, 2012 at 19:11, Jeff Janes wrote:
>> On Wed, Apr 4, 2012 at 6:47 AM, Kim Hansen wrote:
>>> Hi All
>>>
>>> I have a query where the planner makes a wrong cost estimate, it look
16 or -j 32?
Also, -t 2000 is mighty low.
Cheers,
Jeff
nces might technically work, but would render the database
virtually unmanageable.
Cheers,
Jeff
many total relations do you have? I don't know if there is a limit to
> the number of schemas, but I suspect when you went from one schema to 20,000
> schemas, you also went from N relations to 2*N relations.
Yes, that might be important to know--whether the total number of
relati
On Thu, May 24, 2012 at 8:54 PM, Bruce Momjian wrote:
> On Thu, May 24, 2012 at 08:20:34PM -0700, Jeff Janes wrote:
>> pg_dump itself seems to have a lot of quadratic portions (plus another
>> one on the server which it hits pretty heavily), and it's hard to know
>> where to
perl -le 'print "create schema foo$_; create table foo$_.foo (k integer, v integer);"
    foreach $ARGV[0]..$ARGV[0]+' $f | psql -d foo > /dev/null ;
time pg_dump foo -Fc -n foo1 | wc -c;
done >& dump_one_schema_timing
To show the overall dump speed problem, drop the "-n foo1", a
On Fri, May 25, 2012 at 9:56 AM, Tom Lane wrote:
> Jeff Janes writes:
>
>> For dumping entire databases, it looks like the biggest problem is
>> going to be LockReassignCurrentOwner in the server. And that doesn't
>> seem to be easy to fix, as any change to it to
On Fri, May 25, 2012 at 1:02 PM, Tom Lane wrote:
> Jeff Janes writes:
>> For dumping entire databases, it looks like the biggest problem is
>> going to be LockReassignCurrentOwner in the server. And that doesn't
>> seem to be easy to fix, as any change to it to im
eign keys, check constraints, etc.)?
Same as above. If they try to do conflicting things, they don't
continue operating concurrently.
Cheers,
Jeff
CurrentOwner problem
in the server when using pg_dump with lots of objects. Using a
preliminary version for this, in conjunction with -Fc, reduced the
dump time to 3.5 minutes.
Cheers,
Jeff
make sure you have a safe backup
first!):
http://www.postgresql.org/docs/9.1/static/app-pgresetxlog.html
And try setting the current transaction ID to just before the delete
ran. Then you may be able to use pg_dump or otherwise export the deleted
rows.
Regards,
Jeff Davis
> tables. With 100k tables LOCK statements took 13 minutes in total, now
> it only takes 3 seconds. Comments?
Could you rebase this? I tried doing it myself, but must have messed
it up because it got slower rather than faster.
Thanks,
Jeff
On Sun, Jun 10, 2012 at 4:47 PM, Jeff Janes wrote:
> On Wed, May 30, 2012 at 2:06 AM, Tatsuo Ishii wrote:
>>> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock
>>> management in the server. What I fixed so far on the pg_dump side
>>> sh
important improvements in GiST build times for cases
where the index doesn't fit in memory. Mark, can you please try your
experiments on the 9.2beta and tell us whether that helps you?
Regards,
Jeff Davis
On Tue, Jun 12, 2012 at 1:54 AM, Tatsuo Ishii wrote:
>> On Sun, Jun 10, 2012 at 4:47 PM, Jeff Janes wrote:
>>> On Wed, May 30, 2012 at 2:06 AM, Tatsuo Ishii wrote:
>>>>> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock
>>>>
> know if a particular query is coming from the cache?
Using explain (analyze, buffers) will show you if it is coming from
the shared_buffers cache.
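For example (table and query hypothetical; the output line shown is
illustrative of the format):

EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t WHERE id = 42;
--   Buffers: shared hit=5 read=2
-- "hit" pages came from shared_buffers; "read" pages came from the
-- kernel cache or from disk.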
It is harder to see if it is coming from the file system cache. If
the server is mostly idle other than your stuff, you can run vmstat
and see ho
> You're aware that you can TRUNCATE many tables in one run, right?
>
> TRUNCATE TABLE a, b, c, d, e, f, g;
This still calls DropRelFileNodeAllBuffers once for each table (and
each index), even if the table is empty.
With large shared_buffers, this can be relatively slow.
Cheers,
cause 10 files in a single folder is
> too many for the OS?
I doubt that that is a problem on any reasonably modern Linux.
Cheers,
Jeff
On Sun, Jul 8, 2012 at 11:49 PM, Sylvain CAILLET wrote:
> Hi,
>
> Thank you all for your help.
>
> @Jeff : my daemon creates these tables at start time so it doesn't do
> anything else at the same time. The CPU is loaded between 20% and 25%.
How does it decide which t
ion, too.
>
> Sounds like this hack may become unnecessary in 9.2 though.
Index only scans in 9.2 are nice, but they don't fundamentally change
this type of thing.
Cheers,
Jeff
> don't have data to back that
> statement up.
Did you have less RAM back when you were running PG 8.3?
> Any suggestions. I'm willing and able to profile, or whatever.
How much RAM do you have? What are your settings for /proc/sys/vm/dirty_* ?
Cheers,
Jeff
tegorize just what those situations are.
Cheers,
Jeff
insert a row, or do maintenance on an index) it
often doesn't even attempt to cost that at all as there is no choice.
So it is not just a matter of units.
Cheers,
Jeff
few
> records, do a few things, and then clear those handful of records.
How many rounds of truncation does one rake do? I.e. how many
truncations are occurring over the course of that 1 minute or 15
minutes?
Cheers,
Jeff
look up rather
than a full scan.
Maybe we could just turn off the pending ops table altogether when
fsync=off, but since fsync is PGC_SIGHUP it is not clear how you could
safely turn it back on.
Cheers,
Jeff
from partitioning that can't
be obtained other ways. By using partitioning, you can greatly
increase the insert performance by imposing a small cost on each
query. The total cost is at least as great, but you have re-arranged
how the cost is amortized into a more acceptable shape.
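In the 9.x line that means inheritance-based partitioning; a minimal sketch
(names and ranges hypothetical):

CREATE TABLE events (ts timestamptz, payload text);
CREATE TABLE events_2012_10
    (CHECK (ts >= '2012-10-01' AND ts < '2012-11-01'))
    INHERITS (events);
-- later, removing the partition is instantaneous compared to a DELETE:
DROP TABLE events_2012_10;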
ept in logical order, but they
do tend to be biased in that direction in most cases. I've found that
even if you are only inserting one row for every 4 or 5 leaf pages,
you still get substantial improvement by doing so in sorted order.
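As a sketch (table and column names hypothetical), assuming each batch
arrives in a staging table:

-- sort the incoming batch by the indexed key before inserting, so
-- index leaf pages are visited in order rather than at random
INSERT INTO target
SELECT * FROM staging ORDER BY indexed_key;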
Cheers,
Jeff
> That worked like a charm! Many thanks. But how come the queries are also
> fast
> after a restore without the cluster?
Probably fewer buffers needed to be touched.
Running "explain (analyze, buffers)" would give information on how
many buffers were touched.
Cheers,
Jeff
er caching than the other.
Did you try running the queries in alternating order, to average out
caching effects?
Could you run the "explain (analyze, buffers)" on those to get a
better picture of the buffer effects?
Cheers,
Jeff
query forgets the plan and goes to sequential scan. Is there
> any way around? Or is this the best I can have?
What happens if you set "enable_seqscan=off" and run the query with
the very large list? (This is an experiment, not a recommendation for
production use)
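Concretely, in the session running the test:

SET enable_seqscan = off;   -- affects only the current session
-- ... run the problem query under EXPLAIN ANALYZE here ...
RESET enable_seqscan;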
Cheers,
J
On Tue, Aug 7, 2012 at 9:06 AM, Ioannis Anagnostopoulos
wrote:
> On 07/08/2012 17:00, Jeff Janes wrote:
>>
>> What happens if you set "enable_seqscan=off" and run the query with
>> the very large list? (This is an experiment, not a recommendation for
>
p://archives.postgresql.org/pgsql-general/2005-01/msg01347.php
>>> From: Jeff Davis
>>>
>>> It got me curious enough that I tested it, and apparently dropping a
>>> trigger locks the table. Any actions on that table must wait until the
>>> transaction that
On Tue, Aug 7, 2012 at 2:39 PM, Craig James wrote:
> On Tue, Aug 7, 2012 at 1:45 PM, Jeff Janes wrote:
>> On Tue, Aug 7, 2012 at 1:15 PM, Merlin Moncure wrote:
>>>
>>> IF current_user = 'bulk_writer' THEN
>>> return new;
>>> END IF;
>
licit conditions for there being
at least 1 school and 1 pharmacy within distance. There can't be >1
unless there is >=1, but the join possibilities for >=1 (i.e. "where
exists" rather than "where (select count(*)...)>1" ) are much more
attractive than the
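The rewrite in question, sketched (schema hypothetical; within_distance
stands in for whatever distance predicate the real query uses):

-- instead of a count-based predicate:
--   WHERE (SELECT count(*) FROM school s
--          WHERE within_distance(s.geom, h.geom)) >= 1
-- an EXISTS can stop at the first qualifying row:
WHERE EXISTS (SELECT 1 FROM school s
              WHERE within_distance(s.geom, h.geom))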
On Thu, Aug 9, 2012 at 4:00 AM, Stefan Keller wrote:
> Hi
>
> 2012/8/8 Jeff Janes :
>> On Tue, Aug 7, 2012 at 5:07 PM, Stefan Keller wrote:
>>> Hi Craig
>>>
>>> Clever proposal!
>>> I slightly tried to adapt it to the hstore involved.
>
red.
If you are in a position to retest using 9.2Beta3
(http://www.postgresql.org/about/news/1405/), I'd be interested to
know if it does make truncations comparable in speed to unqualified
deletes.
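One way to time the two against a scratch table in psql (TRUNCATE is
transactional in PostgreSQL, so both can be rolled back between runs;
the table name is a placeholder):

\timing on
BEGIN; TRUNCATE scratch;    ROLLBACK;   -- time the truncation
BEGIN; DELETE FROM scratch; ROLLBACK;   -- time the unqualified delete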
Thanks,
Jeff
red statement:
pgbench -M prepared -f dummy2.sql -T300
tps = 30289.908144 (excluding connections establishing)
or 33 us/call.
So unless your server is a lot slower than mine, I think your client
may be the bottleneck. What is your client program? What does "top"
show as the relative CPU usage of
tainly am not volunteering to take on that task.
A compromise might be to have one stats file per database. That way
any given backend only needs to read in the database file it cares
about, and the stats collector only needs to write out the one
database asked of it. This change could be mos
asonable for your application.
Regards,
Jeff Davis
ardless of whether it is actually in memory, or on disk, or
both. If the data on disk is stale, the kernel guarantees requests to
read it will be served from memory rather than from disk.
Cheers,
Jeff
aving sparsely populated, but not
completely empty, index pages, which your insertions will then never
reuse because you never again insert values in that key range?
Cheers,
Jeff
t see a fundamental reason it can't be allowed, maybe
implementing that should be on the to-do list.
Cheers,
Jeff
e CLUSTER is done, the index might even be useful enough to
keep around for use with queries, or even replace the original index
altogether.
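In SQL terms (names hypothetical):

CREATE INDEX t_sortkey_idx ON t (sortkey);
CLUSTER t USING t_sortkey_idx;
-- the index survives the CLUSTER and can serve ordinary queries afterwards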
Cheers,
Jeff
ng
with those rows, so that they show up in the new table. But it could
just treat them all as equal to one another and have them be in
whatever order they happen to fall in.
Cheers,
Jeff
tion through this test server and see what it logs. That
requires that you have
Easier would be to turn on wal_debug and watch the server log as the
WAL logs are generated, instead of recovered, but you probably don't
want to do that on production. So you would need a workload generator
that als
On Thu, Aug 16, 2012 at 12:57 PM, Strahinja Kustudić
wrote:
>
> @Jeff I'm not sure if I understand what you mean? I know that we never reuse
> key ranges. Could you be more clear, or give an example please.
If an index leaf page is completely empty because every entry on it
was de
On Mon, Aug 20, 2012 at 1:51 PM, delongboy wrote:
>
> Jeff Janes wrote
>>
>> Maybe there is an easier way, but one thing would be to compile a test
>> server (of the same version as the production) with WAL_DEBUG defined
>> in src/include/pg_config_manual.h,
hing.
> this seems like a situation the planner could easily detect
> and compensate for.
Yes, it is just a Small Matter Of Programming :)
And one I've wanted for a while.
If only someone else would offer to do it for me
Cheers,
Jeff
file system cache implies that the postgres cache is also
> deleted, doesn't it?
No, the postgres-managed cache is not cleared by doing that. In order
to get rid of both layers of caching, you should restart the postgres
server and then do the drop_caches.
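On Linux that sequence looks roughly like this, run as root (the data
directory path is a placeholder):

pg_ctl -D /path/to/data stop
sync                                # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes
pg_ctl -D /path/to/data start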
Cheers,
Jeff
u are running
is not the one you think you are running.
That's probably the first thing to sort out--repeat the experiment
with the correct version.
Also, rather than posting the entire config file, you can get just the
non-default settings:
https://wiki.postgresql.org/wiki/Server_Configuration
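The query given there is essentially:

SELECT name, current_setting(name), source
  FROM pg_settings
 WHERE source NOT IN ('default', 'override');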
kwards compatible.
Maybe drop duplicates where the payload was the empty string, but keep
them otherwise?
Cheers,
Jeff
On Fri, Aug 31, 2012 at 1:22 PM, Tom Lane wrote:
> Jeff Janes writes:
>> I wonder if we should be trying to drop duplicates at all. I think that
>> doing that made a lot more sense before payloads existed.
>
> Perhaps, but we have a lot of history to be backwards-compatibl
window?
Putting Tatsuo-san's change into a future pg_dump might be more
conservative than back-porting the server's Lock Table change to the
server version they are trying to get rid of.
Cheers,
Jeff
g
And then make sure that that is the one you think it is.
You can also look in the file "PG_VERSION" in the data directory.
In any case, the behavior you report is exactly what would be
expected if autovacuum is not running. The config file you posted
shows autovac is turned on, but I suspect
s does seem to conflict with what you
report from pg_stats, but I'm not familiar with that view, and you
haven't told us what version of pgsql you are using.
> Below is the scenario where-in the same query's plan picking up the
> composite Index.
It is only using the fir
does anyone know of good links that explain
it for those OS?
Cheers,
Jeff
On Wed, Sep 12, 2012 at 11:40 PM, Ants Aasma wrote:
> On Wed, Sep 12, 2012 at 7:00 PM, Jeff Janes wrote:
>> Regarding the wiki page on reporting slow queries:
>> We currently recommend EXPLAIN ANALYZE over just EXPLAIN. Should we
>> recommend EXPLAIN (ANALYZE, BUFFERS) in
elects is not estimated, except to the extent they are folded into
something else, like the page visiting costs.
Cheers,
Jeff
That might result
in the geom being inserted in a more cache-friendly order.
> Any ideas? Partitioning?
Do most selects against this table specify user_name as well as a
geometry query? If so, that might be a good partitioning key.
Otherwise, I don't see what you could partition on in
ideas how to prevent this situation?
The bulk update could take an Exclusive (not Access Exclusive) lock.
Or the delete could perhaps be arranged to delete the records in ctid
order (although that might still deadlock). Or you could just repeat
the failed transaction.
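The first option, sketched (table and column hypothetical):

BEGIN;
LOCK TABLE t IN EXCLUSIVE MODE;  -- blocks other writers; plain reads still run
UPDATE t SET processed = true;   -- the bulk update
COMMIT;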
Cheers,
Jeff
On Tue, Oct 9, 2012 at 1:56 PM, Shaun Thomas wrote:
> On 10/09/2012 03:12 PM, Craig James wrote:
>
>>~3200 TPS max with hyperthreading
>>~9000 TPS max without hyperthreading
>
>
> That's really odd. We got almost the opposite effect on our X5645's.
>
> Also, there's no way your RAID is sus
2
> Swap:21000 51 20949
>
> So it did a little swapping, but only minor,
The kernel has, over the entire time the server has been up, found 51
MB of process memory to swap. That doesn't really mean anything. Do
you see active swapping going on, like with vmstat
ence anyway? If so,
in what circumstances?
In my hands, queries for which effective_cache_size might come into
play (for deciding between seq scan and index scan) are instead
planned as bitmap scans.
Cheers,
Jeff
ndex. But wouldn't that remove most of the
benefits of partitioning? You could no longer add or remove
partitions instantaneously, for example.
Cheers,
Jeff
n_date, or on (creation_date, c) might. How
many records are there per day? If you add a count(*) to your select,
what would typical values be?
Cheers,
Jeff
On Sun, Oct 7, 2012 at 7:43 AM, Ants Aasma wrote:
> On Wed, Sep 26, 2012 at 11:11 PM, Jeff Janes wrote:
>> On Wed, Sep 12, 2012 at 11:40 PM, Ants Aasma wrote:
>>> I don't have any links for OS level monitoring, but with version 9.2
>>> track_io_timing would do th
ct to benefit
from it concurrently) are worse than the consequences of
underestimating it--assuming you have the types of queries for which
it makes much of a difference.
Cheers,
Jeff
but rather
how much of the table is in cache.
>
> @Jeff I have 4 drives in RAID 10. The database has around 80GB of indices.
That seems like a pretty small disk set for a server of this size.
Do you know what percentage of that 80GB of indices gets dirtied
during any given round of batch loading
ant type
you expect to see.
The default page cost settings already assume that random fetches are
far more likely to be cache hits than sequential fetches are. If that
is not true, then the default random page cost is way too low,
regardless of the number of spindles or the concurrency.
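For reference, the knobs in question, with their defaults (postgresql.conf):

seq_page_cost = 1.0
random_page_cost = 4.0   # raise this if random fetches really do go to disk;
                         # the default already assumes many of them are cached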
to the
> tablespaces themselves?
Been done already:
http://www.postgresql.org/docs/9.0/static/sql-altertablespace.html
Cheers,
Jeff
On Fri, Oct 19, 2012 at 8:07 AM, Shaun Thomas wrote:
> On 10/19/2012 10:05 AM, Jeff Janes wrote:
>
>> http://www.postgresql.org/docs/9.0/static/sql-altertablespace.html
>
>
> Yep. I realized my error was not checking the ALTER page after going through
> CREATE. I swore I
> know the right values before SELECT to get good results ;)
Not sure what you mean here. If you change the settings just for the
query, it should be safe because when the query is already fast it is
not using the seq scan, so discouraging it from using one even further
is not going to do an
> 'till you find the next hole instead of
> making it right at the beginning of construction process
We are not at the beginning of the construction process. You are
already living in the house.
Version 9.3 is currently under construction. Maybe this will be a fix
for this problem in t
On Fri, Oct 26, 2012 at 8:30 AM, Böckler Andreas wrote:
>
> Am 25.10.2012 um 18:20 schrieb Jeff Janes:
>
>> Can you load the data into 9.2 and see if it does better? (I'm not
>> optimistic that it will be.)
>
> This takes months, the customer has to pay us for th
enable_seqscan) might not be
the right knobs, but they are the knobs that currently exist.
Cheers,
Jeff
:
> it took the system more than 80 minutes to replay 48 WAL files
> and connect to the primary.
>
> Can anybody think of an explanation why it takes that long?
Could the slow log files be replaying into randomly scattered pages
which are not yet in RAM?
Do you have sar or vms
n to Postgres via Unix socket.
Isn't pgbouncer single-threaded?
If you are hitting it with tiny queries as fast as possible from 20
connections, I would think that it would become the bottleneck.
Cheers,
Jeff
On Tue, Oct 30, 2012 at 3:16 PM, Scott Marlowe wrote:
> On Tue, Oct 30, 2012 at 4:11 PM, Greg Williamson
> wrote:
>> Jeff / Catalin --
>>
>> Jeff Janes wrote:
>>
>>>On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob
>>>wrote:
>>>
>>