d manual
> vacuums here; autovac treats toast tables as just another table, with their
> own stats and their own freeze needs. If you're generating a lot of toast
> records, that might make a difference.
>
I do not anticipate TOAST entering the picture. No single column or record
exceeds 8KB or even approaches it. We have a few databases that (ab)use pg_toast
and I want to avoid those complications.
-Greg
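For anyone wanting to verify whether TOAST is actually in play, a quick catalog query can show which tables have TOAST data and how large it is. This is a sketch, not from the thread; it only needs read access to pg_class:

```sql
-- List the ten largest TOAST tables and their owning tables (sketch)
SELECT c.relname AS main_table,
       t.relname AS toast_table,
       pg_size_pretty(pg_total_relation_size(t.oid)) AS toast_size
FROM pg_class c
JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.reltoastrelid <> 0
ORDER BY pg_total_relation_size(t.oid) DESC
LIMIT 10;
```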
On Wed, Sep 28, 2016 at 11:27 AM, Stephen Frost wrote:
> Greg,
>
> * Greg Spiegelberg (gspiegelb...@gmail.com) wrote:
> > Bigger buckets mean a wider possibility of response times. Some buckets
> > may contain 140k records and some 100X more.
>
> Have you analyzed the
any database servers. 40 is okay,
100 isn't terrible but if it's thousands then operations might lynch me.
-Greg
On Tue, Sep 27, 2016 at 10:15 AM, Terry Schmitt
wrote:
>
>
> On Sun, Sep 25, 2016 at 7:50 PM, Greg Spiegelberg
> wrote:
>
>> Hey all,
>>
>> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a
>> time has said not to have mill
On Tue, Sep 27, 2016 at 9:42 AM, Mike Sofen wrote:
> *From:* Mike Sofen *Sent:* Tuesday, September 27, 2016 8:10 AM
>
> *From:* Greg Spiegelberg *Sent:* Monday, September 26, 2016 7:25 AM
> I've gotten more responses than anticipated and have answered some
> questions and
On Tue, Sep 27, 2016 at 8:30 AM, Craig James wrote:
> On Sun, Sep 25, 2016 at 7:50 PM, Greg Spiegelberg
> wrote:
>
>> Hey all,
>>
>> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a
>> time has said not to have millions of tables.
On Sun, Sep 25, 2016 at 8:50 PM, Greg Spiegelberg
wrote:
> Hey all,
>
> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a time
> has said not to have millions of tables. I too have long believed it until
> recently.
>
> AWS d2.8xlarge instance wit
s in a table is not a problem for the query
> times you are referring to. So instead of millions of tables, unless I'm
> doing my math wrong, you probably only need thousands of tables.
>
>
>
> On Mon, Sep 26, 2016 at 5:43 AM, Stuart Bishop
> wrote:
>
>> On 26 S
ption to
reduce the 8M tables to ~4000 is an option; however, rather than the
anticipated 140k records/table, each table would then hold 140M to 500M
records. I'm concerned read access times will go out the window. It is on the
docket to test.
-Greg
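For reference, the ~4000-table variant could be sketched with hash partitioning. Declarative partitioning needs PostgreSQL 10+ (on the 9.x versions under discussion it would be inheritance plus a trigger), and all table and column names here are made up for illustration:

```sql
-- Hypothetical: one parent with ~4000 hash partitions instead of 8M tables
CREATE TABLE bucket (
    device_id bigint NOT NULL,
    captured  timestamptz NOT NULL,
    payload   bytea,
    PRIMARY KEY (device_id, captured)   -- PK must include the partition key
) PARTITION BY HASH (device_id);

CREATE TABLE bucket_p0 PARTITION OF bucket
    FOR VALUES WITH (MODULUS 4000, REMAINDER 0);
-- ...generate the remaining partitions for remainders 1..3999
```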
on-disk storage
options. :)
-Greg
On Mon, Sep 26, 2016 at 6:54 AM, Yves Dorfsman wrote:
> Something that is not talked about at all in this thread is caching. A
> bunch
> of memcache servers in front of the DB should be able to help with the 30ms
> constraint (doesn't have to be m
On Mon, Sep 26, 2016 at 3:43 AM, Stuart Bishop
wrote:
> On 26 September 2016 at 11:19, Greg Spiegelberg
> wrote:
>
>> I did look at PostgresXL and CitusDB. Both are admirable however neither
>> could support the need to read a random record consistently under 30ms.
>
Following list etiquette response inline ;)
On Mon, Sep 26, 2016 at 2:28 AM, Álvaro Hernández Tortosa
wrote:
>
>
> On 26/09/16 05:50, Greg Spiegelberg wrote:
>
>> Hey all,
>>
>> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a
>> tim
&c in a data lake you must accept the possibility of scanning the
entire lake. However, if all fish were in barrels where each barrel had a
particular kind of fish of specific length, size, color &c then the problem
is far simpler.
-Greg
On Sun, Sep 25, 2016 at 9:04 PM, julyanto SUTANDANG
wr
but
the schema and its intended use are complete. You'll have to trust me on
that one.
-Greg
On Sun, Sep 25, 2016 at 9:23 PM, Mike Sofen wrote:
> *From:* Greg Spiegelberg *Sent:* Sunday, September 25, 2016 7:50 PM
> … Over the weekend, I created 8M tables with 16M indexes on
CT or via table inheritance (I am abusing
it in my test case) are no-no's. A system or database crash could potentially
take hours or even days to recover. There are likely other issues ahead.
You may wonder, "why is Greg attempting such a thing?" I looked at
DynamoDB, BigTable, and Cassandra
's undocumented and nobody else has complained, it's
probably not too important as far as day to day pgbouncer use. :)
- --
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201507051040
http://biglumber.com/x/web?pk=2529DF6AB8F7940
the values in memory
for at the same time. Isn't that the whole point of the lossy
algorithm? But now that I think of it I don't understand why that
algorithm is needed at all.
--
greg
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
oach. Scan some random sample of blocks with a
counting bloom filter then do a second pass (possibly for the same
sample?) keeping counts only for values that the counting bloom filter
said hashed to the most common hash values. That might not be exactly
the most common values but should be at leas
s effectively the same as reading the entire table
-- and it still had pretty poor results. All the research I could find
went into how to analyze the whole table while using a reasonable
amount of scratch space and how to do it incrementally.
--
greg
's
causing users to get surprisingly far before they run into problems.
That may not be the best thing for users in the long run but that's a
problem that should be solved by better development tools to help
users identify scalability problems early.
--
greg
On 19 Sep 2014 19:40, "Josh Berkus" wrote:
>
> On 09/19/2014 10:15 AM, Merlin Moncure wrote:
> > On Wed, Sep 17, 2014 at 7:11 PM, Josh Berkus wrote:
> >> This is the core issue with abort-early plans; they depend on our
> >> statistics being extremely accurate, which we know they are not. And if
Two solutions come to mind. First possibility is table partitioning on the
column you're sorting. Second, depending on your application, is to use a
cursor. Cursors won't help with web applications; however, a stateful
application could benefit.
HTH
-Greg
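A scrollable-cursor version of that suggestion might look like this (table and column names are hypothetical; it's only useful where the session, and thus the transaction, can stay open between pages):

```sql
BEGIN;
DECLARE page_cur SCROLL CURSOR FOR
    SELECT * FROM orders ORDER BY created_at;
FETCH 50 FROM page_cur;           -- page 1: rows 1-50
MOVE ABSOLUTE 100 IN page_cur;    -- reposition at row 100
FETCH 50 FROM page_cur;           -- page 3: rows 101-150
COMMIT;
```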
On Wed, Aug 28, 2013 at 2:39
way as this is normally done with Postgres.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
On 6/20/13 4:32 PM, Josh Berkus wrote:
First, cc'ing Greg Smith to see if he can address this with the Fusion
folks so that they stop giving out a bad guide.
I'm working on a completely replacement of that guide, one that actually
gives out a full set of advice. Right now I'
r. Sample from a busy system with
moderately tuned BGW and checkpoint_timeout at 15 minutes:
-[ RECORD 1 ]---+-------------------
alloc_mbps      | 246.019686474412
checkpoint_mbps | 0.0621780475463596
clean_mbps      | 2.38631188442859
backend_mbps    | 0.777490109599045
write_mbps      | 3.2
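Rates like the sample above can be derived from the cumulative counters in pg_stat_bgwriter (column names as they existed in this era; multiply by the 8kB block size and divide by the time since stats_reset to get MB/s). A sketch of pulling the raw counters:

```sql
-- Raw buffer counters behind the *_mbps figures (sketch)
SELECT buffers_alloc,
       buffers_checkpoint,
       buffers_clean,
       buffers_backend,
       stats_reset
FROM pg_stat_bgwriter;
```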
Hey,
We have a search method that, depending on search params, will join 3-5
tables and craft the joins and WHERE section. The only problem is that this is
done in rather horrible Java code, so using pgTAP for tests is not feasible.
I want to move the database complexity back to the database, almost writing the
what you're seeing
there. It doesn't actually use any significant amount of memory on its own.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
.
-Does this happen on every test run? Is it at the same time?
-You can run "top -bc" to dump snapshots of what the system is doing
every second. With some work you can then figure out what was actually
happening during the two seconds around when the throughput dropped.
--
es-and-ncq/
With that loose end sorted, I'll be kicking off a brand new round of SSD
tests on a 24 core server here soon. All those will appear on my blog.
The 320 drive is returning as the bang for buck champ, along with a DC
S3700 and a Seagate 1TB Hybrid drive with NAND durable write
es to flash wear or bytes
written, either. Several of the SMART attributes are labeled
"Vendor-specific," but you'll need to guess what they track and read the
associated values using third-party software."
That's a serious problem for most business use of this s
better protocol for handling things.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
efore abandoning it altogether.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
et, and I haven't recommended anyone buy a 710 since the
announcement. However, "hit the street" is still an issue. No one has
been able to keep DC S3700 drives in stock very well yet. It took me
three tries through Newegg before my S3700 drive actually shipped.
--
Greg Smi
On 5/22/13 3:06 PM, Joshua D. Drake wrote:
Greg, can you elaborate on the SSD + Xlog issue? What type of burn
through are we talking about?
You're burning through flash cells at a multiple of the total WAL write
volume. The system I gave iostat snapshots from upthread (with the
Inte
them on traditional drives can be cost effective in a few ways. That
approach is lucky to hit 10K TPS though, so it can't compete against
what a PCI-E card like the FusionIO drives are capable of.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Ser
gh, so I'm still waiting for more
data before I feel comfortable saying exactly what the worst case looks
like.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
almost free it's so
fast? Sure. But presuming all systems will look like that is
optimistic, and it sets unreasonable expectations.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
s. My
test server's 100GB DC S3700 was $250. That's still not two orders of
magnitude faster though.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
thing that's happened since the last base backup happened.
That can easily result in a week of downtime if you're only shipping
backups once per month, for example.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Suppor
flushed to disk.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
t this type of work going on in the background.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
systems are relatively small.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
.ctid;
COMMIT;
HTH.
-Greg
On Fri, May 17, 2013 at 5:26 AM, Rob Emery wrote:
> Hi All,
>
> We've got 3 quite large tables that due to an unexpected surge in
> usage (!) have grown to about 10GB each, with 72, 32 and 31 million
> rows in. I've been tasked with cleani
> Thanks for your response. I tried doing what you suggested so that table now
> has a primary key of ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id ,
> data_id );' and I've added the INDEX of 'CREATE INDEX
> data_area_data_id_index ON data_area USING btree (data_id );' unfortunately
> it h
Joe --
>
> From: Joe Van Dyk
>To: Greg Williamson
>Cc: "pgsql-performance@postgresql.org"
>Sent: Friday, April 5, 2013 7:56 PM
>Subject: Re: [PERFORM] slow joins?
>
>
>On Fri, Apr 5, 2013 at 6:54
-> Index Scan using index_line_items_on_product_id on
>>line_items li (cost=0.00..835.70 rows=279 width=8) (actual time=0.002..0.004
>>rows=2 loops=70)
>> Index Cond: (product_id = products.id)
>> -> Index Only Scan using purchased_items_li
(following the interest from -hackers, I'm posting this here).
Hi folks,
I've always been fascinated with genetic algorithms. Having had a chance to
implement it once before, to solve real life issue - I knew they can be
brilliant at searching for right solutions in multi dimensional space.
On 13 Mar 2013, at 15:33, John Lister wrote:
> On 12/03/2013 21:41, Gregg Jaskiewicz wrote:
>>
>> Whilst on the hardware subject, someone mentioned throwing ssd into the mix.
>> I.e. combining spinning HDs with SSD, apparently some raid cards can use
>> small-ish (80GB+) SSDs as external cach
ash and leave it on. Purchasing and
management people tend to get annoyed if they discover the core count of
the server is half what they thought they were buying. The potential
downside of HT isn't so big that it's worth opening that can of worms,
unless you've run real application lev
safely and as fast as it should be, 2ndQuadrant
does offer a hardware benchmarking service to do that sort of thing:
http://www.2ndquadrant.com/en/hardware-benchmarking/ I think we're even
generating those reports in German now.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Bal
ake a look at http://www.postgresql.org/support/versioning/ and you'll
see 8.4 only has a little over a year before it won't get bug fixes
anymore. Also, your server would really appreciate the performance
gains added to 9.2. If that's a bit too leading edge for you, I don'
nge. Nowadays there really is no
reason to consider running pgbench on a system with a smaller scale than
that. I normally get a rough idea of things by running with scales 100,
250, 500, 1000, 2000.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training
x of queries but
most are simple. Typically a few thousand queries a second to the readonly
boxes, about the same to a beefier read / write master.
This is a slightly old pgbouncer at that ... used in a fairly basic mode.
Greg Williamson
hope this isn't redundant.
Partitioning might work if you can create clusters that are bigger than 1 hour
-- too many partitions doesn't help.
Greg Williamson
; line in /proc/meminfo as the test runs will give
you an idea if write cache filling is actually an issue here. If that
number just keeps going up and speeds keep on dropping, that's at least
one cause here. This could easily be both that and an autovacuum
related too thoug
delay needed to vary a bit based on
the number of the siblings was one of the challenges I kept butting into
then. Making the GUC settings even more complicated for this doesn't
seem a productive step forward for the average user.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com
Thanks for this description--we have index bloat problems on a massively active
(but small) database. This may help shed light on our problems.
Sorry for top-posting--challenged email reader.
Greg W.
>
> From: Jeff Janes
>To: Strahinja Kustudić
Midge --
Sorry for top-quoting -- challenged mail.
Perhaps a difference in the stats estimates -- default_statistics_target ?
Can you show us a diff between the postgres config files for each instance ?
Maybe something there ...
Greg Williamson
>
>
how can
I create a real-time replica of this data?" to protect databases, and
the standby server for that doesn't need to be an expensive system.
That said, there is no reason to set things up so that they only work
with that Intel RAID controller, given that it's not a very good
ermittently, you'll see one incredibly slow
query periodically while most are fast. All depends on exactly what
core is executing, where the memory it needs is at, and whether the
server wants to reclaim memory (and just what that means its own
complicated topic) as part of tha
rposes then rewind and fetch the right "page".
Highly recommended.
-Greg
rdering on will serve as the page number. Issue
with this solution is you may not have an accurate page count.
Luck.
-Greg
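That ordering-column approach is commonly called keyset (or "seek") pagination; a minimal sketch with hypothetical table and column names:

```sql
-- Fetch the next "page" after the last key the client saw
SELECT id, body
FROM messages
WHERE id > 41950          -- last id from the previous page
ORDER BY id
LIMIT 50;
```

Unlike OFFSET-based paging, this stays fast on deep pages, but as noted above you don't get an exact page count for free.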
their TPS numbers are useless without context on how big each
transaction is, and we don't know. I can take MemSQL seriously when
there's a press release describing how to replicate their benchmark
independently. Then it's useful to look at the absolute number.
--
Greg Smith
of writing that feature?" and "how can we
get them sponsored to focus on it?" I can't tell from your comments yet
what role(s) in that process VMWare wants to take on internally, and
which it's looking for help with. The job of convincing people it's a
useful feature isn't
was CPU bound, that
would be an awesome world to live in.
Anyway, guessing at causes here is premature speculation. When there's
some code for the test kit published, at that point discussing the
particulars of why it's not running well will get interesting.
--
Greg Smith 2ndQua
t toward
leading edge SSD only to discover their real working set fits in memory
just fine, so it doesn't even matter. Or that the bottleneck is
somewhere else entirely.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support ww
e type MySQL is usually faster
than PostgreSQL on (i.e. ones using simple operations and operators), or
they could be ones where PostgreSQL is usually faster than MySQL (i.e.
more complicated joins). All I can tell you for sure if that they used
a query mix that makes MemSQL look much fast
IOPS of writes that are done more efficiently in terms of
flash longevity (710). You can't get both at the same time. The 710
may ultimately throttle its speed back to meet lifetime specifications
as the drive fills, it's really hard to benchmark the differences
between the tw
On Fri, May 25, 2012 at 9:04 AM, Craig James wrote:
> On Fri, May 25, 2012 at 4:58 AM, Greg Spiegelberg
> wrote:
>
>> On Sun, May 13, 2012 at 10:01 AM, Craig James
>> wrote:
>>
>>>
>>> On Sun, May 13, 2012 at 1:12 AM, Віталій Тимчишин
>
eeze (maybe they run out of memory, or are
> encountering an O(N^2) operation and simply cease to complete).
>
Glad I found this thread.
Is this 1M relation mark for the whole database cluster or just for a
single database within the cluster?
Thanks,
-Greg
he
> first thing I'd do especially since the IO pattern is so dramatically
> different from tablespace IO access patterns.
Yep - moving pg_xlog to something optimized for small, constantly
written files is one of the biggest and easiest wins. Other than
fsync = off ;)
-
Seconds, minutes,
hours, days, weeks?
I'm thinking if you can prepare the data ahead of time as it changes via a
trigger or client-side code then your problem will go away pretty quickly.
-Greg
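A trigger-maintained summary along those lines might be sketched like so (table, column, and function names are all hypothetical):

```sql
CREATE FUNCTION bump_daily_total() RETURNS trigger AS $$
BEGIN
    -- Keep a precomputed per-day total current as rows arrive
    UPDATE daily_totals
       SET total = total + NEW.amount
     WHERE day = NEW.created_at::date;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_bump_total
    AFTER INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE bump_daily_total();
```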
performance, consider running your own more
application-like benchmarks instead.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
ctly, allocating
shared_buffers sanely, separating pg_xlog, etc.
- --
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201204251304
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-B
uld include another RAID 10 for the indexes
for the OLTP but we ran out of drives to create a RAID Group D and the
above configuration works well enough.
Before going with RAID 5, please review http://www.baarf.com/.
-Greg
On Thu, Feb 23, 2012 at 11:11 AM, Andy Colson wrote:
> On 2/23/2012 12:05 PM, Shaun Thomas wrote:
>
>> On 02/23/2012 11:56 AM, Greg Spiegelberg wrote:
>>
>> I know there are perils in using ctid but with the LOCK it should be
>>> safe. This transaction took pe
ils in using ctid but with the LOCK it should be safe.
This transaction took perhaps 30 minutes and removed 100k rows and once
the table was VACUUM'd afterward it freed up close to 20 GB on the file
system.
HTH
-Greg
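Reconstructed as a sketch (table name and predicate are hypothetical), the batched LOCK-plus-ctid delete pattern being described is roughly:

```sql
BEGIN;
-- Block concurrent writers so the ctids can't move under us
LOCK TABLE big_table IN SHARE ROW EXCLUSIVE MODE;
DELETE FROM big_table b
USING (SELECT ctid
       FROM big_table
       WHERE created < now() - interval '1 year'
       LIMIT 100000) doomed
WHERE b.ctid = doomed.ctid;
COMMIT;
-- VACUUM big_table afterwards to return the space
```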
changed. One could even
decrement rpc slowly and find out at what point it changes,
which would be more interesting than testing arbitrary numbers.
Would lead to some really sweet graphs as well. :)
- --
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.
ented acceleration here and
b) on Solaris. You won't find a compelling performance improvement
listed at
https://ext4.wiki.kernel.org/articles/c/l/a/Clarifying_Direct_IO%27s_Semantics_fd79.html
and Linux has generally ignored direct I/O as something important to
optimize for.
--
Greg Smith 2ndQ
chive
archive
To make sure the difference wasn't some variation on "gets slower after
each run". pgbench suffers a lot from problems in that class.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
jects/reorg/
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com
15:39:55.489344 semop(16777323, 0x7fffd09dac00, 1) = 0
Process 12634 detached
pg_locks for this 12634 shows all granted ones, nothing exciting there.
I asked how well this executes with enable_nestloop turned off, hoping
to see that next.
This all seems odd, and I get interested and concerned w
eliminate several of them.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
ng,
you may not find as much about ZFS tuning and SSDs as you'd like though.
http://www.2ndquadrant.com/en/talks/ has some updated material about
things discovered since the book was published. The "Bottom-Up Database
Benchmarking" there shows the tests I'm running nowadays
Looks good to me. I built PG with this change, no kernel warnings after
~10 minutes of running. I'll continue to monitor but I think this fixes
the syndrome. Thanks Tom.
-Greg
On Fri, 18 Nov 2011, Tom Lane wrote:
Claudio Freire writes:
On Thu, Nov 17, 2011 at 10:07 PM, Greg Mat
sons I keep so many samples on that
program's page is to help people navigate this whole maze, and have some
data points to set expectations against. See
https://github.com/gregs1104/stream-scaling for the code and the samples.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Bal
ing when system
logs fill up, and the extra floating point handling may affect some other
process(es).
-Greg
aluate RHEL6 to see how it's doing soon, just haven't had
time/requests for it.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
onds on one of their servers if I try
to make that problem bad, you're only seeing the middle range of latency
issues so far.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
y the ext4 changes. In just about every other way but commit
performance, ext4 is faster than most other filesystems.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
t now there's no mention of that
parameter in the section that talks about parameters to configure; there
really should be.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
ts=16 is a low setting.
You might as well set it to a large number, say 128, and let checkpoints
get driven by time instead. The existing limit isn't working
effectively anyway, and having more segments lets the checkpoint
spreading code work more evenly.
--
Greg Smith 2ndQuadrant US
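In postgresql.conf terms that advice might look like the following. (checkpoint_segments applies to the pre-9.5 servers under discussion; it was later replaced by max_wal_size.)

```
checkpoint_segments = 128            # was 16; stop segment-driven checkpoints
checkpoint_timeout = 15min           # let time drive checkpoints instead
checkpoint_completion_target = 0.9   # spread the checkpoint writes out
```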
On 19 Oct 2011, at 17:51, Anibal David Acosta wrote:
> For example:
> Table A
> -id (PK)
> -name
>
> Table B
> -table_a_id (PK, FK)
> -address
>
> When I do an insert on table B, the database check if value for column
> “table_a_id” exists in table A
> But, if I do an update of column “addre
plenty of times that the reason
behind "why isn't it using my index?" is "the index is too fat to
navigate efficiently", because the actual number of blocks involved is
factored into the cost computations.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimor
e trouble. I did a few blog
entries and mailing list posts about this earlier this year, and that
was as much documentation as I could justify at the time. If there's a
user-visible behavior changes here, that's the point where an update to
the manual would be in order.
--
Greg Smi
ere have been beating out improvements made to the
database fast enough that it's hard to innovate in this area within
Postgres.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
'd have to put a kernel bug on the list of suspects with this
unusual vmstat output.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
's hard to do
that in the position you're in now though.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
ose things. The advantage to making them happen less often is that
you get less total writes. People need to be careful about going a long
*time* between checkpoints. But there's very few cases where you need
to worry about the segment count going too high before another one is
trigg
useful.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us