running multiple postmasters on the
same machine that can speak to the postgresql.conf knobs more specifically.
I'd still suggest you upgrade to at least 8.1.8.
Thanks
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jeff Frost
Sent: Thursday, Apri
manual
process (i.e. open up the database files in a hex editor and look for
your data), but there would be no guarantee.
Regards,
Jeff Davis
---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
DB on every web page, you might consider playing
with the commit_delay and commit_siblings parameters in the postgresql.conf.
Also, if you're doing multiple inserts as separate transactions, you should
consider batching them up in one transaction.
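To illustrate the batching suggestion, here's a minimal sketch (table and column names are made up):

```sql
-- One transaction, one commit (and one WAL flush) for many rows,
-- instead of an implicit commit per INSERT.
BEGIN;
INSERT INTO page_hits (url, hit_time) VALUES ('/home', now());
INSERT INTO page_hits (url, hit_time) VALUES ('/about', now());
-- ... more inserts ...
COMMIT;
```

With many concurrent committers, commit_delay/commit_siblings in postgresql.conf can additionally let several commits share one flush.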
--
Jeff Frost,
report against the log and post us the EXPLAIN ANALYZE output from your
slow queries.
And if Ron is indeed local, it might be worthwhile to contact him. Someone
onsite would likely get this taken care of much faster than we can on the
mailing list.
--
Jeff Frost, Owner <[EMAI
if you're not
doing this with regularity and strongly consider enabling autovacuum.
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954
ate
results. Do those all have an id? If not, why not? How do you join a
virtual relation to a physical relation if the virtual relation has no
internal id? Is the id field still unique in the result of a join or
Cartesian product?
Regards,
Jeff Davis
of the last 6 elements
and see if postgresql will use a BitmapAnd to combine them?
Regards,
Jeff Davis
btree (lat_radians);
CREATE INDEX long_radians ON test_zip_assoc USING btree
(long_radians);
Maybe I'm missing something, but wouldn't it be easier to just use
PostGIS? Or failing that, use the vanilla built-in point type and an
r-tree index? That's what r-tree indexes are made for.
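A hedged sketch of the point-type approach, reusing the table from above (the new column name is made up; as of 8.2 the r-tree behavior for points is provided by a GiST index):

```sql
-- Store both coordinates as one point and index it spatially.
ALTER TABLE test_zip_assoc ADD COLUMN loc point;
UPDATE test_zip_assoc SET loc = point(long_radians, lat_radians);
CREATE INDEX test_zip_assoc_loc ON test_zip_assoc USING gist (loc);

-- A bounding-box search that can use the index (corners are examples):
SELECT * FROM test_zip_assoc WHERE loc <@ box '((-2.1,0.6),(-2.0,0.7))';
```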
g their own statistics? What's the easiest
way to take statistical samples of the data in a table without reading
the entire thing?
Regards,
Jeff Davis
achine perhaps just can't handle that many FDs in all of those
processes at once.
That is a lot of tables. Maybe a different OS will handle it better?
Maybe there's some way that you can use fewer connections and then the
OS could still handle it?
Regards,
Jeff Davis
one customer's DB and execute the query there?
Have you already considered using views with specific privileges to
separate your customers?
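For example, a minimal sketch of the per-customer-view idea (all names are hypothetical):

```sql
-- One view per customer over a shared table, readable only by that
-- customer's role.
CREATE VIEW customer_42_orders AS
    SELECT * FROM orders WHERE customer_id = 42;

REVOKE ALL ON orders FROM acme_user;
GRANT SELECT ON customer_42_orders TO acme_user;
```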
Regards,
Jeff Davis
requested from the OS. The OS
does its own caching, so many of those reads may come from the OS
buffer cache rather than the disk itself.
Also, if you're concerned with the number since the last server restart,
make sure you have stats_reset_on_server_start set appropriately.
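To see the split between PostgreSQL's own buffer cache and requests passed to the OS, you can query the statistics views, e.g.:

```sql
-- blks_hit: found in shared_buffers; blks_read: requested from the OS
-- (which may still serve them from its own cache, not the disk).
SELECT datname,
       blks_read,
       blks_hit,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
FROM pg_stat_database;
```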
Re
There's also FAILOVER, which can set a new origin even if the old origin
is completely gone; however, you will lose the transactions that haven't
been replicated yet.
To have a new node join the cluster, you SUBSCRIBE SET, and you can MOVE
SET to it later if you want that to b
replicate our dbase
> by using slony?? and why slony??
>
This question should be asked on the slony1-general list, you'll get
more responses there.
The benefit of using slony is that you can read from many servers rather
than just one.
Regards,
Jeff Davis
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 650-780-
t can return a requested block. That could mitigate random I/O, if
the I/O is asynchronous and something intelligent (OS or controller) can
schedule it.
Regards,
Jeff Davis
Postgres that reading index pages isn't such a big deal, encouraging index
use) solved most of these issues for me.
Jeff
-Original Message-
From: smiley2211 [mailto:[EMAIL PROTECTED]
Sent: Friday, September 21, 2007 8:14 AM
To: pgsql-performance@postgresql.org
Subject: Re: [PE
s only 8MB
so easily fits in memory on all the test systems. Any ideas what's going
on? Setting enable_indexscan = 0 also yields a fast (200ms) plan, so that's
a workaround.
Here are the plans:
jeff=# explain ANALYZE SELECT DISTINCT ON (shows.id) shows.id,
seasons.position AS alias
On Fri, 5 Oct 2007, Stephen Frost wrote:
* Jeff Frost ([EMAIL PROTECTED]) wrote:
Here are the plans:
It's probably just me but, honestly, I find it terribly frustrating to
try and read a line-wrapped explain-analyze output... I realize it
might not be something you can control in
On Sat, 6 Oct 2007, Tom Lane wrote:
"Jeff Frost" <[EMAIL PROTECTED]> writes:
Before analyze it seems to choose Bitmap Heap Scan on episodes
current_episode, but after it chooses Index Scan Backward using
index_episodes_on_publish_on on episodes current_episode.
Have you t
issues about it?.
> thanks
> I'm working with postgres 8.0.1,
For the query in question, what would be faster than a seqscan? It
doesn't read the whole table, it only reads until it satisfies the limit
clause.
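This is easy to see in the plan (table name is made up); the Limit node sits above the Seq Scan, and its "actual rows" count stays small no matter how big the table is:

```sql
-- The seqscan stops as soon as 10 rows have been produced.
EXPLAIN ANALYZE SELECT * FROM big_table LIMIT 10;
```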
Regards,
Jeff Davis
be less remote than we expect ;)
Regards,
Jeff Davis
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
risking 256K data loss per outage".
Regards,
Jeff Davis
order; and any total order is allowed.
Radix sort only works for some data types and some total orders.
However, it would be very nice to use radix sorting where it does work.
That would require some extensions to the type system, but it could be
done.
The GPU issue is orthogonal.
Regards,
Je
present in the
archive; it must return nonzero when so asked."
So, it is safe to ignore those errors.
Personally, I would use a restore_command that is silent when the file
doesn't exist so that it doesn't pollute your logs. I'm not sure why the
documentation suggests
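A sketch of a restore_command that stays quiet when the file isn't in the archive yet (the archive path is made up; it still returns nonzero, as required, but logs nothing):

```
restore_command = 'test -f /mnt/archive/%f && cp /mnt/archive/%f %p'
```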
s eating 99% cpu. Is there a
chance that it's flooded with connection attempts?
Usually the work is done by backend processes, not the postmaster. The
postmaster just does some management like accepting connections and
starting new processes.
Regards,
Jeff Davis
n PG is relatively slow, especially so on
Windows, because it involves starting and setting up a new process for
each one.
Cheers,
Jeff
by a lot would be
to use a connection pooler.
The critical question is how often does each process create a new
connection. 200 processes which make one connection each and keep
them open for 10 minutes is quite different from 200 processes which
make and break connections as fast as they can.
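A hedged sketch of the pooler setup (database name and sizes are made up), using PgBouncer as one common choice:

```ini
; pgbouncer.ini: many short-lived client connections are multiplexed
; over a small, stable pool of real server connections.
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
default_pool_size = 20
```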
Che
are a much smaller number. That
is the whole point of pooling.
Cheers,
Jeff
is effectively serialized by the backend.
If the background writer cannot keep up, then the individual backends
start doing writes as well, so it isn't really serialized..
Cheers,
Jeff
On 12/25/10, Mladen Gogala wrote:
> Jeff Janes wrote:
>> If the background writer cannot keep up, then the individual backends
>> start doing writes as well, so it isn't really serialized..
>>
>>
> Is there any parameter governing that behavior?
No, it
ed_buffers,
shouldn't test with wal_buffers = shared_buffers/64 or
shared_buffers/128, with a lower limit of 8 blocks, and set that as
the default.
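The sizing heuristic being proposed can be sketched as a few lines of arithmetic (8 kB pages assumed; the function name is made up):

```python
# wal_buffers as a fixed fraction of shared_buffers, floored at 8 pages.
BLOCK_KB = 8

def wal_buffers_kb(shared_buffers_kb, divisor=64, min_blocks=8):
    """Return shared_buffers/divisor in kB, but never below min_blocks pages."""
    return max(shared_buffers_kb // divisor, min_blocks * BLOCK_KB)

print(wal_buffers_kb(32 * 1024))   # 32 MB shared_buffers -> 512 kB
print(wal_buffers_kb(1024))        # tiny shared_buffers -> floor of 64 kB
```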
Cheers,
Jeff
increase wal_buffers to 896kB before shmget fails, so I think
512kB would be a good default in that situation.
Maybe initdb should test larger values for shared_buffers as well,
rather than starting at only 32MB.
Cheers,
Jeff
symbol clash?
I've done this from Perl using DBI, DBD::Oracle, and DBD::Pg. As perl
is written in C, that should be a good sign for you.
Cheers,
Jeff
but did not remove
the promise we would be mentioning those things.
Cheers,
Jeff
y as a whole.
However, a series of proposals for individual hacks for specific purposes
will probably be rejected. I am in no way implying that you are
approaching it this way -- I am just trying to characterize an approach
that won't make progress.
Regards,
Jeff Davis
postgresql.conf?
I suspect the work load is entirely bulk inserts, and is using a
Buffer Access Strategy. By design, bulk inserts generally write out
their own buffers.
Cheers,
Jeff
sible altogether as of 8.3, but
> * we'd better check anyway.)
Seems like maybe an Assert would be called for.
Cheers,
Jeff
300.003 rows=1221391 loops=1)"
"Total runtime: 4472646.988 ms"
Have you been vacuuming or running autovacuum?
If you keep running queries like this you're certainly going to have
a ton of dead tuples, which would definitely explain these times too.
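To check for dead-tuple buildup, you can look at the stats views before vacuuming (table name in the VACUUM is made up; the n_dead_tup column is available in newer versions):

```sql
-- Tables with the most accumulated dead tuples.
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

VACUUM ANALYZE some_table;
```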
--
Jeff Trout <[EMAIL
/lmbench
as numbers from lmdd are seen on this list frequently.
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954
If you respond off-list, I'll summarize and post the results back.
Thanks for any input.
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954
rate, it's
going to take about 10 hours to restore our database.
Most of the time, the server is not using very much CPU time or I/O time. So
I'm wondering what can be done to speed up the process?
Brendan,
Where are the WAL files being stored and how are they being read back?
--
Brendan,
Is your NFS share mounted hard or soft? Do you have space to copy the files
locally? I suspect you're seeing NFS slowness in your restore since you
aren't using much in the way of disk IO or CPU.
-Jeff
On Thu, 20 Apr 2006, Brendan Duddridge wrote:
Oops... forgot to me
very large datasets. It also tries to figure
out the size you want (2x RAM) - the original bonnie is limited to 2GB.
--
Jeff Trout <[EMAIL PROTECTED]>
http://www.jefftrout.com/
http://www.stuarthamm.net/
On May 3, 2006, at 10:16 AM, Vivek Khera wrote:
On May 3, 2006, at 9:19 AM, Jeff Trout wrote:
Bonnie++ is able to use very large datasets. It also tries to
figure out the size you want (2x RAM) - the original bonnie is
limited to 2GB.
but you have to be careful building bonnie++ since
at is
kicking the performance down the tube.
If it were PG's fault it wouldn't be stuck uninterruptible.
--
Jeff Trout <[EMAIL PROTECTED]>
http://www.dellsmartexitin.com/
http://www.stuarthamm.net/
07346
(1 row)
(231907346-231894522)/300 = 42.74666666 TPS
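The TPS arithmetic above is just the difference of a transaction counter sampled 300 seconds apart:

```python
# Two samples of a cumulative transaction counter, 300 s apart.
start_count = 231894522
end_count = 231907346
interval_s = 300

tps = (end_count - start_count) / interval_s
print(tps)  # roughly 42.75 transactions per second
```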
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954
8.1 for better performance overall.
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954
On Fri, 7 Jul 2006, William Scott Jordan wrote:
Hi Jeff,
We are running ANALYZE with the hourly VACUUMs. Most of the time the VACUUM
for this table looks like this:
INFO: vacuuming "public.event_sums"
INFO: index "event_sums_event_available" now contains 56121 r
d thousand rows it still takes tens of minutes
with high CPU. My database does have a lot of tables (can be several
thousand), can that cause performance issues?
Thanks,
Neil
("outer".eppairdefnid = "inner".eppairdefnid)
-> Seq Scan on ftone (cost=0.00..23583.33 rows=1286333 width=10)
(actual time=0.04..2299.94 rows=1286333 loops=1)
-> Hash (cost=0.00..0.00 rows=1 width=4) (actual
time=206.01..206.01 rows=0 loops=1)
-> Seq Scan on fttemp
hardware like?
I run through a loop, executing the following or similar queries 8
times (well actually 12 but the last 4 don't do anything) - Jeff I've
attached complete outputs as files.
A debug output further below (numbers after each method call name,
above each SQL statement, ar
On Tue, 11 Jul 2006, Jeff Frost wrote:
On Wed, 12 Jul 2006, Neil Hepworth wrote:
You might also want to turn on autovacuum and see if that helps.
What's your disk subsystem like? In fact, what's the entire DB server
hardware like?
By the way, how big does the temp table get
new version?
- Original Message -----
From: "Jeff Frost" <[EMAIL PROTECTED]>
To: "Neil Hepworth" <[EMAIL PROTECTED]>
Cc:
Sent: Wednesday, July 12, 2006 10:27 AM
Subject: Re: [PERFORM] High CPU Usage - PostgreSQL 7.3
On Wed, 12 Jul 2006, Neil Hepworth wrote:
would tune postgres to avoid using the
CPU.
Neil
On 12/07/06, Jeff Frost <[EMAIL PROTECTED]> wrote:
On Wed, 12 Jul 2006, Neil Hepworth wrote:
> I am using version PostgreSQL 7.3.10 (RPM:
> postgresql73-rhel21-7.3.10-2). Unfortunately vacuumdb -a -v does not
> give the FSM
72 1
so I'm going to be going with hw r5, which went against what I
thought going in - read perf is more important for my usage than write.
I'm still not sure about that software 10 read number. something is
not right there...
--
Jeff Trout <[EMAIL PROTECTED]
he 6i controller.
If you look in the pgsql-performance archives a week or two ago
you'll see a similar thread to
this one - in fact, it is also about a dl385 (but he had a 5i
controller)
--
Jeff Trout <[EMAIL PROTECTED]>
http://www.dellsmartexitin.com/
http://ww
nt to have data in the cache while seeing
how long it takes for the battery to drain :)
--
Jeff Trout <[EMAIL PROTECTED]>
http://www.dellsmartexitin.com/
http://www.stuarthamm.net/
for WAL on 3Par, ext3
Does your disk controller have battery-backed writeback cache? How much?
> With PostgreSql 8.1.4
>
> We don't have i/o bottle neck.
>
Well, chances are PostgreSQL is waiting for fsync, which means you do
have an I/O bottleneck (however, you're not
other suggestions are appreciated. Is there a common place to look
for hardware suggestions (like a postgresql hardware FAQ)?
Regards,
Jeff Davis
r are Opterons significantly better?
I may go for more storage as well (i.e. getting 300GB disks), but I am
still determining the potential need for storage. I can get more RAM at
a later date if necessary also.
Regards,
Jeff Davis
not migrated from 7.x or anything
> like that).
Usually this is caused by either:
(1) You're not vacuuming as a superuser, so it's not able to vacuum
everything.
(2) You have a long-running transaction that never completed for some
strange reason.
Hope this helps,
On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:
> Hi Jeff,
>
> My experience with the 2950 seemed to indicate that RAID10x6 disks did
> not perform as well as RAID5x6. I believe I posted some numbers to
> illustrate this in the post you mentioned.
>
Very interesting.
determined that
JFS was a good choice. However, I assume things have changed somewhat
since then. Does anyone have a pointer to some newer results?
Regards,
Jeff Davis
On Wed, 2006-08-23 at 21:50 -0400, Michael Stone wrote:
> On Wed, Aug 23, 2006 at 03:23:03PM -0700, Jeff Davis wrote:
> >Also, do ext2 or UFS without soft updates run the risk of losing or
> >corrupting my data?
>
> I suggest you check the list archives; there
On Thu, 2006-08-24 at 09:21 -0400, Merlin Moncure wrote:
> On 8/22/06, Jeff Davis <[EMAIL PROTECTED]> wrote:
> > On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:
> > Very interesting. I always hear that people avoid RAID 5 on database
> > servers, but I suppos
ch we are using is 7.2.4
>
I highly recommend upgrading if at all possible. That's quite an old
version.
Hope this helps,
Jeff Davis
,
etc., to add up to more than 2GB on a system with 4GB of RAM.
Is there a general consensus on the matter, or is it highly application-
dependent? I am not doing any huge amount of 64-bit arithmetic.
I am using Woodcrest, not Opteron.
Regards,
Jeff Davis
abase will be more of an archive, the requirements might be quite
reasonable. However, if or when you do need to search through that data,
expect it to take a long time.
Regards,
Jeff Davis
he new?
Also, I suggest that you upgrade to 8.1. 7.4 is quite old, and many
improvements have been made since then.
Regards,
Jeff Davis
doesn't make sense at all to set it below 1.0, since that
is saying it's cheaper to get a random page than a sequential one.
What was your original random_page_cost, and what is the new value you
set it to?
Regards,
Jeff Davis
n all the
tables, because it does not have to read the entire table and doesn't
interfere with the rest of the operations.
Regards,
Jeff Davis
+ autovacuum_vacuum_threshold), and if
so, it runs VACUUM. If not, it sleeps. It works the same way for
ANALYZE.
So, in a large table, the scale_factor is the dominant term. In a small
table, the threshold is the dominant term. But both are taken into
account.
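The trigger condition described above can be written out as a small worked example (the function name is made up; 50 and 0.2 are the defaults in recent versions):

```python
# Dead tuples needed before autovacuum runs VACUUM on a table:
# autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
def autovacuum_trigger(reltuples, threshold=50, scale_factor=0.2):
    return threshold + scale_factor * reltuples

print(autovacuum_trigger(1_000_000))  # large table: scale factor dominates
print(autovacuum_trigger(100))        # small table: threshold dominates
```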
Regards,
Jeff Davis
--
hm_use_phys: 1" on FreeBSD), then that amount of physical
memory will never be available to processes other than postgres. At 2GB,
that still leaves 6GB for the other process, so you should be fine.
Regards,
Jeff Davis
kely that the insert plans will change.
There aren't multiple INSERT plans (however, there could be a subselect
or something, which would be planned separately). INSERT is INSERT. That
means effective_cache_size will have zero effect on INSERT.
Regards,
Jeff Davis
help a lot.
> It would be most helpfull to have something on the docs to specify what each
> setting affects most such as reads, writes, updates, inserts, etc..
I agree that they could be improved. It gets complicated quickly though,
and it's hard to generalize the effect that a perf
On Thu, 2006-09-14 at 21:04 -0400, Michael Stone wrote:
> On Thu, Sep 14, 2006 at 05:52:02PM -0700, Jeff Davis wrote:
> >Any long-running system will have very little "free" memory. Free memory
> >is wasted memory, so the OS finds some use for it.
>
> The important
all.
Also, I'll point out that what you want to do is very similar to using
typical relational constructs. Consider whether sub-selects or
aggregates in conjunction with set-returning functions can achieve what
you want. PostgreSQL is smart enough to only read t
(a) Your application could probably make better use of a relational
database; or
(b) The benchmark doesn't represent your application's needs.
Regards,
Jeff Davis
or force) the
planner in this way."
I'm not suggesting that we do that, but it seems better then embedding
the hints in the queries themselves.
Regards,
Jeff Davis
and augmented with the possibility of
> correlation hinting, or table level hinting.
Well, I wouldn't say "evil". Query hints are certainly against the
principles of a relational database, which separate the logical query
from the physical storage.
Regards,
Jeff Davis
--
On Thu, 2006-10-12 at 14:34 -0500, Jim C. Nasby wrote:
> On Thu, Oct 12, 2006 at 09:42:55AM -0700, Jeff Davis wrote:
> > On Thu, 2006-10-12 at 10:14 -0500, Jim C. Nasby wrote:
> > > The syntax these hints is something arbitrary. I'm borrowing Oracle's
> > > ide
On Thu, 2006-10-12 at 17:28 -0400, Tom Lane wrote:
> [ trying once again to push this thread over to -hackers where it belongs ]
>
> Arjen van der Meijden <[EMAIL PROTECTED]> writes:
> > On 12-10-2006 21:07 Jeff Davis wrote:
> >> On Thu, 2006-10-12 at 19:15 +0
N. At least, I can't get PostgreSQL to sort over two UNIONed tables
using an index; perhaps I'm missing it.
Regards,
Jeff Davis
Inheritance won't help him.
The problem is that he has two indexes, and he needs to find the max
between both of them. PostgreSQL isn't smart enough to recognize that it
can use two indexes, find the max in each one, and find the max of those
two values.
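A common workaround is to take the max of each table in its own subquery, so each one can use its own index, and then combine the results (names are made up):

```sql
-- Each subquery can become an index-backed max; the planner just
-- can't do this rewrite across two tables by itself.
SELECT greatest(
    (SELECT max(ts) FROM table_a),
    (SELECT max(ts) FROM table_b)
) AS overall_max;
```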
Regard
a view of a union.
If PostgreSQL could sort the result of a union by merging the results of
two index scans, I think the problem would be solved. Is there something
preventing this, or is it just something that needs to be added to the
planner?
Regards,
Jeff Davis
On Wed, 2006-10-18 at 17:35 -0500, Jim C. Nasby wrote:
> On Wed, Oct 18, 2006 at 03:32:15PM -0700, Jeff Davis wrote:
> > On Wed, 2006-10-18 at 17:10 -0500, Jim C. Nasby wrote:
> > > Sorry, don't have the earlier part of this thread, but what about...
> > >
>
laces?
Currently, I'm looking at Penguin, HP and Sun (though Sun's store isn't
working for me at the moment). Maybe I just need to order a Penguin and then
buy the controller separately, but was hoping to get support from a single
entity.
--
Jeff Frost, Owner <[
d DB which is mostly read-intensive,
but occasionally has large bursts of write activity due to new user
signups generated by the marketing engine.
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 650
d how much BBU cache can
you put in it? Oh, does it use the good ole megaraid_mbox driver as well?
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954
---(
rest.
I could only find the 6 disk RAID5 numbers in the archives that were run with
bonnie++1.03. Have you run the RAID10 tests since? Did you settle on 6 disk
RAID5 or 2xRAID1 + 4XRAID10?
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Frost Consulting, LLC http://www.frostconsult
only pairs
after that.
A valid question. Does the caching raid controller negate the desire to
separate pg_xlog from PGDATA?
--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Frost Consulting, LLC http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-64
s for more. 10k would probably be a better start.
- Jeff
>Hi, all,
>
>I have got a new MaC OS G5 with 8GB RAM. So i tried to increase
>the shmmax in Kernel so that I can take advantage of the RAM.
>
>I searched the web and read the manual for PG7.4 chapter 16.5.1.
>Af
in a machine; it's quite
another thing to know how much the user wants dedicated to PostgreSQL.
A couple of questions like that can go a long way to coming up with
better ballpark figures.
--
Jeff Hoffmann
[EMAIL PROTECTED]
ing to do with the GROUP BYs, but we won't know
until you post EXPLAIN ANALYZE results.
Regards,
Jeff Davis
y purposes, I'm content to use the union of separate views in my
application, so if this doesn't pique anyone's interest, feel free to
ignore it.
Jeff
anation for me. I'll spare you the gory
details of my tables, unless a developer wants to have a look at it
off-list.
Jeff
On 11/3/07, Tom Lane wrote:
> "Jeff Larsen" <[EMAIL PROTECTED]> writes:
> > My case probably fits the 'special case' description. Not all the
> > columns in each subquery matched up, so there were NULL::text
> > placeholders in some spots in the SE
the pgiosim project on pgfoundry; it sort of simulates a pg index
scan, which is probably what you'll want to focus on more than seq
read speed.
--
Jeff Trout <[EMAIL PROTECTED]>
http://www.dellsmartexitin.com/
http://www.stuarthamm.net/