Hi Tom,

You are right, of course:

pgpool=# set constraint_exclusion = on ;
SET
pgpool=# explain analyze
select * from part where id1=3 and id2=5
;
QUERY
And sorry for the hassle.
I was running the db cluster with the wrong (old) postgresql.conf ;(
Best Regards,
Milen
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Monday, September 18, 2006 1:14 AM
To: Milen Kulev
Cc: pgsql-performance@postgresql.org
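For context, a minimal sketch of the kind of partitioned setup the session above depends on (the child tables and their CHECK constraints are assumed here, not taken from the thread); constraint_exclusion only lets the planner skip partitions whose CHECK constraints contradict the WHERE clause:

    CREATE TABLE part (id1 int, id2 int);
    CREATE TABLE part_1 (CHECK (id1 = 1)) INHERITS (part);
    CREATE TABLE part_3 (CHECK (id1 = 3)) INHERITS (part);

    SET constraint_exclusion = on;
    -- With the setting on, the plan should scan only the parent and part_3;
    -- with it off (the old postgresql.conf above), every child table is scanned.
    EXPLAIN SELECT * FROM part WHERE id1 = 3 AND id2 = 5;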
That query is generated by hibernate, right?
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Pallav
Kalva
Sent: 15 September 2006 17:10
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Optimize SQL
Hi,
Is there any way we can optimize this
Hi, Piotr,
Piotr Kołaczkowski wrote:
Why match rows from the heap if ALL required data are in the index itself?
Why look at the heap at all?
Because the index does not contain any transaction information; it
has to look at the heap to find out which of the rows are current.
This is one
Because there is no MVCC information in the index.
cug
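A small sketch of the behaviour being described, with a made-up table: even when every column the query needs is stored in the index, the executor still has to fetch the heap tuple, because visibility information (xmin/xmax) lives only in the heap:

    CREATE TABLE products (id int PRIMARY KEY, name text);

    -- Only "id" is selected and "id" is indexed, yet the plan is an Index Scan
    -- that visits each matching heap tuple to check whether that row version
    -- is visible to the current transaction.
    EXPLAIN ANALYZE SELECT id FROM products WHERE id BETWEEN 1 AND 1000;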
2006/9/12, Piotr Kołaczkowski [EMAIL PROTECTED]:
On Tuesday 12 September 2006 12:47, Heikki Linnakangas wrote:
Laszlo Nagy wrote:
I ran another test. I created a file with the identifiers and names of
the products:
psql#\o
Hi Guillaume,
Now I have disabled Hyper-Threading in the BIOS, and the context switch storms
have disappeared (when I look with the command sar -t).
I decreased the work_mem parameter to 32768. My CPU load is better, but it
is still too high, for example:
top - 16:27:05 up 9:13, 3 users, load average:
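For reference, a rough sketch of what the work_mem change above means (the value is the one mentioned, everything else is illustrative): work_mem is allocated per sort or hash step, per backend, so it multiplies with the number of concurrent connections:

    SET work_mem = 32768;   -- in kB on 8.x, i.e. about 32 MB per sort/hash step
    SHOW work_mem;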
Yes. What's pretty large? We've had to redefine large recently; now we're
talking about systems with between 100 TB and 1,000 TB.
- Luke
Well, I said large, not gargantuan :) - Largest would probably be around
a few TB, but the problem I'm having to deal with at the moment is large
numbers
Hi Markus,
On Friday, 15 September 2006 at 11:43 +0200, Markus Schaber wrote:
Hi, Jérôme,
Jérôme BENOIS wrote:
max_connections = 512
Do you really have that many concurrent connections? Then you should
think about getting a larger machine, probably.
You will definitely want to
On 9/18/06, Jérôme BENOIS [EMAIL PROTECTED] wrote:
Tomorrow morning I plan to add 2 GB of RAM in order to test the difference with
my current config.
I don't think more RAM will change anything if you don't swap at all.
You can try to set shared_buffers lower (try 32768 and 16384) but I
don't
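A sketch of the suggestion above as postgresql.conf lines (shared_buffers is counted in 8 kB pages on 8.x, so these values are roughly 256 MB and 128 MB; a restart is needed for the change to take effect):

    shared_buffers = 32768     # 8 kB pages, ~256 MB
    # shared_buffers = 16384   # the smaller value to try, ~128 MB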
On 9/18/06, Bucky Jordan [EMAIL PROTECTED] wrote:
My question is at what point do I have to get fancy with those big
tables? From your presentation, it looks like PG can handle 1.2 billion
records or so as long as you write intelligent queries. (And normal PG
should be able to handle that,
On Monday 18 September 2006 13:56, Merlin Moncure [EMAIL PROTECTED]
wrote:
just another fyi, if you have a really big database, you can forget
about doing pg_dump for backups (unless you really don't care about
being x day or days behind)... you simply have to do some type of
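A rough sketch of the alternative usually meant here, 8.x PITR / WAL archiving (the archive path and backup label are invented for illustration):

    -- postgresql.conf: archive_command = 'cp %p /backup/wal/%f'
    SELECT pg_start_backup('weekly base backup');
    -- ... copy $PGDATA to backup storage with tar or rsync while the cluster stays up ...
    SELECT pg_stop_backup();
    -- WAL segments archived after the base backup allow restoring to a point in time.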
On Thu, Sep 14, 2006 at 11:23:01AM -0400, Francisco Reyes wrote:
My setup:
Freebsd 6.1
Postgresql 8.1.4
Memory: 8GB
SATA Disks
Raid 1 10 spindles (2 as hot spares)
500GB disks (16MB buffer), 7200 rpm
Raid 10
Raid 2 4 spindles
150GB 10K rpm disks
Raid 10
shared_buffers = 1
good normalization skills are really important for large databases,
along with materialization strategies for 'denormalized sets'.
Good points- thanks. I'm especially curious what others have done for
the materialization. The matview project on gborg appears dead, and I've
only found a
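For what it's worth, a hand-rolled sketch of the kind of materialization being discussed (PostgreSQL had no built-in materialized views at the time; the table and column names are invented):

    -- Build the "denormalized set" once:
    CREATE TABLE order_totals AS
        SELECT customer_id, sum(amount) AS total
        FROM orders
        GROUP BY customer_id;

    -- Crude full refresh; triggers or incremental maintenance are the usual next step.
    BEGIN;
    TRUNCATE order_totals;
    INSERT INTO order_totals
        SELECT customer_id, sum(amount) FROM orders GROUP BY customer_id;
    COMMIT;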
Do the basic math: If you have a table with 100 million records, each of which is 200 bytes long, that gives you roughly 20 GB of data (assuming it was all written neatly and hasn't been updated much). If you have to do a full table scan, then it will take roughly 400 seconds with a single 10k RPM
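Spelling that arithmetic out (the ~50 MB/s sequential rate is an assumption about a single 10k RPM drive of that era, not a number from the thread):

    100,000,000 rows x 200 bytes           ~= 20 GB of heap
    20 GB / ~50 MB/s sequential read rate  ~= 400 seconds for one full table scan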
On Mon, Sep 18, 2006 at 07:14:56PM -0400, Alex Turner wrote:
If you have a table with 100 million records, each of which is 200 bytes long,
that gives you roughly 20 GB of data (assuming it was all written neatly
and hasn't been updated much).
If you're in that range it doesn't even count
I'm having a problem with a simple query that finds children of a node,
using a materialized path to the node. The query:
select n1.id
from nodes n1, nodes n2
where n1.path like n2.path || '%'
and n2.id = 14;
QUERY PLAN
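A sketch of the workaround that usually comes up for this pattern (the returned path value is invented): the planner can only turn LIKE into an index range scan when the prefix is a constant known at plan time, not a value from another row, so one option is to fetch the parent's path first and use it as a literal:

    SELECT path FROM nodes WHERE id = 14;        -- suppose it returns '1.7.14.'
    SELECT id FROM nodes WHERE path LIKE '1.7.14.%';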
Bucky,
On 9/18/06 7:37 AM, Bucky Jordan [EMAIL PROTECTED] wrote:
My question is at what point do I have to get fancy with those big
tables? From your presentation, it looks like PG can handle 1.2 billion
records or so as long as you write intelligent queries. (And normal PG
should be able to
Alex,
On 9/18/06 4:14 PM, Alex Turner [EMAIL PROTECTED] wrote:
Be warned, the tech specs page:
http://www.sun.com/servers/x64/x4500/specs.xml#anchor3
doesn't mention RAID 10 as a possibility, and this is probably what most would
recommend for fast data access if you are doing both read and
Thanks Tom,
Is that documented somewhere? I can't seem to see any mention of it in
the docs.
Tom Lane wrote:
Marc McIntyre [EMAIL PROTECTED] writes:
... Is there a way to perform this efficiently in one query?
No, because you're hoping for an indexscan optimization of a LIKE
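For completeness, the other half of that limitation on 8.x: even with a constant prefix, a plain btree index only serves LIKE in the C locale; otherwise the index must be built with the pattern operator class (assuming the nodes/path table quoted earlier in this digest; the index name is made up):

    CREATE INDEX nodes_path_like_idx ON nodes (path text_pattern_ops);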
Sweet - that's good - RAID 10 support seems like an odd thing to leave out.
Alex
On 9/18/06, Luke Lonergan [EMAIL PROTECTED] wrote:
Alex,
On 9/18/06 4:14 PM, Alex Turner [EMAIL PROTECTED] wrote:
Be warned, the tech specs page:
http://www.sun.com/servers/x64/x4500/specs.xml#anchor3
doesn't mention
Yep, Solaris ZFS kicks butt. It does RAID10/5/6, etc and implements most of
the high end features available on high end SANs...
- Luke
On 9/18/06 8:40 PM, Alex Turner [EMAIL PROTECTED] wrote:
Sweet - that's good - RAID 10 support seems like an odd thing to leave out.
Alex
On 9/18/06,
Mark,
On 9/18/06 8:45 PM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Does a tool exist yet to time this for a particular configuration?
We're considering building this into ANALYZE on a per-table basis. The
basic approach times sequential access in page rate, then random seeks as
page rate
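As a point of reference (the numbers are illustrative, not advice from the thread): the planner already exposes its sequential-versus-random assumption as a cost parameter, which is what a per-table calibration tool like the one described would presumably feed:

    SHOW random_page_cost;          -- 4.0 by default
    SET random_page_cost = 2.5;     -- e.g. for arrays where seeks are relatively cheap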