Dean Gibson (DB Administrator) wrote:
The questions are:
1. Why is the planner scanning the entire idx_listing_entrydate, when
I'd think it should be scanning the entire
pk_listingstatus_listingstatusid?
It's looking at the ORDER BY and sees that the query needs the 10 most
recent, so
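A minimal sketch of the query shape being discussed (table and column names here are assumptions, not taken from the original schema): with an ORDER BY ... LIMIT, the planner can walk an index on the sort column backwards and stop after the first 10 rows, which is why it favours idx_listing_entrydate.

```sql
-- Hypothetical names; an index on entrydate lets the planner do a
-- backward index scan and stop after 10 matching rows.
SELECT *
FROM   listing
ORDER  BY entrydate DESC
LIMIT  10;
```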
Hello.
I think I started that discussion. We ended up buying a Dell 2900 with
PERC 6/i and 10 * 145GB SAS 3.5" 15k RPM discs. 6 of the SAS discs are
in a raid 10 for the database, 2 in a mirror for the wal and the last
2 in a mirror for the OS. We get 350MB/s writing and 380MB/s reading
Yes, I also thought of this method and tested it before I got your mail;
this solution seems workable.
Thanks for the help
On Feb 12, 2008 9:18 PM, Tom Lane [EMAIL PROTECTED] wrote:
Linux Guru [EMAIL PROTECTED] writes:
Analyzing did not help; here is the output of EXPLAIN ANALYZE of the update:
On 13-Feb-08, at 5:02 AM, Tore Halset wrote:
Hello.
I think I started that discussion. We ended up buying a Dell 2900
with PERC 6/i and 10 * 145GB SAS 3.5" 15k RPM discs. 6 of the SAS
discs are in a raid 10 for the database, 2 in a mirror for the wal
and the last 2 in a mirror for the OS.
Can you send the table definitions of the tables involved in the
query, including index information? Might be if we look hard enough we
can find something.
Peter
Table messungen_v_dat_2007_11_12
Column | Type | Modifiers | Description
We have tried some recoding now, using a materialized view we could
reduce the query to a join over two tables without any functions inside
the query, for example:
explain analyse SELECT '12.11.2007 18:04:00 UTC' AS zeit,
'M' AS ganglinientyp,
zs_de,
We have a web application for which we intend to run the database on a
dedicated server.
We hope by the end of 2008 to have 10 companies accessing 10 instances
of the database for this application. The dump file of each database is
likely to be less than 100MB at the end of the year. The
On Wed, 13 Feb 2008, Rory Campbell-Lange wrote:
4 x 147GB 15000 rpm SCSI in RAID 10 with 320-1 RAID CARD + 64MB cache BBU
2x Intel Xeon E5405 / 4x 2.00GHz / 1333MHz FSB / 12MB cache
6GB RAM
Cost around 2320 GBP -- it would be great to get it under 2000
Needs to be in the UK.
I want to create and update two tables in a function such as below, but
using parameters as tablename is not allowed and gives an error. Is there
any way I could achieve this?
CREATE OR REPLACE FUNCTION test ( t1 text, t2 text ) RETURNS numeric AS $$
DECLARE
    temp1 text;
    temp2 text;
BEGIN
On Wednesday 13 February 2008 15:25, Linux Guru wrote:
I want to create and update two tables in a function such as below, but
using parameters as tablename is not allowed and gives an error. Is there
any way I could achieve this?
You're looking for EXECUTE:
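A minimal sketch of the EXECUTE approach (the column names and statements inside are placeholders, not from the original function): building the SQL as a string lets the table name come from a parameter, and quote_ident() quotes it safely.

```sql
CREATE OR REPLACE FUNCTION test(t1 text, t2 text) RETURNS numeric AS $$
BEGIN
    -- Build each statement as a string; quote_ident() safely quotes
    -- the identifiers supplied as parameters.
    EXECUTE 'CREATE TABLE ' || quote_ident(t1)
         || ' (id integer, val numeric)';
    EXECUTE 'UPDATE ' || quote_ident(t2) || ' SET val = val * 2';
    RETURN 0;
END;
$$ LANGUAGE plpgsql;
```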
On Wed, 13 Feb 2008, Magnus Hagander wrote:
On Wed, 13 Feb 2008, Rory Campbell-Lange wrote:
4 x 147GB 15000 rpm SCSI in RAID 10 with 320-1 RAID CARD + 64MB cache
BBU
In my experience, battery backed cache is always worth the money. Even if
you're mostly select, you will have some updates.
Thomas Zaksek [EMAIL PROTECTED] writes:
Nested Loop Left Join (cost=0.00..32604.48 rows=3204 width=14) (actual
time=11.991..2223.227 rows=2950 loops=1)
- Index Scan using
messungen_v_dat_2007_11_12_messpunkt_minute_tag_idx on
messungen_v_dat_2007_11_12 m (cost=0.00..5371.09 rows=3204
On Feb 13, 2008 5:02 AM, Tore Halset [EMAIL PROTECTED] wrote:
Hello.
I think I started that discussion. We ended up buying a Dell 2900 with
PERC 6/i and 10 * 145GB SAS 3.5" 15k RPM discs. 6 of the SAS discs are
in a raid 10 for the database, 2 in a mirror for the wal and the last
2 in a mirror
Matthew wrote:
On Wed, 13 Feb 2008, Rory Campbell-Lange wrote:
4 x 147GB 15000 rpm SCSI in RAID 10 with 320-1 RAID CARD + 64MB
cache BBU
2x Intel Xeon E5405 / 4x 2.00GHz / 1333MHz FSB / 12MB cache
6GB RAM
Cost around 2320 GBP -- it would be great to get it under 2000
Needs to
Hi all,
We're considering setting up a SAN where I work. Is there anyone using
a SAN, for postgres or other purposes? If so I have a few questions
for you.
- Are there any vendors to avoid or ones that are particularly good?
- What performance or reliability implications exist when using SANs?
On Wed, Feb 13, 2008 at 10:56:54AM -0600, Peter Koczan wrote:
Hi all,
We're considering setting up a SAN where I work. Is there anyone using
a SAN, for postgres or other purposes? If so I have a few questions
for you.
- Are there any vendors to avoid or ones that are particularly good?
For so many rows I'm surprised it's not using a bitmap indexscan.
What PG version is this? How big are these tables?
regards, tom lane
It's PG 8.2.6 on FreeBSD.
messungen_v_dat_2007_11_12 is about 4 million rows and messwerte is
about 10 million rows.
Folks,
Does anyone know if HOT is compatible with pg_toast tables, or do TOASTed rows
simply get excluded from HOT? I can run some tests, but if someone knows
this off the top of their heads it would save me some time.
--
Josh Berkus
PostgreSQL @ Sun
San Francisco
On Feb 13, 2008 12:46 PM, Kenneth Marshall [EMAIL PROTECTED] wrote:
On Wed, Feb 13, 2008 at 10:56:54AM -0600, Peter Koczan wrote:
Hi all,
We're considering setting up a SAN where I work. Is there anyone using
a SAN, for postgres or other purposes? If so I have a few questions
for you.
On Wed, 13 Feb 2008, Rory Campbell-Lange wrote:
(https://secure.dnuk.com/systems/r325hs-1u.php?configuration=7766)
4 x 147GB 15000 rpm SCSI in RAID 10 with 320-1 RAID CARD + 64MB cache BBU
That's running the LSI Megaraid SCSI controller. Those are solid but not
the best performers in
Josh Berkus [EMAIL PROTECTED] writes:
Does anyone know if HOT is compatible with pg_toast tables, or do TOASTed
rows
simply get excluded from HOT?
The current TOAST code never does any updates, only inserts/deletes.
But the HOT logic should be able to reclaim deleted rows early via
pruning.
On Feb 13, 2008, at 12:06, Dave Cramer wrote:
On 13-Feb-08, at 5:02 AM, Tore Halset wrote:
Hello.
I think I started that discussion. We ended up buying a Dell 2900
with PERC 6/i and 10 * 145GB SAS 3.5" 15k RPM discs. 6 of the SAS
discs are in a raid 10 for the database, 2 in a mirror for
Thanks for all your replies, they are very helpful. I tried increasing
the statistics on fklistingsourceid to 1000 but it did not make any
difference. Then I created an index on (fklistingsourceid, entrydate);
it helped and the query was fast.
This index would fix this problem but in general I would like to know
what if there are
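The fix described above can be sketched as follows (the table name is an assumption inferred from the column names mentioned, not from the original schema):

```sql
-- Composite index covering the filter column plus the ORDER BY column,
-- so the planner can satisfy both with a single index scan.
CREATE INDEX idx_listing_source_entrydate
    ON listing (fklistingsourceid, entrydate);
```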
On Feb 13, 2008, at 20:45, Tore Halset wrote:
The box have 16GB of ram, but my original test file was only 25GB.
Sorry. Going to 33GB lowered the numbers for writing. Here you have
some samples.
% sh -c "dd if=/dev/zero of=bigfile bs=8k count=4000000 && sync"
32768000000 bytes (33 GB) copied,
Pallav Kalva [EMAIL PROTECTED] writes:
This index would fix this problem, but in general I would like to know:
what if there are queries where it does an index scan backwards and there
is no ORDER BY clause and the query is still bad? Would there be a case
like that, or does the planner use index scan
[Peter Koczan - Wed at 10:56:54AM -0600]
We're considering setting up a SAN where I work. Is there anyone using
a SAN, for postgres or other purposes? If so I have a few questions
for you.
Some time ago, my boss was planning to order more hardware - including a
SAN - and coincidentally, SANs
On Feb 12, 2008 4:11 AM, Thomas Zaksek [EMAIL PROTECTED] wrote:
I tried turning off nestloop, but with terrible results:
Yeah, it didn't help. I was expecting the query planner to switch to
a more efficient join plan.
I also tried to increase work_mem, now the config is
work_mem = 4MB
Try
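For experiments like these, the settings can be changed per-session instead of editing postgresql.conf; a sketch (the values are illustrative, not recommendations):

```sql
-- Per-session overrides for testing a single query plan;
-- they do not persist beyond the current session.
SET work_mem = '32MB';
SET enable_nestloop = off;
-- then re-run EXPLAIN ANALYZE on the query under test
```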
On 13-2-2008 22:06 Tobias Brox wrote:
What I'm told is that the state-of-the-art SAN allows for
an insane amount of hard disks to be installed, much more than what
would fit into any decent database server. We've ended up buying a SAN,
the physical installation was done last week, and I will be
Tom,
The current TOAST code never does any updates, only inserts/deletes.
But the HOT logic should be able to reclaim deleted rows early via
pruning.
OK, so for a heavy update application we should still see a vacuum
reduction, even if most of the rows are 40k large?
Time to run some tests
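One way to run such a test (the table, sizes, and values are hypothetical; n_tup_hot_upd is the relevant counter in pg_stat_user_tables as of 8.3):

```sql
-- Create a table whose payload column is large enough to be TOASTed,
-- update it, then check how many updates were done via HOT.
CREATE TABLE hot_toast_test (id int PRIMARY KEY, payload text);
INSERT INTO hot_toast_test VALUES (1, repeat('x', 40000));
UPDATE hot_toast_test SET payload = repeat('y', 40000) WHERE id = 1;

SELECT n_tup_upd, n_tup_hot_upd
FROM   pg_stat_user_tables
WHERE  relname = 'hot_toast_test';
```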
On Wed, 13 Feb 2008, Tobias Brox wrote:
What I'm told is that the state-of-the-art SAN allows for an insane
amount of hard disks to be installed, much more than what would fit
into any decent database server.
You can attach a surprisingly large number of drives to a server nowadays,
but in
[Arjen van der Meijden]
Your SAN-pusher should have a look at the HP-submissions for TPC-C...
The recent Xeon systems are all without SAN's and still able to connect
hundreds of SAS-disks.
Yes, I had a feeling that the various alternative solutions for direct
connection hadn't been
On Feb 13, 2008 5:02 PM, Greg Smith [EMAIL PROTECTED] wrote:
On Wed, 13 Feb 2008, Tobias Brox wrote:
What I'm told is that the state-of-the-art SAN allows for an insane
amount of hard disks to be installed, much more than what would fit
into any decent database server.
You can attach a
Should this be summarized somewhere in our docs; just a few lines with
the tradeoffs, direct storage = cheaper, faster, SAN = more configurable?
---
Scott Marlowe wrote:
On Feb 13, 2008 5:02 PM, Greg Smith [EMAIL
Thanks for all your input, it is very helpful. A SAN for our postgres
deployment is probably sufficient in terms of performance, because we
just don't have that much data. I'm a little concerned about needs for
user and research databases, but if a project needs a big, fast
database, it might be