Limit the sub-queries to 1, i.e.:
select 1 from Table2 where Table2.ForeignKey = Table1.PrimaryKey fetch first 1 rows only
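(For context, a sketch of where that correlated sub-query would typically sit; the EXISTS wrapper is an assumption here, since the original query is cut off below:)

select Table1.*
from Table1
where exists (
    select 1
    from Table2
    where Table2.ForeignKey = Table1.PrimaryKey
    fetch first 1 rows only
);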
Andy.
On 19.02.2013 07:34, Bastiaan Olij wrote:
Hi All,
Hope someone can help me a little bit here:
I've got a query like the following:
--
select Column1, Co
have also become extremely slow. I was expecting a drop off when the
database grew out of memory, but not this much.
Am I really missing the target somewhere?
Any help and or suggestions will be very much appreciated.
Best regards,
Andy.
http://explain.depesz.com/s/cfb
select distinct
ch
would leave us struggling.
Could someone please let me know if this is usual and, if so, where the time
is spent?
Short of getting a faster server, is there anything I can do to influence this?
Thanks,
Andy
index. But why does it not use the IDX_order_sort_down_2 index for the sort?
Thanks,
Andy
Details:
Version
---
PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.5.2,
64-bit
Tables
--
CREATE TABLE node (
    node_id bigint NOT NULL,
    node_type int4 NO
postgres even better :)
Did you ever try re-writing some of the temp table usage to use
subselects/views/CTEs/etc.?
-Andy
>> h was generally the case with our previous hardware.
> Do you experience decreased query performance?
Yes we do experience substantial application performance degradations.
Maybe you are hitting some locks? If it's not IO and not CPU then maybe
something is getting locked and queries
that traffic still has to go through some bit of network stack, yes?
-Andy
will tell you what config file changes to make.
-Andy
big refcursor. Does it sound
weird (maybe it's worth more tests)? That's why we took that path.
No, if you tried it out, I'd stick with what you have. I've never used
them myself, so I was just wondering aloud.
-Andy
to re-do your single procedure that returns a bunch of
refcursors into multiple procedures each returning one resultset?
Or maybe it would be something you can speed test to see if it would
even make a difference.
-Andy
You could try using Unix domain socket and see if the performance improves. A
relevant link:
http://stackoverflow.com/questions/257433/postgresql-unix-domain-sockets-vs-tcp-sockets
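(A hedged illustration of the two connection styles; the socket directory varies by build/distro and the database name is invented:)

psql "host=/var/run/postgresql dbname=mydb"   # local Unix domain socket
psql "host=127.0.0.1 dbname=mydb"             # TCP loopback, for comparison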
From: Ofer Israeli
To: "pgsql-performance@postgresql.org"
Sent: Sunday, Apri
What does your perl look like? This would be wrong (it re-prepares the
statement on every pass through the loop):

for my $key (@list)
{
    my $q = $db->prepare('select a from b where c = $1');
    $q->execute($key);
    $result = $q->fetch;
}

This would be right (prepare once, execute many times):

my $q = $db->prepare('select a from b where c = $1');
for my $key (@list)
{
    $q->execute($key);
    $result = $q->fetch;
}
On 03/16/2012 05:30 PM, Kevin Grittner wrote:
Brian Hamlin wrote:
On Mar 16, 2012, at 7:17 AM, Kevin Grittner wrote:
Andy Colson wrote:
I tried shared_buffers at both 2400M and 18000M, and it took 4.5
hours both times. ... (weak attempts at humor omitted)
Ah, I didn't pick
This is the best list ever! Thanks all! (especially that poetic Dave
Fetter, and that somewhat mean, but helpful, Andy Colson)
Shout outs to my friends Garlynn, Nick and Rush (best band ever!).
Party, my house, next week!
==
(Virtually) Brian Hamlin
GeoCal
OSGeo California Chapter
415-717
I've heard that LVM limits IO, so if you want full speed you
might wanna drop LVM. (And XFS supports increasing fs size, and when are you
ever really gonna want to decrease fs size?).
-Andy
On 02/26/2012 01:11 PM, Stefan Keller wrote:
2012/2/26 Andy Colson wrote:
On 02/25/2012 06:16 PM, Stefan Keller wrote:
1. How can I warm up or re-populate shared buffers of Postgres?
2. Are there any hints on how to tell Postgres to read in all table
contents into memory?
Yours, Stefan
How
:
SELECT count(*) FROM osm_point WHERE tags @> 'tourism=>junk'
-Andy
statements
3) fastest: COPY
Again.. assuming the triggers are not the bottleneck.
Have you run an insert by hand with 'EXPLAIN ANALYZE'?
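(A sketch of that, with an invented table; wrapping it in a rolled-back transaction keeps the test row out of the data:)

begin;
explain analyze insert into mytable (id, body) values (1, 'test');
rollback;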
-Andy
Have you read up on synchronous_commit?
Only a tiny bit. A couple people suggested disabling it since my
database is being hosted on AWS
Have you watched vmstat and iostat?
Have you read up on synchronous_commit?
-Andy
I ran across this thread:
http://archives.postgresql.org/pgsql-hackers/2010-11/msg02028.php
They use it without locks.
-Andy
less than
one) but still more than I thought should be happening on a db with next
to no usage.
I found setting autovacuum_naptime = 6min made the IO all but vanish.
And if I ever get a wild hair and blow some stuff away, the db will
clean up after me.
-Andy
Oh, I knew I'd seen index usage stats someplace.
Give this a run:
select * from pg_stat_user_indexes where relname = 'SuperBigTable';
http://www.postgresql.org/docs/current/static/monitoring-stats.html
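(A related probe against the same view; indexes with idx_scan = 0 have never been used since the stats were last reset:)

select schemaname, relname, indexrelname, idx_scan
from pg_stat_user_indexes
where idx_scan = 0;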
-Andy
-Original Message-
From: Andy Colson [mailto:a...@squeakycode.net]
Sent: Tuesday, February 07, 2012 4:47 PM
To: Ofer Israeli
Cc: pgsql-performance@postgresql.org; Olga Vingurt; Netta Kabala
Subject: Re: [PERFORM] Inserts or Updates
On 2/7/2012 4:18 AM, Ofer Israeli wrote:
Hi all,
We
t, Pg should
be able to do HOT updates.
If you have lots of indexes, you should review them; you probably don't
need half of them.
And like Kevin said, try the simple one first. Won't hurt anything, and
if it works, great!
-Andy
index on text column as
well, the performance dropped to about 1/8th. My question is: how can I
improve insert performance when there is an index on a text
column?
Thanks,
Saurabh
Do it in a single transaction, and use COPY.
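(A minimal sketch of that shape; the table, columns, and file path are invented:)

begin;
copy mytable (id, body) from '/tmp/data.csv' with (format csv);
commit;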
-Andy
On 1/27/2012 10:47 AM, Heikki Linnakangas wrote:
On 27.01.2012 15:34, Jayashankar K B wrote:
Hi,
We are having an embedded system with a freescale m68k architecture
based micro-controller, 256MB RAM running a customized version of
Slackware 12 linux.
It's relatively modest hardware.
Fascina
stname, m.lastname, m.regcomplete, m.emailok '
|| ' FROM members m '
|| ' WHERE m.emailaddress LIKE ' || arg1
|| ' AND m.changedate_id < ' || arg2
|| ' ORDER BY m.emailaddress, m.websiteid ';
execute(sql);
Maybe it's the "planner doesn't plan so well with $1 arguments vs. actual
arguments" thing.
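(If the string-splicing above is only there to get real values in front of the planner, a hedged alternative sketch in PL/pgSQL: EXECUTE ... USING passes the values safely, and since dynamic statements are re-planned at each EXECUTE the planner can still see them. That planner behavior is worth verifying on your version.)

return query execute
    'select m.firstname, m.lastname, m.regcomplete, m.emailok
       from members m
      where m.emailaddress like $1
        and m.changedate_id < $2
      order by m.emailaddress, m.websiteid'
using arg1, arg2;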
-Andy
u keep having lots and lots of transactions, look into
commit_delay; it'll help batch commits out to disk (if I remember
correctly).
-Andy
n
summarize everything into a summary table, and blow away the details. You
wouldn't have to delete the details if you wanted to keep them; just keeping
the summary table updated would be enough.
-Andy
Your results are consistent with the benchmarks I've seen. Intel SSDs have much
worse write performance than SSDs that use SandForce controllers, which
the Vertex 2 Pro does.
According to this benchmark, at high queue depth the random write performance
of Sandforce is more than 5 times that o
t look like?
And 10% of 24 gig is 2.4 gig, not 1 gig.
Or is this box doing something else? I notice effective_cache_size is only 5
gig, so you must be doing other things on this box.
--> autovacuum = off
Are you vacuuming by hand!? If not, this is a "really bad idea" (tm)(c)(r)
-Andy
Do you have an Intel 320? I'd love to see tests comparing 710 to 320 and see
if it's worth the price premium.
From: David Boreham
To: PGSQL Performance
Sent: Saturday, October 1, 2011 10:39 PM
Subject: [PERFORM] Suggestions for Intel 710 SSD test
I have a 71
On Sep 21, 2011, at 8:30 AM, Shaun Thomas wrote:
> I wish they would erase that Wiki page, or at least add a disclaimer.
The "they" that you refer to includes you. It's a wiki. You can write your
own disclaimer.
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
wledge of
internals enough to drive testing myself.
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
On 9/12/2011 1:22 PM, Robert Schnabel wrote:
On 9/12/2011 12:57 PM, Shaun Thomas wrote:
On 09/12/2011 12:47 PM, Andy Colson wrote:
work_mem is not the total a query can use. I believe each step can
use that much, and each backend can use it for multiple bits. So if
you had two backends, each
On 9/12/2011 12:57 PM, Shaun Thomas wrote:
On 09/12/2011 12:47 PM, Andy Colson wrote:
work_mem is not the total a query can use. I believe each step can
use that much, and each backend can use it for multiple bits. So if
you had two backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB
I'm using: PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit
Windows 2008 Server Enterprise
Thanks,
Bob
work_mem is not the total a query can use. I believe each step can use
that much, and each backend can use it for multiple bits. So if you had
two backends, each doing 2 sorts, y
On 9/12/2011 12:15 PM, Robert Klemme wrote:
On 11.09.2011 19:02, Marti Raudsepp wrote:
On Sun, Sep 11, 2011 at 17:23, Andy Colson wrote:
On 09/11/2011 08:59 AM, Igor Chudov wrote:
By the way, does that INSERT UPDATE functionality or something like
this exist in Postgres?
You have two options
TB == 12 TB total space. That's not big
enough, is it?
-Andy
where not exists(select 1 from realTable where tmp.id
= realTable.id);
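(The head of that statement was cut off; the usual shape it belongs to is the insert-if-missing idiom, reconstructed here as an assumption:)

insert into realTable
select *
from tmp
where not exists (select 1 from realTable where tmp.id = realTable.id);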
-Andy
an summarize my personal view: mysql is fast at the expense
of safety and usability (mysql still cannot do updates with subselects).
PG is safe and usable at the expense of speed, and you won't be disappointed by
the speed.
-Andy
should
clean it up. But, if you ignore the problem for two years, and have super
really bad table bloat, well, maybe backup/restore is best.
-Andy
irst place :-) )
Good catch, thanks Scott.
-Andy
On 09/05/2011 08:57 AM, Richard Shaw wrote:
Hi Andy,
It's not a new issue, no. It's a legacy system that is in no way ideal but is
also not in a position to be overhauled. Indexes are correct, tables are up to
25 million rows.
On startup, it hits CPU more than IO, I'
cantly recently?
That's a good thought; maybe the stats are old and you have bad plans? It
could also be major updates to the data (as opposed to growth).
Gerhard, have you done an 'explain analyze' on any of your slow queries? Have
you done an analyze lately?
-Andy
0 308024 884812 4051293200 464 168 353 92 4 2 84 9
Only one line? That does not help much. Can you run it as 'vmstat 2' and let
it run while a few slow queries are performed? Then paste all the lines?
-Andy
a problem.
fsync | off
Seriously?
-Andy
There are 30 DBs in total on the server coming in at 226GB. The one that's
used the most is 67GB and there are another 29 that come to 159GB.
I'd really appreciate it if you could review my configurations below and m
On 8/31/2011 1:51 PM, Alan Hodgson wrote:
On August 31, 2011 11:26:57 AM Andy Colson wrote:
When you ran it, did it really feel like 30 seconds? Or did it come
right back real quick?
Because your report says:
> 35.833 ms
That's ms, or milliseconds, or 0.035 seconds.
I think the "
When you ran it, did it really feel like 30 seconds? Or did it come
right back real quick?
Because your report says:
> 35.833 ms
That's ms, or milliseconds, or 0.035 seconds.
-Andy
On 8/31/2011 8:04 AM, Kai Otto wrote:
Hi all,
I am running a simple query:
SELECT * FROM public.“Fr
On 8/30/2011 8:33 PM, Craig Ringer wrote:
On 31/08/2011 4:30 AM, Andy Colson wrote:
Hi all,
I have read things someplace saying not exists was better than not
in... or something like that. Not sure if that was for in/exists and
not in/not exists, and for a lot of records or not.
`EXISTS'
d.general(gid);
delete from public.general a where exists(select 1 from upd.general b
where a.gid=b.gid);
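(For comparison, the IN spelling of the same delete, equivalent here as long as gid is never null, which is an assumption:)

delete from public.general a
where a.gid in (select b.gid from upd.general b);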
Thanks for any suggestions,
-Andy
According to the specs for database storage:
"Random 4KB arites: Up to 600 IOPS"
Is that for real? 600 IOPS is *atrociously terrible* for an SSD. Not much
faster than mechanical disks.
Has anyone done any performance benchmark of 320 used as a DB storage? Is it
really that slow?
On 8/19/2011 4:03 AM, Krzysztof Chodak wrote:
Is there any performance benefit of using constant size tuples?
If you are referring to varchar(80) vs text, then no, there is no benefit.
-Andy
On 8/17/2011 1:55 PM, Ogden wrote:
On Aug 17, 2011, at 1:48 PM, Andy Colson wrote:
On 8/17/2011 1:35 PM, k...@rice.edu wrote:
On Wed, Aug 17, 2011 at 01:32:41PM -0500, Ogden wrote:
On Aug 17, 2011, at 1:31 PM, k...@rice.edu wrote:
On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:
I
and 2 cores. Everyone with more than 8 cores and 64 gig of ram is off
my Christmas list! :-) )
-Andy
Its trying to connect to unix socket "/var/run/postgresql/.s.PGSQL.5432",
but your postgresql.conf file probably has:
unix_socket_directory = '/tmp'
Change it to:
unix_socket_directory = '/var/run/postgresql'
and restart PG.
-Andy
then don't
fix it :-)
You say reporting queries are fast, and the disks should take care of
your slow write problem from before. (Did you test the write
performance?) So, what's wrong?
-Andy
On 8/3/2011 11:03 PM, Craig Ringer wrote:
great gobs of battery backed write cache DRAM.
Now I know what I'm asking Santa for Christmas this year!
-Andy
to windows.
If you have to have the absolute fastest system, Linux on bare metal is
the way to go.
(This is all speculation and personal opinion, I have no numbers to back
anything up)
-Andy
> > I'm not comparing SSD in SW RAID with rotating disks
> in HW RAID with
> > BBU though. I'm just comparing SSDs with or without
> BBU. I'm going to
> > get a couple of Intel 320s, just want to know if BBU
> makes sense for
> > them.
>
> Yes, it certainly does, even if you have a RAID BBU.
"ev
--- On Mon, 7/18/11, David Rees wrote:
> >> In this case is BBU still needed? If I put 2 SSD
> in software RAID 1, would
> >> that be any slower than 2 SSD in HW RAID 1 with
> BBU? What are the pros and
> >> cons?
>
> What will perform better will vary greatly depending on the
> exact
> SSDs,
AID 1, would that
be any slower than 2 SSD in HW RAID 1 with BBU? What are the pros and cons?
Thanks.
Andy
x!". You could say, "please post your config
settings, and the stats from 'select * from pg_stats_something'"
We (or, you really) could compare the seq_page_cost and random_page_cost
from the config to the stats collected by PG and determine they are way
off... and you
--- On Wed, 4/6/11, Scott Carey wrote:
> I could care less about the 'fast' sandforce drives.
> They fail at a high
> rate and the performance improvement is BECAUSE they are
> using a large,
> volatile write cache.
The G1 and G2 Intel MLC also use volatile write cache, just like most SandF
Is there any benchmark measuring the performance of these SSD's (the new Intel
vs. the new SandForce) running database workloads? The benchmarks I've seen so
far are for desktop applications.
Andy
--- On Mon, 3/28/11, Greg Smith wrote:
> From: Greg Smith
> Subject: [PERFORM
more than happy to benchmark it and send it back :-)
Or, more seriously, even remote ssh would do.
-Andy
On 3/18/2011 9:38 AM, Kevin Grittner wrote:
Andy Colson wrote:
On 03/17/2011 09:25 AM, Michael Andreasen wrote:
I've been looking around for information on doing a pg_restore as
fast as possible.
bgwriter_lru_maxpages = 0
I hadn't thought much about that last one -- d
s quick as it can be?
Thanks.
autovacuum = off
fsync = off
synchronous_commit = off
full_page_writes = off
bgwriter_lru_maxpages = 0
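(On top of those settings, parallel restore helps a lot on 8.4+ when the dump is in custom or directory format; the job count and names here are illustrative:)

pg_restore -j 4 -d mydb mydump.dump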
-Andy
On 3/8/2011 10:58 AM, Andreas Forø Tollefsen wrote:
Andy. Thanks. That is a great tip. I tried it but I get the error:
NOTICE: ptarray_simplify returned a <2 pts array.
Query:
SELECT ST_Intersection(priogrid_land.cell,
ST_Simplify(cshapeswdate.geom,0.1)) AS geom,
priogrid_land.gid AS divi
I have seen really complex geometries cause problems. If you have
thousands of points when 10 would do, try ST_Simplify and see if it
doesn't speed things up.
-Andy
On 3/8/2011 2:42 AM, Andreas Forø Tollefsen wrote:
Hi. Thanks for the comments. My data is right, and the result is exactly
On 3/3/2011 3:19 AM, sverhagen wrote:
Andy Colson wrote:
For your query, I think a join would be the best bet, can we see its
explain analyze?
Here are a few variations:
SELECT events_events.* FROM events_events WHERE transactionid IN (
SELECT transactionid FROM
On 03/02/2011 06:12 PM, sverhagen wrote:
Thanks for your help already!
Hope you're up for some more :-)
Andy Colson wrote:
First off, excellent detail.
Second, your explain analyze was hard to read... but since you are not
really interested in your posted query, I won't worry about lo
t in ( select id from details where some set is bad )
and id in ( select anotherid from anothertable where ... )
It's the subselects you need to think about. Find one that gets you a small set
that's interesting somehow. Once you get all your little sets, it's easy to
combine them.
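(A hedged sketch of that combining step; every table, column, and predicate below is invented:)

select *
from bigtable
where id in (select id from details where status = 'bad')        -- first small, interesting set
  and id in (select anotherid from anothertable where flagged);  -- second one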
-A
--- On Sun, 2/6/11, Linos wrote:
> I am studying too the possibility of use an OCZ Vertex 2
> Pro with Flashcache or Bcache to use it like a second level
> filesystem cache, any comments on that please?
>
OCZ Vertex 2 Pro is a lot more expensive than other SSDs of comparable
performance beca
On 02/03/2011 10:00 PM, Greg Smith wrote:
Andy Colson wrote:
CPUs won't get faster, but HDs and SSDs will. To have one database
connection, which runs one query, run fast, it's going to need multi-core
support.
My point was that situations where people need to
ead a paper someplace that said shared-cache (L1/L2/etc.) multicore
CPUs would start getting really slow at 16/32 cores, and that message passing
was the way forward past that. If PG started aiming for 128-core support right
now, it should use some kinda message-passing-with-queues thing
le cores you
need multiple database connections.
3) If your jobs are IO bound, then running multiple jobs may hurt
performance.
Your naive approach is the best. Just spawn off two jobs (or three, or
whatever). I think it's also the only method. (If there is another
method, I don't know what
to the next product
lastprodid = prodid
... etc
> Is there any better way to do it? And how reliable is this?
It makes the SQL really easy, but the code complex... so pick your poison.
-Andy
On 1/27/2011 9:09 AM, Michael Kohl wrote:
On Thu, Jan 27, 2011 at 4:06 PM, Andy Colson wrote:
Have you run each of your queries through explain analyze lately?
A code review including checking of queries is on our agenda.
You are vacuuming/autovacuuming, correct?
Sure :-)
Thank you
check some of
your sql statements and make sure they are all behaving. You may not
notice a table scan when the user count is low, but you will when it
gets higher.
Have you run each of your queries through explain analyze lately?
Have you checked for bloat?
You are vacuuming/autovacuum
st and one when it's slow?
Looks to me, in both cases, you are not using much memory at all. (If you
happen to have 'free', its output is a little more readable; if you
wouldn't mind, post that too. It's only really needed for when the box is slow.)
-Andy
Is the stress package running niced?
-Andy
fashionable non-SQL
databases, but it's pretty well known in wider circles.
--
Craig Ringer
Or... PG is just so good we've never had to use more than one database
server! :-)
-Andy
oops, call them database 'a' and database 'b'.
ge pattern is (70% read,
small columns, no big blobs (like photos), etc)... and even then we'd
still have to guess.
I can tell you, however, having your readers and writers not block each
other is really nice.
Not only will I not compare apples to oranges, but I really won't compare
app
-Original Message-
From: Andy Colson [mailto:a...@squeakycode.net]
Sent: Monday, 17 January 2011 5:22 p.m.
To: Jeremy Palmer
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Possible to improve query plan?
First, wow, those are long names... I had a hard time keeping track
et an "explain analyze"? It give's more info.
(Also, have you seen http://explain.depesz.com/)
Last: If you wanted to force the index usage, for a test, you could drop the
other indexes. I assume this is on a test box so it should be ok. If its
live, you could w
On 1/13/2011 4:49 PM, Robert Haas wrote:
On Thu, Jan 13, 2011 at 5:47 PM, Andy Colson wrote:
I don't believe there is any case where hashing each individual relation
is a win compared to hashing them all together. If the optimizer were
smart enough to be considering the situation as a
up? *scratches head*
Because it all fit in memory and didn't swap to disk?
-Andy
--- On Thu, 12/23/10, John W Strange wrote:
> Typically my problem is that the
> large queries are simply CPU bound.. do you have a
> sar/top output that you see. I'm currently setting up two
> FusionIO DUO @640GB in a lvm stripe to do some testing with,
> I will publish the results after I'm d
code, one for each database.
In the end, can PG be fast? Yes. Very. But only when you treat it as
PG. If you try to use PG as if it were mssql, you won't be a happy camper.
-Andy
pace for the imagery. The
imagery code uses more CPU than PG does. The database is 98% read,
though, so my setup is different than yours.
My maps get 100K hits a day. The CPUs never run more than 20%. I'm
running on a $350 computer, AMD dual core, with 4 IDE disks in softwar
On 12/14/2010 9:41 AM, Jim Nasby wrote:
On Dec 14, 2010, at 9:27 AM, Andy Colson wrote:
Is this the same thing Nick is working on? How'd he get along?
http://archives.postgresql.org/message-id/4cd1853f.2010...@noaa.gov
So it is. The one I replied to stood out because no one had repli
How'd he get along?
http://archives.postgresql.org/message-id/4cd1853f.2010...@noaa.gov
-Andy
> The "common knowledge" you based that comment on, may
> actually not be very up-to-date anymore. Current
> consumer-grade SSD's can achieve up to 200MB/sec when
> writing sequentially and they can probably do that a lot
> more consistent than a hard disk.
>
> Have a look here: http://www.anandt
> We use ZFS and use SSDs for both the log device and
> L2ARC. All disks
> and SSDs are behind a 3ware with BBU in single disk
> mode.
Out of curiosity, why do you put your log on SSD? Log is all sequential IO, an
area in which SSD is not any faster than HDD. So I'd think putting log on SSD
> > If you are IO-bound, you might want to consider using
> SSD.
> >
> > A single SSD could easily give you more IOPS than 16
> 15k SAS in RAID 10.
>
> Are there any that don't risk your data on power loss, AND
> are cheaper
> than SAS RAID 10?
>
Vertex 2 Pro has a built-in supercapacitor to s
If you are IO-bound, you might want to consider using SSD.
A single SSD could easily give you more IOPS than 16 15k SAS in RAID 10.
--- On Wed, 12/8/10, Benjamin Krajmalnik wrote:
> From: Benjamin Krajmalnik
> Subject: [PERFORM] Hardware recommendations
> To: pgsql-performance@postgresql.org
On 12/7/2010 2:10 PM, Kenneth Marshall wrote:
On Tue, Dec 07, 2010 at 11:56:51AM -0800, Richard Broersma wrote:
On Tue, Dec 7, 2010 at 11:43 AM, Andy Colson wrote:
In PG the first statement you fire off (like an "insert into" for example)
will start a transaction. If you don't com
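(A minimal illustration of the point; the table is invented:)

-- autocommit: each statement is its own transaction and pays its own commit
insert into t values (1);
insert into t values (2);

-- explicit transaction: one commit for the whole batch
begin;
insert into t values (1);
insert into t values (2);
commit;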