On Nov 9, 2007 3:08 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> This rule works for all the locales I have installed ... but I don't
> have any Far Eastern locales installed. Also, my test cases are only
> covering ASCII characters, and I believe many locales have some non-ASCII
> letters that sort
"Chris Hoover" <[EMAIL PROTECTED]> writes:
> If there have been no scans on an index (according to
> pg_stat_user_indexes), why is pg_statio_user_indexes showing non-zero
> values in idx_blks_hit/read?
I grow weary, but I think that "scan" is only incremented by commencing
a SELECT search using the
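For reference, the two sets of counters can be viewed side by side by joining
the views on the index; a minimal sketch using only the standard columns of
those system views (the ordering is just an example):

  -- per-index scan counts next to block-level read/hit counters
  SELECT s.relname, s.indexrelname, s.idx_scan,
         io.idx_blks_read, io.idx_blks_hit
    FROM pg_stat_user_indexes s
    JOIN pg_statio_user_indexes io USING (indexrelid)
   ORDER BY s.idx_scan;

Indexes with idx_scan = 0 but non-zero idx_blks_read/idx_blks_hit are exactly
the case being asked about.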
I have a database where I dropped all indexes on a table last night
and built a new set of indexes. The goal is to let the database get by
with fewer indexes and use them more heavily. I removed a bunch of
indexes that had survived from our 7.3 days, whose functionality will
now be covered by 8.1's use
On Wednesday 31 October 2007 12:45, Ketema wrote:
> I am trying to build a very robust DB server that will support 1000+
> concurrent users (we have already seen a max of 237, with no pooling being
> used). I have read so many articles now that I am just saturated. I
> have a general idea but would like f
Larry,
Considering these recommendations, let's try setting shared_buffers to 2GB
and work_mem to 16MB. The thing is that work_mem is allocated per connection
(and per sort or hash operation), so if we get too aggressive and have a lot
of simultaneous users, we can potentially eat up a lot of memory.
So, assuming roughly 100 connections: 2GB + (100 * 16MB) ≈ 3.6GB
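For what it's worth, that arithmetic can be sanity-checked from psql; a minimal
sketch, where the figure of 100 connections is an assumption rather than a
measured number:

  -- shared_buffers (2GB) plus 100 connections each using their full 16MB work_mem
  SELECT pg_size_pretty(2::bigint*1024*1024*1024 + 100*16*1024*1024::bigint);

which comes out to about 3648 MB, i.e. the ~3.6GB above.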
I wrote:
> I did do some experimentation and found that among the ASCII characters
> (ie, codes 32-126), nearly all the non-C locales on my Fedora machine
> sort Z last and z next-to-last or vice versa. Most of the remainder
> sort digits last and z or Z as the last non-digit character. Since Z i
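For anyone who wants to repeat that experiment, a minimal sketch that lists the
printable ASCII range in whatever order the database's lc_collate puts it
(generate_series and chr are standard; 32-126 is the range mentioned above):

  SELECT chr(i) AS c
    FROM generate_series(32, 126) AS g(i)
   ORDER BY c;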
Ooops, sorry about the multiple copies there --- not sure what happened.
regards, tom lane
Gregory Stark <[EMAIL PROTECTED]> writes:
> Doesn't really strike at the core reason that this is so klugy though. Surely
> the "right" thing is to push the concept of open versus closed end-points
> through deeper into the estimation logic?
No, the right thing is to take the folk who defined "dic
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> On Thu, Nov 08, 2007 at 04:47:09PM -0600, Pepe Barbe wrote:
>> I am having an issue on PostgreSQL 8.0.12. In the past we had performance
>> issues with the query planner for queries on some tables where we knew we
>> had indexes and it was doin
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> On Thu, Nov 08, 2007 at 04:47:09PM -0600, Pepe Barbe wrote:
>> I am having an issue on PostgreSQL 8.0.12. In the past we had performance
>> issues with the query planner for queries on some tables where we knew we
>> had indexes and it was doin
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> On Thu, Nov 08, 2007 at 04:47:09PM -0600, Pepe Barbe wrote:
>> I am having an issue on PostgreSQL 8.0.12. In the past we had performance
>> issues with the query planner for queries on some tables where we knew we
>> had indexes and it was doin
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> On Thu, Nov 08, 2007 at 04:47:09PM -0600, Pepe Barbe wrote:
>> I am having an issue on PostgreSQL 8.0.12. In the past we had performance
>> issues with the query planner for queries on some tables where we knew we
>> had indexes and it was doin
On Nov 8, 2007 2:56 PM, Alan Hodgson <[EMAIL PROTECTED]> wrote:
> On Thursday 08 November 2007, Dimitri Fontaine <[EMAIL PROTECTED]> wrote:
> > Is raid6 better than raid10 in terms of overall performance, or a better
> > choice when you need capacity more than throughput?
>
> You can't touch RAID 10 for perf
"Tom Lane" <[EMAIL PROTECTED]> writes:
> What I am tempted to do about this is have make_greater_string tack "zz"
> onto the supplied prefix, so that it would have to find a string that
> compares greater than "123/zz" before reporting success. This is
> getting pretty klugy though, so cc'ing to
On Thu, Nov 08, 2007 at 04:47:09PM -0600, Pepe Barbe wrote:
> I am having an issue on PostgreSQL 8.0.12. In the past we had performance
> issues with the query planner for queries on some tables where we knew we
> had indexes and it was doing a sequential scan, and for this reason we
> issue "SE
Hello,
I am having an issue on PostgreSQL 8.0.12. In the past we had
performance issues with the query planner for queries on some tables
where we knew we had indexes and it was doing a sequential scan, and
for this reason we issue "SET enable_seqscan = FALSE" for some queries.
Recently w
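The per-query workaround described above can be kept from leaking into the rest
of the session by scoping it to a transaction; a minimal sketch, with the actual
query left as a placeholder:

  BEGIN;
  SET LOCAL enable_seqscan = off;
  -- ... run the query that should prefer the index here ...
  COMMIT;

SET LOCAL reverts automatically at commit or rollback, so the planner default is
restored for everything else.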
On Nov 8, 2007, at 1:22 PM, Scott Marlowe wrote:
> I've heard the newest adaptecs, even the perc implementations aren't
> bad.
I have a pair of Adaptec 2230SLP cards. Worst. Just replaced them on
Tuesday with fibre channel cards connected to external RAID
enclosures. Much nicer.
-
On Thursday 08 November 2007, Dimitri Fontaine <[EMAIL PROTECTED]> wrote:
> Is raid6 better than raid10 in terms of overall performance, or a better
> choice when you need capacity more than throughput?
You can't touch RAID 10 for performance or reliability. The only reason to
use RAID 5 or RAID 6 is to g
>>> On Thu, Nov 8, 2007 at 2:14 PM, in message
<[EMAIL PROTECTED]>, Dimitri Fontaine
<[EMAIL PROTECTED]> wrote:
> The Dell 2900 5U machine has 10 spindles max, that would make 2 for the OS
> (raid1) and 8 for mixing WAL and data... not enough to benefit from the
> move, or still worth testing?
On Thursday 08 November 2007 19:22:48, Scott Marlowe wrote:
> On Nov 8, 2007 10:43 AM, Vivek Khera <[EMAIL PROTECTED]> wrote:
> > On Nov 6, 2007, at 1:10 PM, Greg Smith wrote:
> > > elsewhere. But once you have enough disks in an array to spread all
> > > the load over that itself may i
On Nov 8, 2007 10:43 AM, Vivek Khera <[EMAIL PROTECTED]> wrote:
>
> On Nov 6, 2007, at 1:10 PM, Greg Smith wrote:
>
> > elsewhere. But once you have enough disks in an array to spread all
> > the load over that itself may improve write throughput enough to
> > still be a net improvement.
>
> This
"Guillaume Smet" <[EMAIL PROTECTED]> writes:
> On Nov 8, 2007 12:14 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> I've applied a patch that might help you:
>> http://archives.postgresql.org/pgsql-committers/2007-11/msg00104.php
> AFAICS, it doesn't seem to fix the problem. I just compiled
> REL8_1_ST
On Nov 6, 2007, at 1:10 PM, Greg Smith wrote:
> elsewhere. But once you have enough disks in an array to spread all
> the load over that itself may improve write throughput enough to
> still be a net improvement.
This has been my experience with 14+ disks in an array (both RAID10 and
RAID5).
[EMAIL PROTECTED] (Rafael Martinez) writes:
> Heikki Linnakangas wrote:
>> On a small table like that you could run VACUUM every few minutes
>> without much impact on performance. That should keep the table size in
>> check.
>>
>
> Ok, we run VACUUM ANALYZE only one time a day, every night. But we
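Heikki's "every few minutes" suggestion comes down to something like the
following; the table name is a placeholder, and the ALTER TABLE form only exists
on releases newer than the 8.x versions discussed in this thread:

  -- run from cron (or any scheduler) every few minutes against the hot table
  VACUUM ANALYZE hot_table;

  -- on 8.4 and later, per-table autovacuum settings give the same effect
  ALTER TABLE hot_table SET (autovacuum_vacuum_scale_factor = 0.0,
                             autovacuum_vacuum_threshold = 50);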
On Nov 6, 2007, at 5:12 AM, Tore Halset wrote:
> Here are our current alternatives:
Two things I recommend. If the drives are made by western digital,
run away.
If the PERC5/i is an Adaptec card, run away.
Max out your cache RAM on the RAID card. 256MB is the minimum when you
have such b
On Nov 5, 2007, at 8:19 AM, Claus Guttesen wrote:
> I will get four 72 GB SAS disks at 15K rpm. Reading the archives
> suggests raid 1+0 for optimal read/write performance, but with a solid
> raid-controller raid 5 will also perform very well when reading.
If you only have 4 drives, I'd recommend no
Rafael Martinez <[EMAIL PROTECTED]> writes:
> Heikki Linnakangas wrote:
>> On a small table like that you could run VACUUM every few minutes
>> without much impact on performance. That should keep the table size in
>> check.
> Ok, we run VACUUM ANALYZE only one time a day, every night.
There's yo
On Nov 8, 2007 4:01 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Hmm, can we see the pg_stats row for the ancestors column?
Sure:
public | cms_items | ancestors | 0 | 32 | -1 | | | {10011/10010/10009/10018/2554055/,10011/10010/84022/23372040/,100
Heikki Linnakangas wrote:
>
> If the table is already bloated, a VACUUM won't usually shrink it. It
> only makes the space available for reuse, but a sequential scan still
> needs to go through a lot of pages.
>
> CLUSTER on the other hand repacks the tuples and gets rid of all the
> unused space
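A minimal sketch of the two commands being contrasted; the table and index
names are placeholders, and the CLUSTER ... USING spelling is the newer one
(older releases write CLUSTER indexname ON tablename):

  -- makes dead space reusable, but usually does not shrink the file
  VACUUM my_table;

  -- rewrites the table in index order and gives the freed space back to the OS,
  -- holding an exclusive lock for the duration
  CLUSTER my_table USING my_table_pkey;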
In response to Rafael Martinez <[EMAIL PROTECTED]>:
> Heikki Linnakangas wrote:
> > Rafael Martinez wrote:
>
> >> The tables with this 'problem' are not big, so CLUSTER finishes very fast
> >> and does not impact access because of the locking. But we
> >> wonder why this happens.
> >
Rafael Martinez wrote:
> Heikki Linnakangas wrote:
>> On a small table like that you could run VACUUM every few minutes
>> without much impact on performance. That should keep the table size in
>> check.
> Ok, we run VACUUM ANALYZE only one time a day, every night. But we would
> expect the performance to ge
Alvaro Herrera wrote:
> Rafael Martinez wrote:
>> The 'problem' is that performance decreases during the day and the only
>> thing that helps is to run CLUSTER on the table with problems. VACUUM
>> ANALYZE does not help.
>
> Probably because all the live tuples are clustered at the end of the
> ta
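To check that theory, the on-disk size and the live/dead breakdown can be
inspected directly; a minimal sketch, where the table name is a placeholder and
pgstattuple is a contrib module that has to be installed separately:

  -- physical size of the table on disk
  SELECT pg_size_pretty(pg_relation_size('hot_table'));

  -- live vs. dead space breakdown (requires contrib/pgstattuple)
  SELECT * FROM pgstattuple('hot_table');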
"Guillaume Smet" <[EMAIL PROTECTED]> writes:
> On Nov 8, 2007 12:14 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> I've applied a patch that might help you:
>> http://archives.postgresql.org/pgsql-committers/2007-11/msg00104.php
> AFAICS, it doesn't seem to fix the problem.
Hmm, can we see the pg_sta
Rafael Martinez wrote:
> Hello
>
> This is a question about something we have seen occasionally over the last
> few months. It happens with tables with a large number of updates/selects
> compared with the number of inserts/deletes. These tables are small, both
> in size and in number of rows.
>
> The
Heikki Linnakangas wrote:
> Rafael Martinez wrote:
>> The tables with this 'problem' are not big, so CLUSTER finishes very fast
>> and does not impact access because of the locking. But we
>> wonder why this happens.
>
> 2 seconds for seq scanning 12 MB worth of data sounds like a lo
Performance problems with heavily modified tables (UPDATE or DELETE) are
usually caused by not vacuuming. There are two main modes VACUUM can
run in (plain or full), and the latter works in a much more aggressive
way (exclusive locking, etc). Try to run VACUUM FULL VERBOSE on the
table and s
Rafael Martinez wrote:
> This is a question about something we have seen occasionally over the last
> few months. It happens with tables with a large number of updates/selects
> compared with the number of inserts/deletes. These tables are small, both
> in size and in number of rows.
> The 'problem' is that pe
Hello
This is a question about something we have seen occasionally over the last
few months. It happens with tables with a large number of updates/selects
compared with the number of inserts/deletes. These tables are small, both
in size and in number of rows.
The 'problem' is that performance decreases
Tom,
On Nov 8, 2007 12:14 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> I've applied a patch that might help you:
> http://archives.postgresql.org/pgsql-committers/2007-11/msg00104.php
AFAICS, it doesn't seem to fix the problem. I just compiled the
REL8_1_STABLE branch and I still have the following behav