On Feb 12, 2015 9:17 PM, "Tom Lane" wrote:
> The planner might produce such a result if there's an opportunity
> to perform the sorting via an index on "alpha" (ie, the ORDER BY
> matches some index). If it has to do an explicit sort it's gonna
> do the join first.
>
> (If you have such an index
On Wed, Feb 5, 2014 at 9:52 AM, Jeff Janes wrote:
> On Monday, February 3, 2014, Pweaver (Paul Weaver)
> wrote:
>
>> We have been running into a (live lock?) issue on our production Postgres
>> instance causing queries referencing a particular table to become extremely
>> slow and our application to lock up.
On Tue, Feb 4, 2014 at 9:03 PM, Peter Geoghegan wrote:
> On Mon, Feb 3, 2014 at 1:35 PM, Pweaver (Paul Weaver)
> wrote:
> > We have been running into a (live lock?) issue on our production Postgres
> > instance causing queries referencing a particular table to become
> > extremely slow and our application to lock up.
We have been running into a (live lock?) issue on our production Postgres
instance causing queries referencing a particular table to become extremely
slow and our application to lock up.
This tends to occur on a particular table that gets a lot of queries
against it after a large number of deletes
val, then
using a view that includes a calculation using CURRENT_DATE().
Regards,
Paul Bort
Systems Engineer
TMW Systems, Inc.
pb...@tmwsystems.com
216 831 6606 x2233
216 8313606 (fax)
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
This is a good one :)
Here is a "brief" description of our issue (Postgres 9.0):
Tables:
main fact table:
Table "public.parent_fact"
 Column | Type | Modifiers
--------+------+-----------
 etime  | date | not null
8.3 on 64bit OS with 64gig of memory but
with Postgres still tuned for the 8 gigs the servers originally had
and under a VM).
Paul
r the other, such as
allocating shared_buffers to a much larger percentage (such as 90-95%
of expected 'free' memory).
Paul
r the other, such as
allocating shared_buffers to a much larger percentage (such as 90-95%
of expected 'free' memory).
Paul
(Apologies if two copies of this email arrive, I sent the first from
an email address that wasn't directly subscribed to the list so it was
blocked).
On 26-Sep-2009, at 10:16 PM, Claus Guttesen wrote:
I have a big performance problem in my SQL select query:
select * from event where user_id in
(500,499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474
I played with work_mem and setting work_mem more than 256000 do not change
the performance.
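One rewrite that often helps with very long IN lists is to turn the list into a relation the planner can join against. A sketch only: the `event` table and `user_id` column come from the query above, the VALUES list is abbreviated, and `VALUES` in FROM needs PostgreSQL 8.2+ (on older versions a temporary table of ids works the same way):

```sql
-- Join against the id list instead of testing each row against every constant;
-- the planner can then hash or merge rather than re-scan the IN list per row.
SELECT e.*
FROM event AS e
JOIN (VALUES (500), (499), (498)) AS ids(user_id)   -- extend as needed
  ON e.user_id = ids.user_id;
```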
I tried to upgrade to 8.3 using etch-backports, but it's a new install, not an
upgrade.
So I have to create users and permissions and import the data again; that
scared me, so I want to find another solution first.
But now
Thanks,
Unfortunately, I can't update pgsql to 8.3 since it's not in debian stable.
So I'm going to play with work_mem & shared_buffers.
With big shared_buffers pgsql tells me
shmget(key=5432001, size=11183431680, 03600).
so I do "echo 13183431680 > /proc/sys/kernel/shmmax" (10GB + 2GB just
in
Hello
I'm running pgsql 8.1.11 (from debian stable) on a server with 16GB RAM
(Linux helios 2.6.18-6-amd64 #1 SMP Tue Aug 19 04:30:56 UTC 2008 x86_64
GNU/Linux).
I have a table "tickets" with 1 000 000 inserts per month (~2600 every 2 hours)
(for the moment 1300 rows for 5GB)
and I have to extr
On 2008-08-28, at 21:31, Emi Lu wrote:
Good morning,
Tried to compare Table1 based on Table2
. update table1.col = false
if table1.pk_cols not in table2.pk_cols
For the following two ways, (2) always performs better than (1) right,
and I need your inputs.
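For reference, the two usual formulations look like this. A sketch with assumed table and column names; `NOT EXISTS` is generally the safer choice because `NOT IN` behaves surprisingly when the subquery can return NULLs:

```sql
-- (1) NOT IN: simple, but a NULL in table2.pk_col makes it match nothing
UPDATE table1 SET col = false
WHERE pk_col NOT IN (SELECT pk_col FROM table2);

-- (2) NOT EXISTS: NULL-safe and usually planned as an anti-join
UPDATE table1 t1 SET col = false
WHERE NOT EXISTS (SELECT 1 FROM table2 t2 WHERE t2.pk_col = t1.pk_col);
```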
gs, or you can keep a table of
active partitions that your script which drops off old partitions and
generates new ones can keep updated on the oldest/newest partition
dates. Or some number of other solutions, whatever you find cleanest for
your purposes.
Paul
econd that. PITR is IMHO the way to go, and I believe you'll be
pleasantly surprised how easy it is to do. As always, test your backup
strategy by restoring. Even better, make a point of periodically
testing a restore of production backups to a non-production system.
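The server side of a PITR setup is small. A minimal sketch: the paths are placeholders, and `archive_mode` only exists from 8.3 onward (before that, setting `archive_command` alone enables archiving):

```
# postgresql.conf
archive_mode = on                               # 8.3+
archive_command = 'cp %p /mnt/backup/wal/%f'    # must not overwrite existing files
```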
Paul
Mark Mielke wrote:
Josh Berkus wrote:
Count() on Oracle and MySQL is almost instantaneous, even for very
large tables. So why can't Postgres do what they do?
AFAIK the above claim is false for Oracle. They have the same
transactional issues we do.
Nope. Oracle's MVCC is implemen
therwise silly) for a bitmap fast index scan/bitmap
conversion for similar dramatic results.
For "large" tables, Oracle is not going to be as fast as MyISAM tables
in MySQL, even with these optimizations, since MyISAM doesn't have to
scan even index pages to get a count(*) answer again
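When an exact, transactionally correct count isn't required, the planner's own statistics give a cheap approximation. A sketch; the table name is a placeholder, and the figure is only as fresh as the last VACUUM/ANALYZE:

```sql
-- Cheap approximate row count from the planner's statistics,
-- not a transactional count(*)
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'mytable';
```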
Gregory Stark wrote:
"Richard Huxton" <[EMAIL PROTECTED]> writes:
Paul Lambert wrote:
" -> Sort (cost=30197.98..30714.85 rows=206748 width=16) (actual >>
time=5949.691..7018.931 rows=206748 loops=1)"
"Sort Key: dealer_id, year_id, subled
Paul Lambert wrote:
This part of the query alone takes a significant part of the time:
SELECT DISTINCT ON (finbalance.dealer_id, finbalance.year_id,
finbalance.subledger_id, finbalance.account_id)
finbalance.year_id AS year
'0'::text)))::integer))"
" -> Index Scan using pk_fincompany_dealer_company on
fincompany (cost=0.01..8.28 rows=1 width=61) (actual time=0.007..0.009
rows=1 loops=17227)"
"Index Cond: (((fincompany.dealer
>> Never take advice from a guy who top posts... A friend of mine just went
>> through an OTG course and had good things to say, and I've heard other
>> speak well of it too, so I'd probably recommend them, but there are
>>
minute when they find a cheaper system.
--
Paul Lambert
Database Administrator
AutoLedgers
---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
Thank you for your answer.
Now i ve to find how to reduce the size of the table.
Paul.
On Tuesday, August 28, 2007 at 12:55 -0400, Tom Lane wrote:
> Paul <[EMAIL PROTECTED]> writes:
> > Why, in the first case, does pgsql use the "better" index, and if I search
> > r_service
scan" first?
Are there too many rows in this table?
Am I doing something wrong?
PS: sorry for my English, I'm French.
--
Paul.
"Bitmap Heap scan" first?
Are there too many rows in this table?
PS: sorry for my English, I'm French.
--
Paul.
Alvaro,
thanks for the quick reply. Just to make sure: I do not set this
command. This results in the database cycling through a finite
(hopefully small) set of WAL files. So old WAL files are reused once
the engine thinks this can be done.
Thanks
Paul
On 23-jul-2007, at 19:34
hint. And yes I know I put my database in danger
etc. This is for some benchmarks where I do not want the overhead of
archiving. Just a file system that will not fill with zillions of
these 16MB WAL files ;^)
Thanks
Paul.
At 16:26 04/05/2007, you wrote:
Paul Smith wrote:
Why doesn't it use the other index? If use 'set enable_seqscan=0'
then it does.
Just a guess, but is the table clustered on column a? Maybe not
explicitly, but was it loaded from data that was sorted by a?
I wouldn't ha
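That guess can be checked from the statistics rather than guessed at. A sketch (table and column names assumed, needs a prior ANALYZE):

```sql
-- correlation near +1/-1 means the heap is physically ordered by the column,
-- which is what an implicit "clustered on a" load would produce
SELECT attname, correlation
FROM pg_stats
WHERE tablename = 'mytable'
  AND attname   = 'a';
```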
I can't just set an arbitrarily big
limit to use the index.
Any ideas? To me it looks like a bug in the planner. I can't think of
any logical reason not to use an existing index to retrieve a sorted
listing of the data.
PaulVPOP3
You'll find that PostGIS does a pretty good job of selectivity
estimation.
P
On 13-Feb-07, at 9:09 AM, Guillaume Smet wrote:
Hi all,
Following the work on Mark Stosberg on this list (thanks Mark!), I
optimized our slow proximity queries by using cube, earthdistance
(shipped with contrib) an
Does a unique key violation leave dead rows in the table? If so, why?
I really appreciate any information you guys can give me. I'm convinced
that PG is the best database for our needs, but I need to be able to get
this database performing well enough to convince the bigwigs.
Regards,
Paul Lathrop
System
Take a look at: http://www.icp-vortex.com/english/index_e.htm
They have always made good RAID controllers.
Cheers
Paul
On Thu, 2006-07-06 at 11:10 -0700, Kenji Morishige wrote:
> Thanks for the suggestion Mark, though the server chassis I am trying to
> utilize already has 4 10,000 RP
I think that having an API for backup functionality would definitely be useful.
Just my 2 cents...
Paul
On 6/8/06, Christopher Kings-Lynne <[EMAIL PROTECTED]> wrote:
> Personally I think it would be neat. For example the admin-tool guys
> would be able to get a dump without
imitated: move pg_xlog before anything else.
Now about "client side", I recommend you install and use pgpool, see:
http://pgpool.projects.postgresql.org/ . Because "pgpool caches the
connection to PostgreSQL server to reduce the overhead to establish the
connection to it". Always g
5000
Thread 3 : gets offset 1 limit 5000
And so on...
Would there be any other faster way than what I thought?
-Original Message-
From: PFC [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 11, 2006 7:06 AM
To: Christian Paul Cosinas; pgsql-performance@postgresql.org
Subject: Re
Hi!
How can I speed up my server's performance when I use OFFSET and LIMIT
clauses?
For example I have a query:
SELECT * FROM table ORDER BY id, name OFFSET 10 LIMIT 1
This query takes a long time, more than 2 minutes.
If my query is:
SELECT * FROM table ORDER BY id, name OFFSET 500
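Large OFFSETs are slow because the server must generate and then discard every skipped row. A keyset ("seek") sketch that avoids this, assuming the table is renamed `my_table` and the boundary values are remembered from the previous page (row-value comparison has the index-friendly lexicographic semantics from 8.2 onward):

```sql
-- Continue after the last row of the previous page instead of skipping rows.
-- The row-value comparison matches the ORDER BY (id, name) used above.
SELECT *
FROM my_table
WHERE (id, name) > (12345, 'Smith')   -- last (id, name) already shown
ORDER BY id, name
LIMIT 50;
```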
ed. Could the problem be that an index on a function result is not cached, or
less well cached?
Thanks, Paul
busy, but it has more or less as many writes as
reads.
I have not seen more than 10-15 simultaneous queries.
Any idea why an idle postmaster consumes 3-5% CPU?
This is a FreeBSD 5.4-RELEASE server with 2x3G Xeon CPUs, 2G memory,
RAID1 mirrored U320 drives.
Thanx
Paul
at least 4 bytes to be stored ?
Thanks, Paul
Sent: Tuesday, November 08, 2005 2:11 AM
To: Christian Paul B. Cosinas
Cc: 'Alvaro Nunes Melo'; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Temporary Table
Christian Paul B. Cosinas wrote:
> I try to run this command in my linux server.
> VACUUM FULL pg_class;
>
I try to run this command in my linux server.
VACUUM FULL pg_class;
VACUUM FULL pg_attribute;
VACUUM FULL pg_depend;
But it give me the following error:
-bash: VACUUM: command not found
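The error comes from typing SQL at the shell prompt: VACUUM is an SQL command, so it has to be run through psql rather than bash. A sketch (the database name is a placeholder):

```sql
-- Run these inside psql (e.g. start it with: psql -d mydb), not in bash:
VACUUM FULL pg_class;
VACUUM FULL pg_attribute;
VACUUM FULL pg_depend;
```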
I choose Polesoft Lockspam to fight spam, and you?
http://www.polesoft.com/refer.html
---
In what directory in my linux server will I find these 3 tables?
-Original Message-
From: Alvaro Nunes Melo [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 26, 2005 10:49 AM
To: Christian Paul B. Cosinas
Subject: Re: [PERFORM] Temporary Table
Christian Paul B. Cosinas wrote:
>I
Does creating temporary
tables in a function and NOT dropping them affect the performance of the
database?
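Leftover temporary tables do cost something: each one adds rows to pg_class and pg_attribute, so repeated creation without cleanup bloats the system catalogs. One sketch of letting the server clean up instead (names assumed):

```sql
-- Dropped automatically at transaction end, so a function that runs once
-- per transaction never accumulates leftovers in the catalogs.
CREATE TEMP TABLE scratch (id integer) ON COMMIT DROP;
```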
with vacuum. We only have a
full server vacuum once a day.
-Original Message-
From: Mark Kirkwood [mailto:[EMAIL PROTECTED]
Sent: Monday, October 24, 2005 3:14 AM
To: Christian Paul B. Cosinas
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Used Memory
>
>
> I just n
It affects my application since the
database server starts to slow down, and hence functions return very slowly.
Any more ideas about this, everyone?
Please….
From:
[EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alex Turner
Sent: Friday, October 21, 2005
3:42 P
Hi! Here are the specifications of my server.
I would really appreciate the best configuration of postgresql.conf for my
server.
I have tried so many values in the parameters but it seems that I cannot get
the speed I want.
OS: Redhat Linux
CPU: Dual Xeon
Memory: 6 gigabyte
PostgreSQL Version 8.0
?
-Original Message-
From: Alvaro Nunes Melo [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 25, 2005 10:58 AM
To: Christian Paul B. Cosinas
Subject: Re: [PERFORM] Temporary Table
Hi Christian,
Christian Paul B. Cosinas wrote:
> Does Creating Temporary table in a function and NOT dropp
Does creating temporary tables
in a function and NOT dropping them affect the performance of the database?
Here are the configuration of our database server:
port = 5432
max_connections = 300
superuser_reserved_connections = 10
authentication_timeout = 60
shared_buffers = 48000
sort_mem = 32168
sync = false
Do you think this is enough? Or
Paul B. Cosinas
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Used Memory
Christian Paul B. Cosinas wrote:
> Hi To all those who replied. Thank You.
>
> I monitor my database server a while ago and found out that memory is
> used extensively when I am fetching records from
Hi To all those who replied. Thank You.
I monitored my database server a while ago and found out that memory is used
extensively when I am fetching records from the database. I use the command
"fetch all" in my VB code and put it in a recordset. Also with this command the
CPU utilization is used extens
             total       used       free     shared    buffers     cached
Mem:       6192460    6137424      55036          0      85952    5828844
-/+ buffers/cache:
Also, does creating temporary tables in a function and not dropping them
affect the performance of the database?
-Original Message-
From: Jens-Wolfhard Schicke [mailto:[EMAIL PROTECTED]
Sent: Friday, October 21, 2005 7:23 AM
To: Christian Paul B. Cosinas; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Used Memory
--On Freitag, Oktober 21, 2005 03:40:47 + "Christian Paul B. Cosinas"
<[EMAIL PROTECTED]> wrote:
> I am confused about the memory handling of PostgreSQL.
> I re
Hi!
I am confused about the memory
handling of PostgreSQL.
Here is the Scenario.
I rebooted my server, which runs PostgreSQL
8.0 on Redhat 9 and is a dual Xeon server with 6 gig of memory.
Of course there is not much memory
used yet since it was just restarted.
B
Hi!
I'm experiencing a very slow deletion of records, which I think is not right.
I have a Dual Xeon Server with 6gig Memory.
I am only deleting about 22,000 records but it took me more than 1 hour to
finish.
What could I possibly do to make this fast?
Here is the code inside my f
ndle disk?
In cases such as this, where an external storage array with a hardware
RAID controller is used, the normal advice to separate the data from the
pg_xlog seems to come unstuck, or are we missing something?
Cheers,
Paul Johnson.
gain.
Has anyone experienced real performance gains by moving the pg_xlog files?
Thanks in anticipation,
Paul.
effective_cache_size
----------------------
 1000
(1 row)
I have used the manual pages on postgresql, postmaster, and so on, but
I can't find anywhere to specify which config file Pg is to use. I'm
not entirely sure if it uses the one I'm editing
(/usr/local/etc/postgresql.conf).
Any hints,
a multi-key index on them both.
Paul
Kjell Tore Fossbakk wrote:
Hello!
I use FreeBSD 4.11 with PostgreSQL 7.3.8.
I have a huge database with roughly 19 million records. There is just one
table, with a time field, a few ints and a few strings.
table test
fields time (timestamp), source (string),
SHMMAX to 134217728 (ie 128 Meg)
What should SHMALL be?
The current system values are
[EMAIL PROTECTED]:~/data$ cat /proc/sys/kernel/shmmax
33554432
[EMAIL PROTECTED]:~/data$ cat /proc/sys/kernel/shmall
2097152
ie SHMALL seems to be 1/16 of SHMMAX
Paul
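The apparent 1/16 ratio is a units artifact: SHMMAX is measured in bytes while Linux counts SHMALL in pages (commonly 4096 bytes), so the two numbers aren't directly comparable; the defaults just happen to sit at that ratio. A sketch of sizing SHMALL so that a single segment of SHMMAX bytes fits:

```shell
SHMMAX=134217728                    # 128 MB, in bytes
PAGE_SIZE=4096                      # verify with: getconf PAGE_SIZE
SHMALL=$((SHMMAX / PAGE_SIZE))      # SHMALL is in pages
echo "$SHMALL"                      # prints 32768
```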
[1] http://www.po
00 setup that
we both run.
Many thanks,
Paul.
> Hi, Paul
>
> Josh helped my company with this issue -- PG doesn't use shared memory
> like Oracle; it depends more on the OS buffers. Making shared mem
> too large a fraction is disastrous and seriously impacts performance.
>
memory available via /etc/system, and having read all we could find
on various web sites.
Should I knock it down to 400MB as you suggest?
I'll check out that URL.
Cheers,
Paul.
> Paul,
>> I would like to know what /etc/system and postgresql_conf values are
recommended to deliver
Hi all, we have a Sun E3500 running Solaris 9. It's got 6x336MHz CPU and
10GB RAM.
I would like to know what /etc/system and postgresql_conf values are
recommended to deliver as much system resource as possible to Postgres. We
use this Sun box solely for single user Postgres data warehousing
work
I ran ANALYZE several times.
rgds
Antony Paul
On Mon, 07 Feb 2005 12:53:30 +0100, Jan Poslusny wrote:
> It depends on many circumstances, but, at first, simple question: Did
> you run vacuum analyze?
> I am satisfied with functional indexes - it works in my pg 7.4.x.
>
> An
Sorry I forgot to mention it. I am using 7.3.3. I will try it in 8.0.0
rgds
Antony Paul
On Mon, 7 Feb 2005 12:46:05 +0100, Steinar H. Gunderson
<[EMAIL PROTECTED]> wrote:
> On Mon, Feb 07, 2005 at 04:44:07PM +0530, Antony Paul wrote:
> > On more investigation I found that in
On more investigation I found that an index scan is not used if the query
has a function in it like lower() and an index exists for the lower()
column.
rgds
Antony Paul
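For a predicate like `lower(col) = '...'`, a plain index on `col` cannot be used, but a functional index on the expression can. A sketch with assumed names (functional indexes exist in 7.3, the version mentioned above, though 7.4+ handles general expressions better):

```sql
-- Index the same expression the WHERE clause uses, then refresh statistics:
CREATE INDEX idx_mytable_lower_col ON mytable (lower(col));
ANALYZE mytable;

-- This predicate now matches the indexed expression exactly:
SELECT * FROM mytable WHERE lower(col) = 'abc';
```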
On Mon, 7 Feb 2005 14:37:15 +0530, Antony Paul <[EMAIL PROTECTED]> wrote:
> Hi all,
> I am facing a strange proble
.
rgds
Antony Paul.
467
Swap: 501 0 501
rgds
Antony Paul
Actually the query is created like this:
the user enters the query in a user interface and can type any character
in the query criteria, i.e. % and _ can be at any place. The user has the
freedom to choose query columns as well. The query is against a single
table.
rgds
Antony Paul
On Tue, 25 Jan
://archives.postgresql.org/pgsql-sql/2004-11/msg00285.php
It says that index is not used if the search string begins with a % symbol.
rgds
Antony Paul
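A b-tree index can only serve LIKE patterns anchored at the start (`'abc%'`), not patterns with a leading `%`. One workaround sketch for suffix searches, assuming a `reverse()` function (built into core from 9.1; older versions need a small PL/pgSQL helper) and hypothetical names:

```sql
-- Index the reversed string, then rewrite the suffix search as a prefix search:
CREATE INDEX idx_t_col_rev ON t (reverse(col));

-- col LIKE '%abc'  becomes  reverse(col) LIKE 'cba%', which is anchored:
SELECT * FROM t WHERE reverse(col) LIKE 'cba%';
```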
On Mon, 24 Jan 2005 20:58:54 +1100, Russell Smith <[EMAIL PROTECTED]> wrote:
> On Mon, 24 Jan 2005 08:18 pm, Antony Paul wrote:
> > Hi,
&
I used PreparedStatements to avoid SQL injection attacks, and that is the
best way to do it in JDBC.
rgds
Antony Paul
On Mon, 24 Jan 2005 09:01:49 -0500, Merlin Moncure
<[EMAIL PROTECTED]> wrote:
> Russell wrote:
> > I am not sure what the effect of it being prepared will be, however
performance? If creating an index can help, then
should the index be created on lower case or upper case?
rgds
Antony Paul
Hi all,
I have a table which has more than 20 records. I need to get
the records which match like this:
where today::date = '2004-11-05';
This is the only condition in the query. There is a btree index on the
column today.
Is there any way to optimise it?
rgds
A
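If "today" is not already of type date, the cast in `today::date = '2004-11-05'` hides the column from the plain b-tree index. Two usual fixes, sketched with assumed table name and column type:

```sql
-- Either index the expression actually used in the WHERE clause:
CREATE INDEX idx_t_today_date ON t ((today::date));

-- or, if "today" is a timestamp, use a half-open range the plain index can serve:
SELECT * FROM t
WHERE today >= '2004-11-05' AND today < '2004-11-06';
```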
ve to take into account your
expected number of concurrent connections.
Paul
relationship between PvA and PvB on a
row-by-row basis.
Have you considered using cursors?
--
Paul Thomas
+------------------------------+---------------------------------+
| Thomas Micro Systems Limited | Software Solutions for Business |
| Computer Consultants         | http
index but a type mis-match (e.g., an int4 field referencing
an int8 field)
Either of these will cause a sequential table scan and poor performance.
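A minimal sketch of the second case, with hypothetical table names: the join-key types differ, so older planners fall back to a sequential scan, and aligning the types restores the index path.

```sql
-- child.parent_id is int4 but parent.id is int8: the cross-type comparison
-- defeats the index on old releases. Match the types instead:
ALTER TABLE child ALTER COLUMN parent_id TYPE int8;   -- 8.0+ syntax
-- (pre-8.0: add an explicit cast in the query instead,
--  e.g. ... WHERE parent.id = child.parent_id::int8)
```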
--
Paul Thomas
Having trouble with one table (see time to count records below!).
Fairly new to postgres so any help much appreciated.
It only contains 9,106 records - as you can see from:
select count(id) from project
count
9106
1 row(s)
Total runtime: 45,778.813 ms
There are only 3 fields:
id
integer
nex
eID_key" on "tblForumMessages"
~ (cost=0.00..8037.33 rows=2150 width=223) (actual time=0.153..0.153
rows=0 loops=1)
~ Index Cond: ("fk_iParentMessageID" = 90)
~ Total runtime: 0.323 ms
SELECT COUNT(*) FROM "tblForumMessages" WHERE "fk_iParentMessageID"
On 04/08/2004 13:45 Paul Serby wrote:
Can anyone give a good reference site/book for getting the most out of
your postgres server.
All I can find is contradicting theories on how to work out your
settings.
This is what I followed to setup our db server that serves our web
applications.
http
Can anyone give a good reference site/book for getting the most out of
your postgres server.
All I can find is contradicting theories on how to work out your settings.
This is what I followed to setup our db server that serves our web
applications.
http://www.phpbuilder.com/columns/smith2001082
or 2 years now and have yet to discover any key
sequence which makes any sense. But then I don't do drugs so my perception
is probably at odds with the originators of Emacs ;)
--
Paul Thomas
her queries
are slightly slower than under Oracle on the same
hardware but nothing like this.
Usual questions:
have you vacuumed the table recently?
what are your postgresql.conf settings?
can you show us EXPLAIN ANALYZE output rather than just EXPLAIN output?
On 17/06/2004 17:54 Vitaly Belman wrote:
Is it possible to download the Visual Explain only (link)? I only see
that you can download the whole ISO (which I hardly need).
You can get it from CVS and build it yourself.
--
Paul Thomas
On 17/06/2004 12:10 Adam Witney wrote:
Will this run on other platforms? OSX maybe?
It's a Java app so it runs on any platform with a reasonably modern
Java VM.
--
Paul Thomas
04','182','153','6','2004','0')
DESC
OFFSET 0 LIMIT 20;
I expect that pg is having to evaluate your function every time it does a
compare within its sort. Something like
SELECT t1.value1,t1.value2,
getday_total(..) AS
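The usual fix is to evaluate the function once per row in a subquery and sort on the precomputed column; a sketch completing the truncated suggestion above (the `getday_total` argument list was elided in the excerpt, so a placeholder argument is used, and `OFFSET 0` acts as an optimization fence on releases of that era):

```sql
SELECT value1, value2, day_total
FROM (
  SELECT t1.value1, t1.value2,
         getday_total(t1.value1) AS day_total   -- placeholder arguments
  FROM t1
) AS sub
ORDER BY day_total DESC
OFFSET 0 LIMIT 20;
```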
One big caveat re. the "SAME" striping strategy is that readahead can
really hurt you on an OLTP workload.
Mind you, if you're going from a few disks to a caching array with many
disks, it'll be hard to not have a big improvement.
But if you push the envelope of the array with a "SAME" configuration,
read
I'm confused why you say the system is 70% busy: the vmstat output
shows 70% *idle*.
The vmstat you sent shows good things and ambiguous things:
- si and so are zero, so you're not paging/swapping. That's always step
1; you're fine.
- bi and bo (physical IO) shows pretty high numbers for how many
2004, Paul Tuckfield wrote:
If you are having a "write storm" or bursty writes that's burying
performance, a scsi raid controler with writeback cache will greatly
improve the situation, but I do believe they run around $1-2k. If
it's write specific problem, the cache matte
it's very good to understand specific choke points you're trying to
address by upgrading so you dont get disappointed. Are you truly CPU
constrained, or is it memory footprint or IO thruput that makes you
want to upgrade?
IMO The best way to begin understanding system choke points is vmstat
o
The king of statistics in these cases is probably vmstat. One can
drill down on specific things from there, but first you should send
some vmstat output.
Reducing cache -> reducing IO suggests to me the OS might be paging out
shared buffers. This is indicated by activity in the "si" and "so
I'm guessing you have a 4 cpu box:
1 99 percent busy process on a 4 way box == about 25% busy overall.
On May 5, 2004, at 6:03 AM, Tom Lane wrote:
"Cyrille Bonnet" <[EMAIL PROTECTED]> writes:
Should I be worried that Postgres is eating up 99% of my CPU??? Or is
this
*expected* behaviour?
It's not
Dave:
Why would test and set increase context switches?
Note that it *does not increase* context switches when the two threads
are on the two cores of a single Xeon processor. (use taskset to force
affinity on linux)
Scenario:
If the two test and set processes are testing and setting the same bi
y underestimated for today's
average hardware configuration (1+GHz, 0.5+GB RAM, fast FSB, fast HDD).
It seems to me a better strategy to force that 1% of users to "downgrade"
their cfg. than vice-versa.
regards
ch
This has been discussed many times before. Check the
.4.
Yes, I've seen other benchmarks which also show that.
--
Paul Thomas
.)
On Apr 20, 2004, at 1:02 PM, Paul Tuckfield wrote:
I tried to test how this is related to cache coherency, by forcing
affinity of the two test_run.sql processes to the two cores
(pipelines? threads) of a single hyperthreaded xeon processor in an
smp xeon box.
When the processes are allowed to
I tried to test how this is related to cache coherency, by forcing
affinity of the two test_run.sql processes to the two cores (pipelines?
threads) of a single hyperthreaded xeon processor in an smp xeon box.
When the processes are allowed to run on distinct chips in the smp box,
the CS storm h
bout what the value should be..)
(b) is determined by the dastardly trick of actually sampling the data in
the table!!! That's what analyze does. It samples your table(s) and uses
the result to feed into its decision about when to flip between
sequential and index scans.
Hope this helps.
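The sampling depth itself is adjustable when the default estimate flips the plan the wrong way; a hedged sketch with assumed table and column names:

```sql
-- Raise the per-column sample size, then re-sample the table:
ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 100;
ANALYZE mytable;
```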