We are using a cloud server.
*This is the memory info:*
free -h
             total       used       free     shared    buffers     cached
Mem:           15G        15G       197M       194M       121M        14G
-/+ buffers/cache:        926M        14G
Swap:          15G        32M        15G
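A quick sanity check on the numbers above: the "-/+ buffers/cache" row is derived from the Mem row, since buffers and page cache are reclaimable. A small Python sketch using the figures shown (values approximated, since `-h` rounds its units):

```python
# Figures taken from the free -h output above, converted to GiB (approximate).
total   = 15.0
used    = 15.0
buffers = 121 / 1024.0   # 121M
cached  = 14.0

# The "-/+ buffers/cache" row: memory held by applications alone,
# i.e. what is really "used" once reclaimable cache is subtracted.
app_used = used - buffers - cached
app_free = total - app_used
print(f"app-used ~{app_used:.2f}G (free reports 926M), app-free ~{app_free:.1f}G")
```

So although the Mem row shows 15G used, nearly all of it is page cache; the box is not actually short of memory.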
*this
l, so a rather old and slow box) and I could sort
1E6 rows of 128 random bytes in 5.6 seconds. Even if I kept the first 96
bytes constant (so only the last 32 were random), it took only 21
seconds. Either this CPU is really slow or the data is heavily skewed -
is it possible that all dimensions exc
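The prefix effect described above is easy to reproduce outside the database: when rows share a long constant prefix, every comparison must scan past the shared bytes before reaching anything that distinguishes two rows. A hedged Python sketch (same 128-byte rows as in the post, smaller row count for a quick run):

```python
import os, time

def make_rows(n, fixed_prefix_len):
    # 128-byte rows: a constant prefix followed by random bytes
    prefix = b"x" * fixed_prefix_len
    return [prefix + os.urandom(128 - fixed_prefix_len) for _ in range(n)]

def time_sort(rows):
    t0 = time.perf_counter()
    rows.sort()
    return time.perf_counter() - t0

n = 100_000  # the post used 1E6; scaled down for a quick demonstration
t_random = time_sort(make_rows(n, 0))    # fully random keys: comparisons decide early
t_prefix = time_sort(make_rows(n, 96))   # 96 shared bytes scanned on every comparison
print(f"random: {t_random:.2f}s  96-byte common prefix: {t_prefix:.2f}s")
```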
On 2015-05-29 10:55:44 +0200, Peter J. Holzer wrote:
wdsah= explain analyze select facttablename, columnname, term, concept_id,
t.hidden, language, register
from term t where facttablename='facttable_stat_fta4' and
columnname='einheit' and exists (select 1 from facttable_stat_fta4 f
)
(There was no analyze on facttable_stat_fta4 (automatic or manual) on
facttable_stat_fta4 between those two tests, so the statistics on
facttable_stat_fta4 shouldn't have changed - only those for term.)
hp
--
_ | Peter J. Holzer| I want to forget all about both belts
is the
frequency the same, every row where einheit='kg' has berechnungsart='m'
and every row where einheit='EUR' has berechnungsart='n'. So I don't see
why two different execution plans are chosen.
hp
--
_ | Peter J. Holzer| I want to forget all about both belts
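The puzzle above (identical frequencies, yet different plans) is typically a selectivity-estimation issue: without multivariate statistics, the planner multiplies per-column selectivities as if the columns were independent, which badly underestimates the match count for perfectly correlated columns. A minimal sketch with synthetic rows, not the real term/facttable data:

```python
# Perfectly correlated columns, as in the post: einheit='kg' implies
# berechnungsart='m', and einheit='EUR' implies berechnungsart='n'.
rows = [("kg", "m")] * 500 + [("EUR", "n")] * 500

sel_kg = sum(1 for e, _ in rows if e == "kg") / len(rows)   # 0.5
sel_m  = sum(1 for _, b in rows if b == "m") / len(rows)    # 0.5

# Independence assumption, planner-style, for
# WHERE einheit='kg' AND berechnungsart='m':
estimated = sel_kg * sel_m * len(rows)                # 250 rows
actual    = sum(1 for r in rows if r == ("kg", "m"))  # 500 rows
print(f"estimated {estimated:.0f} vs actual {actual}")
```

(Later PostgreSQL releases address exactly this with extended statistics, i.e. CREATE STATISTICS on the correlated column pair.)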
On Wed, Jul 23, 2014 at 6:21 AM, Rural Hunter ruralhun...@gmail.com wrote:
What's wrong and how can I improve the planning performance?
What is constraint exclusion set to?
--
Douglas J Hunley (doug.hun...@gmail.com)
Hi,
My application has highly data-intensive operations (a high number of inserts:
1500 per sec.). I switched my application from MySQL to PostgreSQL. When I
compare performance reports between MySQL and PostgreSQL, I found that
there is a huge difference in disk writes and disk space taken. Below
have
no experience with databases past 50M rows, so my questions are just so you
can line up the right info for when the real experts get online :-)
Regards, David
On 16/08/12 11:23, J Ramesh Kumar wrote:
Hi,
My application has highly data-intensive operations (a high number of
inserts
a specific table instead of whole database ?
Thanks,
Ramesh
On Thu, Aug 16, 2012 at 10:09 AM, Scott Marlowe scott.marl...@gmail.com wrote:
Please use plain text on the list, some folks don't have mail readers
that can handle html easily.
On Wed, Aug 15, 2012 at 10:30 PM, J Ramesh Kumar rameshj1
Hi,
My application is performing 1600 inserts per second and 7 updates per
second. The updates occur only in a small table which has only 6 integer
columns. The inserts go to all the other daily tables. My application
creates around 75 tables per day. No updates/deletes occur in those 75
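Part of the disk-write gap in MySQL-vs-PostgreSQL comparisons like this is commit frequency: each committed transaction forces a durable write. A sketch of the effect using sqlite3, which here merely stands in for any database that syncs on commit; this is not the poster's actual schema or workload:

```python
import os, sqlite3, tempfile, time

def timed_insert(db_path, rows, one_txn):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE samples (ts INTEGER, val REAL)")
    t0 = time.perf_counter()
    for r in rows:
        conn.execute("INSERT INTO samples VALUES (?, ?)", r)
        if not one_txn:
            conn.commit()  # per-row commit: one durable write per insert
    conn.commit()
    n = conn.execute("SELECT count(*) FROM samples").fetchone()[0]
    conn.close()
    return time.perf_counter() - t0, n

rows = [(i, i * 0.5) for i in range(500)]
with tempfile.TemporaryDirectory() as d:
    t_per_row, n1 = timed_insert(os.path.join(d, "a.db"), rows, one_txn=False)
    t_batched, n2 = timed_insert(os.path.join(d, "b.db"), rows, one_txn=True)
print(f"per-row commits: {t_per_row:.3f}s  one transaction: {t_batched:.3f}s")
```

The same principle is why batching inserts into fewer transactions (and settings like synchronous_commit) matter so much for a 1500-inserts-per-second workload.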
On Sun, Sep 11, 2011 at 1:36 PM, Ogden li...@darkstatic.com wrote:
As someone who migrated a RAID 5 installation to RAID 10, I am getting far
better read and write performance on heavy calculation queries. Writing on
the RAID 5 really made things crawl. For lots of writing, I think RAID 10 is
Sorry, meant to send this to the list.
For really big data-warehousing, this document really helped us:
http://pgexperts.com/document.html?id=49
On Sun, Sep 11, 2011 at 5:22 PM, Maciek Sakrejda msakre...@truviso.com wrote:
performance guidelines, I recommend Greg Smith's PostgreSQL 9.0 High
Performance [1] (disclaimer: I used to work with Greg and got a free
copy)
I'll second that. PostgreSQL 9.0 High Performance is an excellent
On Wed, Aug 17, 2011 at 1:55 PM, Ogden li...@darkstatic.com wrote:
What about the OS itself? I put the Debian linux sysem also on XFS but
haven't played around with it too much. Is it better to put the OS itself on
ext4 and the /var/lib/pgsql partition on XFS?
We've always put the OS on
On Mon, Apr 25, 2011 at 10:04 PM, Rob Wultsch wult...@gmail.com wrote:
Tip from someone that manages thousands of MySQL servers: Use InnoDB
when using MySQL.
Granted, my knowledge of PostgreSQL (and even MSSQL) far surpasses my
knowledge of MySQL, but if InnoDB has such amazing benefits as
Not sure if this is the right list...but:
Disclaimer: I realize this is comparing apples to oranges. I'm not
trying to start a database flame-war. I just want to say thanks to
the PostgreSQL developers who make my life easier.
I manage thousands of databases (PostgreSQL, SQL Server, and
On Thu, Apr 21, 2011 at 3:04 PM, Scott Marlowe scott.marl...@gmail.com wrote:
Just because you've been walking around with a gun pointing at your
head without it going off does not mean walking around with a gun
pointing at your head is a good idea.
+1
--
Sent via pgsql-performance mailing
On Thu, Mar 17, 2011 at 10:13 AM, Jeff thres...@torgo.978.org wrote:
hey folks,
Running into some odd performance issues between a few of our db boxes.
We've noticed similar results both in OLTP and data warehousing conditions here.
Opteron machines just seem to lag behind *especially* in
On Tue, Feb 08, 2011 at 03:52:31PM -0600, Kevin Grittner wrote:
Scott Marlowe scott.marl...@gmail.com wrote:
Greg Smith g...@2ndquadrant.com wrote:
Kevin and I both suggested a fast plus timeout then immediate
behavior is what many users seem to want.
Are there any settings in
On Thu, Feb 03, 2011 at 12:44:23PM -0500, Chris Browne wrote:
mladen.gog...@vmsinfo.com (Mladen Gogala) writes:
Hints are not even that complicated to program. The SQL parser should
compile the list of hints into a table and optimizer should check
whether any of the applicable access
On Sun, Jan 30, 2011 at 05:18:15PM -0500, Tom Lane wrote:
Andres Freund and...@anarazel.de writes:
What happens if you change the
left join event.origin on event.id = origin.eventid
into
join event.origin on event.id = origin.eventid
?
The EXISTS() requires that origin is
Odds are that a table of 14 rows will more likely be cached in RAM
than a table of 14 million rows. PostgreSQL would certainly be more
openminded to using an index if chances are low that the table is
cached. If the table *is* cached, though, what point would there be
in reading an index?
Also,
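The trade-off described above is, in essence, the planner's cost arithmetic. A toy model with made-up constants loosely patterned on the default seq_page_cost/random_page_cost settings (this is a hypothetical sketch, not PostgreSQL's actual costing formulas):

```python
def seq_scan_cost(pages, rows, seq_page_cost=1.0, cpu_tuple_cost=0.01):
    # read every page sequentially, examine every row
    return pages * seq_page_cost + rows * cpu_tuple_cost

def index_scan_cost(matching_rows, random_page_cost=4.0, cpu_index_tuple_cost=0.005):
    # each matching row may need a random heap fetch
    return matching_rows * (random_page_cost + cpu_index_tuple_cost)

# A 14-row table fits in one page: the seq scan is trivially cheap,
# so an index lookup buys nothing.
small_seq = seq_scan_cost(pages=1, rows=14)
small_idx = index_scan_cost(matching_rows=1)

# A 14-million-row table: the index wins by orders of magnitude.
big_seq = seq_scan_cost(pages=200_000, rows=14_000_000)
big_idx = index_scan_cost(matching_rows=1)
print(small_seq, small_idx, big_seq, big_idx)
```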
On Wednesday 17 November 2010 15:26:56 Eric Comeau wrote:
This is not directly a PostgreSQL performance question but I'm hoping
some of the chaps that build high IO PostgreSQL servers on here can help.
We build file transfer acceleration s/w (and use PostgreSQL as our
database) but we need
Tom Lane t...@sss.pgh.pa.us wrote in message
news:25116.1277047...@sss.pgh.pa.us...
Davor J. dav...@live.com writes:
Suppose 2 functions: factor(int,int) and offset(int, int).
Suppose a third function: convert(float,int,int) which simply returns
$1*factor($2,$3)+offset($2,$3)
All three
the functions. So, as far as I understand the
Postgres workings, this shouldn't pose a problem.
Regards,
Davor
Tom Lane t...@sss.pgh.pa.us wrote in message
news:25116.1277047...@sss.pgh.pa.us...
Davor J. dav...@live.com writes:
Suppose 2 functions: factor(int,int) and offset(int, int).
Suppose a third
I think I have read what is to be read about queries being prepared in
plpgsql functions, but I still can not explain the following, so I thought
to post it here:
Suppose 2 functions: factor(int,int) and offset(int, int).
Suppose a third function: convert(float,int,int) which simply returns
) AND
(sens_chan_data_timestamp = '2008-06-18 00:00:00'::timestamp without time
zone))
Total runtime: 694.968 ms
Szymon Guz mabew...@gmail.com wrote in message
news:aanlktimb8-0kzrrbddqgxnz5tjdgf2t3ffbu2lvx-...@mail.gmail.com...
2010/6/19 Davor J. dav...@live.com
I think I have read what
On Friday 04 June 2010 14:17:35 Jon Schewe wrote:
Some interesting data about different filesystems I tried with
PostgreSQL and how it came out.
I have an application that is backed in postgres using Java JDBC to
access it. The tests were all done on an opensuse 11.2 64-bit machine,
on the
On Wednesday 02 June 2010 13:37:37 Mozzi wrote:
Hi
Thanx mate Create Index seems to be the culprit.
Is it normal to just use 1 cpu tho?
If it is a single-threaded process, then yes.
And a Create index on a single table will probably be single-threaded.
If you now start a create index on a
On Sat, Mar 20, 2010 at 10:47:30PM -0500, Andy Colson wrote:
I guess, for me, once I started using PG and learned enough about it (all
db have their own quirks and dark corners) I was in love. It wasn't
important which db was fastest at xyz, it was which tool do I know, and
trust, that
On Tue, Mar 23, 2010 at 03:22:01PM -0400, Tom Lane wrote:
Ross J. Reedstrom reeds...@rice.edu writes:
Andy, you are so me! I have the exact same one-and-only-one mission
critical mysql DB, but the gatekeeper is my wife. And experience with
that instance has made me love and trust
Let's say you have one partitioned table, tbl_p, partitioned according to
the PK p_pk. I have made something similar with triggers, basing myself on
the manual for making partitioned tables.
According to the manual, optimizer searches the CHECKs of the partitions to
determine which table(s) to
On Mon, 26 Oct 2009 11:52:22 -0400, Merlin Moncure mmonc...@gmail.com
wrote:
Do you not have an index on last_snapshot.domain_id?
that, and also try rewriting a query as JOIN. There might be
difference in performance/plan.
Thanks, it runs better (average 240s, not 700s) with the index.
On Mon, 26 Oct 2009 14:09:49 -0400, Tom Lane t...@sss.pgh.pa.us wrote:
Michal J. Kubski michal.kub...@cdt.pl writes:
[ function that creates a bunch of temporary tables and immediately
joins them ]
It'd probably be a good idea to insert an ANALYZE on the temp tables
after you fill them
Hi,
On Fri, 23 Oct 2009 16:56:36 +0100, Grzegorz Jaśkiewicz
gryz...@gmail.com wrote:
On Fri, Oct 23, 2009 at 4:49 PM, Scott Mead
scott.li...@enterprisedb.com wrote:
Do you not have an index on last_snapshot.domain_id?
that, and also try rewriting a query as JOIN. There might be
Hi,
Is there any way to get the query plan of the query run in the stored
procedure?
I am running the following one and it takes 10 minutes in the procedure
when it is pretty fast standalone.
Any ideas would be welcome!
# EXPLAIN ANALYZE SELECT m.domain_id, nsr_id FROM nsr_meta m,
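One standard answer to this question is the auto_explain contrib module (available from PostgreSQL 8.4), which can log the plans of statements executed inside functions. A sketch of the relevant session settings; the same parameters can go in postgresql.conf:

```sql
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log every statement's plan
SET auto_explain.log_analyze = on;            -- include actual row counts/times (adds overhead)
SET auto_explain.log_nested_statements = on;  -- statements inside functions too
```

The plans then appear in the server log rather than in the psql session.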
Excellent. I'll take a look at this and report back here.
Ross
On Mon, Feb 23, 2009 at 04:17:00PM -0500, Tom Lane wrote:
Ross J. Reedstrom reeds...@rice.edu writes:
Summary: C client and large-object API python both send bits in
reasonable time, but I suspect there's still room
[note: sending a message that's been sitting in 'drafts' since last week]
Summary: C client and large-object API python both send bits in
reasonable time, but I suspect there's still room for improvement in
libpq over TCP: I'm suspicious of the 6x difference. Detailed analysis
will probably find
On Thu, Feb 19, 2009 at 02:09:04PM +0100, PFC wrote:
python w/ psycopg (or psycopg2), which wraps libpq. Same results w/
either version.
I've seen psycopg2 saturate a 100 Mbps ethernet connection (direct
connection with crossover cable) between postgres server and client during
On Tue, Feb 17, 2009 at 12:20:02AM -0700, Rusty Conover wrote:
Try running tests with ttcp to eliminate any PostgreSQL overhead and
find out the real bandwidth between the two machines. If its results
are also slow, you know the problem is TCP related and not PostgreSQL
related.
I
On Tue, Feb 17, 2009 at 01:59:55PM -0700, Rusty Conover wrote:
On Feb 17, 2009, at 1:04 PM, Ross J. Reedstrom wrote:
What is the client software you're using? libpq?
python w/ psycopg (or psycopg2), which wraps libpq. Same results w/
either version.
I think I'll try network sniffing
On Tue, Feb 17, 2009 at 03:14:55PM -0600, Ross J. Reedstrom wrote:
On Tue, Feb 17, 2009 at 01:59:55PM -0700, Rusty Conover wrote:
What is the client software you're using? libpq?
python w/ psycopg (or psycopg2), which wraps libpq. Same results w/
either version.
It's not python
Recently I've been working on improving the performance of a system that
delivers files stored in postgresql as bytea data. I was surprised at
just how much of a penalty I find moving from a domain socket connection to
a TCP connection, even localhost. For one particular 40MB file (nothing
outrageous)
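The raw transport difference can be measured without libpq at all: push the same ~40MB through an AF_UNIX socket pair and through loopback TCP. A hedged Python sketch, raw sockets only, so it isolates the kernel transport cost from anything libpq or psycopg adds:

```python
import socket, threading, time

PAYLOAD = b"x" * (40 * 1024 * 1024)  # ~40MB, about the size of the file in the post

def transfer_time(sender, receiver):
    got = [0]
    def drain():
        # receive until the whole payload has arrived
        while got[0] < len(PAYLOAD):
            chunk = receiver.recv(1 << 16)
            if not chunk:
                break
            got[0] += len(chunk)
    t = threading.Thread(target=drain)
    t.start()
    t0 = time.perf_counter()
    sender.sendall(PAYLOAD)
    t.join()
    elapsed = time.perf_counter() - t0
    assert got[0] == len(PAYLOAD)
    return elapsed

ua, ub = socket.socketpair()                  # AF_UNIX pair (what a local libpq connection uses)
srv = socket.create_server(("127.0.0.1", 0))  # loopback TCP
tc = socket.create_connection(srv.getsockname())
tsock, _ = srv.accept()

t_unix = transfer_time(ua, ub)
t_tcp = transfer_time(tc, tsock)
print(f"unix socket: {t_unix:.3f}s  loopback TCP: {t_tcp:.3f}s")
for s in (ua, ub, tc, tsock, srv):
    s.close()
```

If the raw numbers here are close but the libpq numbers are not, the overhead is in the protocol/client layer rather than the transport.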
There are a few things you didn't mention...
First off, what is the context this database is being used in? Is it the
backend for a web server? Data warehouse? Etc?
Second, you didn't mention the use of indexes. Do you have any indexes on
the table in question, and if so, does EXPLAIN
On May 21, 2008, at 12:33 AM, Shane Ambler wrote:
Size can affect performance as much as anything else.
For a brief moment, I thought the mailing list had been spammed. ;-)
J. Andrew Rogers
examples
probably, but you get my point I hope :)
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net
indexes, does clustering on one index negatively impact
queries that use the other indexes?
5) is it better to cluster on a compound index (index on (lastname, firstname)) or
on the underlying index (index on lastname)?
tia
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http
On Wednesday 27 February 2008 12:40:57 Bill Moran wrote:
In response to Douglas J Hunley [EMAIL PROTECTED]:
After reviewing
http://www.postgresql.org/docs/8.3/static/sql-cluster.html a couple of
times, I have some questions:
1) it says to run analyze after doing a cluster. i'm assuming
On Wednesday 27 February 2008 13:35:16 Douglas J Hunley wrote:
2) is there any internal data in the db that would allow me to
programmatically determine which tables would benefit from being
clustered? 3) for that matter, is there info to allow me to determine
which index it should
wiki
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net
Drugs may lead to nowhere, but at least it's the scenic route.
---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send
On Tuesday 19 February 2008 17:53:45 Greg Smith wrote:
On Tue, 19 Feb 2008, Douglas J Hunley wrote:
The db resides on a HP Modular Storage Array 500 G2. 4x72.8Gb 15k rpm
disks. 1 raid 6 logical volume. Compaq Smart Array 6404 controller
You might consider doing some simple disk tests
grant you that it's a 5.1G tar file, but 7 hours seems excessive.
Is that kind of timeframe 'abnormal' or am I just impatient? :) If the former,
I can provide whatever you need, just ask for it.
Thanks!
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http
On Tuesday 19 February 2008 13:22:58 Tom Lane wrote:
Richard Huxton [EMAIL PROTECTED] writes:
Douglas J Hunley wrote:
I spent a whopping seven hours restoring a database late Fri nite for a
Oh, and have you tweaked the configuration settings for the restore?
Lots of work_mem, turn fsync
On Tuesday 05 June 2007 10:34:04 Douglas J Hunley wrote:
On Monday 04 June 2007 17:11:23 Gregory Stark wrote:
Those plans look like they have a lot of casts to text in them. How have
you defined your indexes? Are your id columns really text?
project table:
Indexes:
project_pk PRIMARY
On Monday 04 June 2007 17:17:03 Heikki Linnakangas wrote:
And did you use the same encoding and locale? Text operations on
multibyte encodings are much more expensive.
The db was created as:
createdb -E UNICODE -O user dbname
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User
btree (folder_id)
item_name btree (name)
and yes, the 'id' column is always: character varying type
And you don't have a 7.4 install around to compare the plans do you?
I have a 7.3.19 db, if that would be useful
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http
an 'initdb' to
another location on the same box and then copied those values into the config
file. That's cool to do, I assume?
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net
Cowering in a closet is starting to seem like a reasonable plan
there's no need to reply direct. I can get the
replies from the list
Thanks again for everyone's assistance thus far. Y'all rock!
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net
I feel like I'm diagonally parked in a parallel universe
'
AND (sfuser2.username='nobody' AND field_value2.value_class='Open');
takes 0m9.506s according to time.. it's attached as explain2
TIA, again
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net
It's not the pace of life that concerns me, it's
, and is done in such a fashion as to work on all our
supported dbs (pgsql, oracle, mysql).
Thanks a ton for the input thus far
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net
Anything worth shooting is worth shooting twice. Ammo is cheap. Life
to
the file? The file I sent is the working copy from the machine in question.
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net
Does it worry you that you don't talk any kind of sense?
. There is 8Gb of RAM in the machine, and another 8Gb of swap.
Thank you in advance for any and all assistance you can provide.
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net
Handy Guide to Modern Science:
1. If it's green or it wiggles, it's
suggest that XFS is a fine
and safe choice for your application.
J. Andrew Rogers
with 64-bit Linux on
Opterons because the AMD64 systems tend to be both faster and
cheaper. Architectures like Sparc have never given us problems, but
they have not exactly thrilled us with their performance either.
Cheers,
J. Andrew Rogers
significantly from the P4 in
capability.
J. Andrew Rogers
?
COPY table FROM STDIN using psql on the server
I should have gprof numbers on a similarly set up test machine soon ...
--
Daniel J. Luke
++
| * [EMAIL PROTECTED] * |
| *-- http://www.geeklair.net
On May 30, 2006, at 3:59 PM, Daniel J. Luke wrote:
I should have gprof numbers on a similarly set up test machine
soon ...
gprof output is available at http://geeklair.net/~dluke/
postgres_profiles/
(generated from CVS HEAD as of today).
Any ideas are welcome.
Thanks!
--
Daniel J. Luke
)?
Thanks for any insight!
--
Daniel J. Luke
++
| * [EMAIL PROTECTED] * |
| *-- http://www.geeklair.net -* |
++
| Opinions
potentially
have people querying it constantly, so I can't remove and re-create
the index.
--
Daniel J. Luke
++
| * [EMAIL PROTECTED] * |
| *-- http://www.geeklair.net
:) I'll keep searching the list archives and see if I find
anything else (I did some searching and didn't find anything that I
hadn't already tried).
Thanks!
--
Daniel J. Luke
++
| * [EMAIL PROTECTED
On May 24, 2006, at 4:13 PM, Steinar H. Gunderson wrote:
On Wed, May 24, 2006 at 04:09:54PM -0400, Daniel J. Luke wrote:
no warnings in the log (I did change the checkpoint settings when I
set up the database, but didn't notice an appreciable difference in
insert performance).
How about
we're upgrading from 8.1.3 to 8.1.4 today).
--
Daniel J. Luke
++
| * [EMAIL PROTECTED] * |
| *-- http://www.geeklair.net
don't think that's currently limiting performance).
--
Daniel J. Luke
++
| * [EMAIL PROTECTED] * |
| *-- http://www.geeklair.net
Hey guys, how have you been? This is quite a newbie
question, but I need to ask it. I'm trying to wrap my mind around the syntax of
JOIN and why and when to use it. I understand the concept of making a query go
faster by creating indexes, but it seems that when I want data from multiple
tables
If I want my database to go faster due to X, then I would think that the
issue is about performance. I wasn't aware of a particular constraint on X.
I have more than a rudimentary understanding of what's going on here; I was
just hoping that someone could shed some light on the basic principle
Yes, that helps a great deal. Thank you so much.
- Original Message -
From: Richard Huxton dev@archonet.com
To: [EMAIL PROTECTED]
Cc: pgsql-performance@postgresql.org
Sent: Thursday, January 26, 2006 11:47 AM
Subject: Re: [PERFORM] Query optimization with X Y JOIN
[EMAIL PROTECTED]
I have the answer I've been looking for and I'd like to share it with all.
After help from you guys, it appeared that the real issue was using an index
for my ORDER BY X DESC clauses. For some reason that doesn't make sense to me,
Postgres doesn't support this, when it seems like it should automatically.
Here's some C to use to create the operator classes, seems to work ok.
---
#include "postgres.h"
#include <string.h>
#include "fmgr.h"
#include "utils/date.h"
/* For date sorts */
PG_FUNCTION_INFO_V1(ddd_date_revcmp);
Datum ddd_date_revcmp(PG_FUNCTION_ARGS){
    DateADT a = PG_GETARG_DATEADT(0);
    DateADT b = PG_GETARG_DATEADT(1);
    /* reversed comparison for DESC ordering; the function body was cut off
       in the archived excerpt and is reconstructed here */
    PG_RETURN_INT32(b < a ? -1 : (b > a ? 1 : 0));
}
I'm trying to query a table with 250,000+ rows. My
query requires I provide 5 columns in my "order by" clause:
select column
from table
where column = '2004-3-22 0:0:0'
order by
    ds.receipt desc,
    ds.carrier_id asc,
    ds.batchnum asc,
    encounternum asc,
    ds.encounter_id asc
limit 100 offset 0
wrong ?
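A five-key ORDER BY with mixed ASC/DESC like the one above can only be satisfied directly by an index whose per-column directions match (or are all exactly reversed); otherwise the planner must sort every qualifying row before applying the LIMIT. The equivalent sort, sketched in Python with hypothetical rows standing in for ds.*:

```python
import random

random.seed(1)
rows = [
    {"receipt": random.randint(0, 9), "carrier_id": random.randint(0, 4),
     "batchnum": random.randint(0, 99), "encounternum": random.randint(0, 9),
     "encounter_id": random.randint(0, 999)}
    for _ in range(1000)
]

def order_key(r):
    # DESC on receipt expressed by negation; the remaining four keys ascending
    return (-r["receipt"], r["carrier_id"], r["batchnum"],
            r["encounternum"], r["encounter_id"])

top100 = sorted(rows, key=order_key)[:100]   # LIMIT 100 OFFSET 0
```

The fix on the database side is one compound index whose directions mirror the clause, e.g. (receipt DESC, carrier_id, batchnum, encounternum, encounter_id), rather than five single-column indexes.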
- Original Message -
From: Josh Berkus josh@agliodbs.com
To: pgsql-performance@postgresql.org
Cc: [EMAIL PROTECTED]
Sent: Tuesday, January 17, 2006 5:25 PM
Subject: Re: [PERFORM] Multiple Order By Criteria
J,
I have an index built for each of these columns in my order
I've read all of this info, closely. I wish when I was searching for an
answer for my problem these pages came up. Oh well.
I am getting an idea of what I need to do to make this work well. I was
wondering if there is more information to read on how to implement this
solution in a more simple
and the upgrade cost is below the noise floor for
most database servers.
J. Andrew Rogers
On Dec 12, 2005, at 2:19 PM, Vivek Khera wrote:
On Dec 12, 2005, at 5:16 PM, J. Andrew Rogers wrote:
We've swapped out the DIMMs on MegaRAID controllers. Given the
cost of a standard low-end DIMM these days (which is what the LSI
controllers use last I checked), it is a very cheap upgrade
AMD added quad-core processors to their public roadmap for 2007.
Beyond 2007, the quad-cores will scale up to 32 sockets
(using Direct Connect Architecture 2.0)
Expect Intel to follow.
douglas
On Nov 16, 2005, at 9:38 AM, Steve Wampler wrote:
[...]
Got it - the cpu is only
Hey, you can say what you want about my style, but you
still haven't pointed to even one article from the vast literature
that you claim supports your argument. And I did include a
smiley. Your original email that PostgreSQL is wrong and
that you are right led me to believe that you, like
A blast from the past is forwarded below.
douglas
Begin forwarded message:
From: Tom Lane [EMAIL PROTECTED]
Date: August 23, 2005 3:23:43 PM EDT
To: Donald Courtney [EMAIL PROTECTED]
Cc: pgsql-performance@postgresql.org, Frank Wiles [EMAIL PROTECTED], gokulnathbabu manoharan [EMAIL
Ron Peacetree sounds like someone talking out of his _AZZ_.
He can save his unreferenced flapdoodle for his SQL Server
clients. Maybe he will post references so that we may all
learn at the feet of Master Peacetree. :-)
douglas
On Oct 4, 2005, at 7:33 PM, Ron Peacetree wrote:
pg is
you are doing a lot of math on
the results.
YMMV, as always. Recommendations more specific than Opterons rule, Xeons
suck depend greatly on what you plan on doing with the database.
Cheers,
J. Andrew Rogers
v2.6.12 kernel that I know of is FC4, which
is not really supported in the enterprise sense of course.
J. Andrew Rogers
. Using the patched kernel, one gets the
performance most people were expecting.
The v2.6.12+ kernels are a bit new, but they contain a very important
performance patch for systems like the one above. It would definitely be
worth testing if possible.
J. Andrew Rogers
it
is at least as fast as XFS for Postgres.
Since XFS is more mature than JFS on Linux, I go with XFS
by default. If some tragically bad problems develop with
XFS I may reconsider that position, but we've been very
happy with it so far. YMMV.
cheers,
J. Andrew Rogers
of
date anyway).
J. Andrew Rogers
needs.
Cheers,
J. Andrew Rogers
A good one page discussion on the future of SCSI and SATA can
be found in the latest CHIPS (The Department of the Navy Information
Technology Magazine, formerly CHIPS AHOY) in an article by
Patrick G. Koehler and Lt. Cmdr. Stan Bush.
Click below if you don't mind being logged visiting Space and
You asked for it! ;-)
If you want cheap, get SATA. If you want fast under
*load* conditions, get SCSI. Everything else at this
time is marketing hype, either intentional or learned.
Ignoring dollars, expect to see SCSI beat SATA by 40%.
* * * What I tell you three times is true * * *
Also,
Tom Lane wrote:
You might try the attached patch (which I just applied to HEAD).
It cuts down the number of acquisitions of the BufMgrLock by merging
adjacent bufmgr calls during a GIST index search. [...]
Thanks - I applied it successfully against 8.0.0, but it didn't seem to
have a noticeable
Tom Lane wrote:
I'm not completely convinced that you're seeing the same thing,
but if you're seeing a whole lot of semops then it could well be.
I'm seeing ~280 semops/second with spinlocks enabled and ~80k
semops/second (~4 mil. for 100 queries) with --disable-spinlocks, which
increases total
Oleg Bartunov wrote:
On Thu, 3 Feb 2005, Marinos J. Yannikos wrote:
concurrent access to GiST indexes isn't possible at the moment. I [...]
there should be no problem with READ access.
OK, thanks everyone (perhaps it would make sense to clarify this in the
manual).
I'm willing to see some
} = \&reaper; }
$SIG{CHLD} = \&reaper;
for $i (1..$n)
{
    if (fork() > 0) { $running++; }
    else
    {
        my $dbh = DBI->connect('dbi:Pg:host=daedalus;dbname=censored', 'root', '',
            { AutoCommit => 1 }) || die "!db";
        for my $j (1..$nq)
        {
            my $sth = $dbh->prepare($sql