William Yu wrote:
We upgraded our disk system for our main data processing server earlier
this year. After pricing out all the components, basically we had the
choice of:
LSI MegaRaid 320-2 w/ 1GB RAM+BBU + 8 15K 150GB SCSI
or
Areca 1124 w/ 1GB RAM+BBU + 24 7200RPM 250GB SATA
My mistake
Usually when simple queries take a long time to run, it's the system
tables (pg_*) that have become bloated and need vacuuming. But that's
just a random guess on my part w/o more detailed info.
Greg Stumph wrote:
Well, since I got no response at all to this message, I can only assume that
I've as
David Boreham wrote:
It isn't only Postgres. I work on a number of other server applications
that also run much faster on Opterons than the published benchmark
figures would suggest they should. They're all compiled with gcc4,
so possibly there's a compiler issue. I don't run Windows on any
of ou
[EMAIL PROTECTED] wrote:
I have an Intel Pentium D 920, and an AMD X2 3800+. These are very
close in performance. The retail price difference is:
Intel Pentium D 920 is selling for $310 CDN
AMD X2 3800+ is selling for $347 CDN
Anybody who claims that Intel is 2X more exp
Steinar H. Gunderson wrote:
On Wed, Jan 18, 2006 at 01:58:09PM -0800, William Yu wrote:
The key is getting a card with the ability to upgrade the onboard ram.
Our previous setup was a LSI MegaRAID 320-1 (128MB), 4xRAID10,
fsync=off. Replaced it with a ARC-1170 (1GB) w/ 24x7200RPM SATA2 drives
Benjamin Arai wrote:
Obviously, I have done this to improve write performance for the update
each week. My question is if I install a 3ware or similar card to
replace my current software RAID 1 configuration, am I going to see a
very large improvement? If so, what would be a ball park figure?
Luke Lonergan wrote:
Note that host-based SCSI raid cards from LSI, Adaptec, Intel, Dell, HP
and others have proven to have worse performance than a single disk
drive in many cases, whether for RAID0 or RAID5. In most circumstances
This is my own experience. Running a LSI MegaRAID in pure pass
David Lang wrote:
raid 5 is bad for random writes as you state, but how does it do for
sequential writes (for example data mining where you do a large import
at one time, but seldom do other updates). I'm assuming a controller
with a reasonable amount of battery-backed cache.
Random write per
Juan Casero wrote:
Can you elaborate on the reasons the opteron is better than the Xeon when it
comes to disk io? I have a PostgreSQL 7.4.8 box running a DSS. One of our
Opterons have 64-bit IOMMU -- Xeons don't. That means in 64-bit mode,
for transfers to >4GB, the OS must allocate the mem
Michael Riess wrote:
Well, I'd think that's where your problem is. Not only do you have a
(relatively speaking) small server -- you also share it with other
very-memory-hungry services! That's not a situation I'd like to be in.
Try putting Apache and Tomcat elsewhere, and leave the bulk of the 1GB
Alan Stange wrote:
Luke Lonergan wrote:
The "aka iowait" is the problem here - iowait is not idle (otherwise it
would be in the "idle" column).
Iowait is time spent waiting on blocking io calls. As another poster
pointed out, you have a two CPU system, and during your scan, as
iowait time i
Joshua Marsh wrote:
On 11/17/05, *William Yu* <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:
> No argument there. But it's pointless if you are IO bound.
Why would you just accept "we're IO bound, nothing we can do"? I'd do
everythi
Alex Turner wrote:
Opteron 242 - $178.00
Opteron 242 - $178.00
Tyan S2882 - $377.50
Total: $733.50
Opteron 265 - $719.00
Tyan K8E - $169.00
Total: $888.00
You're comparing the wrong CPUs. The 265 is the 2x of the 244 so you'll
have to bump up the price more although not enough to make a diffe
Welty, Richard wrote:
David Boreham wrote:
I guess I've never bought into the vendor story that there are
two reliability grades. Why would they bother making two
different kinds of bearing, motor etc ? Seems like it's more
likely an excuse to justify higher prices.
then how to account for t
David Boreham wrote:
>Spend a fortune on dual core CPUs and then buy crappy disks... I bet
>for most applications this system will be IO bound, and you will see a
>nice lot of drive failures in the first year of operation with
>consumer grade drives.
I guess I've never bought into the vendo
Alex Turner wrote:
Spend a fortune on dual core CPUs and then buy crappy disks... I bet
for most applications this system will be IO bound, and you will see a
nice lot of drive failures in the first year of operation with
consumer grade drives.
Spend your money on better Disks, and don't bother
Alex Stapleton wrote:
You're going to have to factor in the increased failure rate in your cost
measurements, including any downtime or performance degradation whilst
rebuilding parts of your RAID array. It depends on how long you're
planning for this system to be operational as well, of course.
James Mello wrote:
Unless there was a way to guarantee consistency, it would be hard at
best to make this work. Convergence on large data sets across boxes is
non-trivial, and diffing databases is difficult at best. Unless there
was some form of automated way to ensure consistency, going 8 ways i
Alex Turner wrote:
Not at random access in RAID 10 they aren't, and anyone with their
head screwed on right is using RAID 10. The 9500S will still beat the
Areca cards at RAID 10 database access patterns.
The max 256MB onboard for 3ware cards is disappointing though. While
good enough for 95% o
Merlin Moncure wrote:
You could instead buy 8 machines that total 16 cores, 128GB RAM and
It's hard to say what would be better. My gut says the 5u box would be
a lot better at handling high cpu/high concurrency problems...like your
typical business erp backend. This is pure speculation of c
Carlos Henrique Reimer wrote:
I forgot to say that it's a 12GB database...
Ok, I'll set shared buffers to 30,000 pages, but even so shouldn't "meminfo"
and "top" show some shared pages?
I heard that Redhat 9 can't handle RAM higher than 2GB very well. Is that
right?
Thanks in
Donald Courtney wrote:
I built PostgreSQL 8.1 64-bit on Solaris 10 a few months ago
and side by side with the 32-bit PostgreSQL build saw no improvement. In
fact the 64-bit result was slightly lower.
I'm not surprised 32-bit binaries running on a 64-bit OS would be faster
than 64-bit/64-bit.
Donald Courtney wrote:
in that even if you ran postgreSQL on a 64 bit address space
with larger number of CPUs you won't see much of a scale up
and possibly even a drop. I am not alone in having the *expectation*
What's your basis for believing this is the case? Why would PostgreSQL's
depend
Ron wrote:
PERC4eDC-PCI Express, 128MB Cache, 2-External Channels
Looks like they are using the LSI Logic MegaRAID SCSI 320-2E
controller. IIUC, you have 2 of these, each with 2 external channels?
A lot of people have mentioned Dell's versions of the LSI cards can be
WAY slower than the on
A 4xDC would be far more sensitive to poor NUMA code than 2xDC so I'm
not surprised I don't see performance issues on our 2xDC w/ < 2.6.12.
J. Andrew Rogers wrote:
On 7/30/05 12:57 AM, "William Yu" <[EMAIL PROTECTED]> wrote:
I haven't investigated t
I've been running 2x265's on FC4 64-bit (2.6.11-1+) and it's been
running perfect. With NUMA enabled, it runs incrementally faster than
NUMA off. Performance is definitely better than the 2x244s they replaced
-- how much faster, I can't measure since I don't have the transaction
volume to compa
pgbench -i template1
and
pgbench -c 10 -t 50 -v -d 1
and played around from there
This is on IBM pSeries, AIX5.3, PG8.0.2
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of William Yu
Sent: Tuesday, June 21, 2005 12:05 PM
To: pgsql-performance@post
My Dual Core Opteron server came in last week. I tried to do some
benchmarks with pgbench to get some numbers on the difference between
1x1 -> 2x1 -> 2x2 but no matter what I did, I kept getting the same TPS
on all systems. Any hints on what the pgbench parameters I should be using?
In terms o
Rory Campbell-Lange wrote:
Processor:
First of all I noted that we were intending to use Opteron processors. I
guess this isn't a straightforward choice because I believe Debian (our
Linux of choice) doesn't have a stable AMD64 port. However some users on
this list suggest that Opterons work ver
We are considering two RAID1 system disks, and two RAID1 data disks.
We've avoided buying Xeons. The machine we are looking at looks like
this:
Rackmount Chassis - 500W PSU / 4 x SATA Disk Drive Bays
S2882-D - Dual Opteron / AMD 8111 Chipset / 5 x PCI Slots
2x - (Dual) AMD Opteron 246
A pretty awful way is to mangle the sql statement so the other field
logical statements are like so:
select * from mytable where 0+field = 100
Tobias Brox wrote:
Is there any way to force the planner to use some specific index
while creating the plan? Other than eventually dropping
I've used LSI MegaRAIDs successfully in the following systems with both
Redhat 9 and FC3 64bit.
Arima HDAMA/8GB RAM
Tyan S2850/4GB RAM
Tyan S2881/4GB RAM
I've previously stayed away from Adaptec because we used to run Solaris
x86 and the driver was somewhat buggy. For Linux and FreeBSD, I'd be
I'm sure there's some corner case where more memory helps. If you
consider that 1GB of RAM is about $100, I'd max out memory on the
controller just for the hell of it.
Josh Berkus wrote:
Steve,
Past recommendations for a good RAID card (for SCSI) have been the LSI
MegaRAID 2x. This unit comes w
92643
order entry writes
2x248 - 235107
1x175 - 257184
4x848 - 360008
2x275 - 392634
order entry stored procedures
2x248 - 2939
1x175 - 3215
4x848 - 4500
2x275 - 4908
Greg Stark wrote:
William Yu <[EMAIL PROTECTED]> writes:
It turns out the latency in a 2xDC setup is just so much lower, and a
4-way SMP Opteron system is actually pretty damn cheap -- if you get
2xDual Core versus 4xSingle. I just ordered a 2x265 (4x1.8ghz) system
and the price was about $1300 more than a 2x244 (2x1.8ghz).
Now you might ask, is a 2xDC comparable to 4x1? Here's some benchmarks
I've found that showing D
Unfortunately, Anandtech only used Postgres a single time in their
benchmarks. And what it did show back then was a huge performance
advantage for the Opteron architecture over Xeon in this case. Where the
fastest Opterons were just 15% faster in MySQL/MSSQL/DB2 than the
fastest Xeons, it wa
The Linux kernel is definitely headed this way. The 2.6 allows for
several different I/O scheduling algorithms. A brief overview about the
different modes:
http://nwc.serverpipeline.com/highend/60400768
Although a much older article from the beta-2.5 days, more in-depth info
from one of the prog
I posted this link a few months ago and there was some surprise over the
difference in postgresql compared to other DBs. (Not much surprise in
Opteron stomping on Xeon in pgsql as most people here have had that
experience -- the surprise was in how much smaller the difference was in
other DBs.)
My experience:
1xRAID10 for postgres
1xRAID1 for OS + WAL
Jeff Frost wrote:
Now that we've hashed out which drives are quicker and more money equals
faster...
Let's say you had a server with 6 separate 15k RPM SCSI disks, what raid
option would you use for a standalone postgres server?
a) 3xRAI
ate most of the performance issues.
Then you're just left with the management issues. Getting those 20
drives stuffed in a big case and keeping a close eye on the drives since
drive failure will be a much bigger deal.
Greg Stark wrote:
William Yu <[EMAIL PROTECTED]> writes:
Using th
Problem with this strategy. You want battery-backed write caching for
best performance & safety. (I've tried IDE for WAL before w/ write
caching off -- the DB got crippled whenever I had to copy files from/to
the drive on the WAL partition -- ended up just moving WAL back on the
same SCSI drive
performance was _very_ sub par.
If someone has a simple benchmark test database to run, I would be
happy to run it on our hardware here.
Alex Turner
On Apr 6, 2005 3:30 AM, William Yu <[EMAIL PROTECTED]> wrote:
Alex Turner wrote:
I'm no drive expert, but it seems to me that our write per
Alex Turner wrote:
I'm no drive expert, but it seems to me that our write performance is
excellent. I think what most are concerned about is OLTP where you
are doing heavy write _and_ heavy read performance at the same time.
Our system is mostly read during the day, but we do a full system
update
Jeremiah Jahn wrote:
I have about 5M names stored in my DB. Currently the searches are very
quick unless they are on a very common last name, i.e. SMITH. The index
is always used, but I still hit 10-20 seconds on a SMITH or Jones
search, and I average about 6 searches a second and max out at about
3
Bruce Momjian wrote:
William Yu wrote:
You can get 64-bit Xeons also, but it takes a hit in the I/O department due
to the lack of a hardware I/O MMU which limits DMA transfers to
addresses below 4GB. This has a two-fold impact:
1) transferring data to >4GB requires first a transfer to <4GB an
You can get 64-bit Xeons also, but it takes a hit in the I/O department due
to the lack of a hardware I/O MMU which limits DMA transfers to
addresses below 4GB. This has a two-fold impact:
1) transferring data to >4GB requires first a transfer to <4GB and then a
copy to the final destination.
2) Yo
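A toy model of that double-transfer cost (the 2x factor is the idealized worst case implied by the description above, not a measured number):

```python
# Toy model (not measured): without a hardware I/O MMU, DMA into a
# buffer above 4GB must land below 4GB first and then be copied up by
# the CPU, so every such byte crosses the memory bus twice.
def bus_traffic_bytes(transfer_bytes: int, buffer_above_4gb: bool) -> int:
    copies = 2 if buffer_above_4gb else 1
    return transfer_bytes * copies

print(bus_traffic_bytes(4096, buffer_above_4gb=False))  # 4096
print(bus_traffic_bytes(4096, buffer_above_4gb=True))   # 8192
```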
Jim C. Nasby wrote:
On Tue, Feb 01, 2005 at 07:35:35AM +0100, Cosimo Streppone wrote:
You might look at Opterons, which theoretically have a higher data
bandwidth. If you're doing anything data intensive, like a sort in
memory, this could make a difference.
Would Opteron systems need 64-bit postgr
I know what I would choose. I'd get the mega server w/ a ton of RAM and skip
all the trickiness of partitioning a DB over multiple servers. Yes, your data
will grow to a point where even the XXGB can't cache everything. On the
other hand, memory prices drop just as fast. By that time, you can ebay yo
Hervé Piedvache wrote:
My point is that there is no free solution. There simply isn't.
I don't know why you insist on keeping all your data in RAM, but the
mysql cluster requires that ALL data MUST fit in RAM all the time.
I don't insist on having data in RAM, but when you use PostgreS
Hervé Piedvache wrote:
Sorry, but I don't agree with this ... Slony is a replication solution ... I
don't need replication ... what will I do when my database grows up to 50
Gb ... I'll need more than 50 Gb of RAM on each server ???
This solution is not very realistic for me ...
Have you confi
Well, you probably will need to run your own tests to get a conclusive
answer. It shouldn't be that hard -- compile once with gcc, make a copy of
the installed binaries to pgsql.gcc -- then repeat with the HP compiler.
In general though, gcc works best on x86 computers. Comparisons of
gcc on x86
My experience is RH9 auto-detects machines with >= 2GB of RAM and installs
the PAE bigmem kernel by default. I'm pretty sure the FC2/3 installer
will do the same.
[EMAIL PROTECTED] wrote:
I understand that the 2.6.* kernels are much better at large memory
support (with respect to performance issues
[EMAIL PROTECTED] wrote:
Since the optimal state is to allocate a small amount of memory to
Postgres and leave a huge chunk to the OS cache, this means you are
already hitting the PAE penalty at 1.5GB of memory.
How can I avoid hitting this penalty?
Upgrade to 64-bit processors + 64-bit linux.
I inferred this from reading up on the compressed vm project. It can be
higher or lower depending on what devices you have in your system --
however, I've read messages from kernel hackers saying Linux is very
aggressive in reserving memory space for devices because it must be
allocated at boot
Gavin Sherry wrote:
There is no problem with free Linux distros handling > 4 GB of memory. The
problem is that 32-bit hardware must make use of some less-than-efficient
mechanisms to be able to address the memory.
The threshold for using PAE is actually far lower than 4GB. 4GB is the
total memory addre
[EMAIL PROTECTED] wrote:
Now I turn hyperthreading off and readjust the conf. I found the bulk query
that was the problem:
an update of one flag of the table [8 million records, which I think is not
too much]. When I turned this query off everything went fine.
I don't know whether updating the data is much slower than i
Dave Cramer wrote:
William Yu wrote:
[EMAIL PROTECTED] wrote:
I will try to reduce shared buffer to 1536 [1.87 Mb].
1536 is probably too low. I've tested a bunch of different settings on
my 8GB Opteron server and 10K seems to be the best setting.
Be careful here, he is not using opterons
[EMAIL PROTECTED] wrote:
I will try to reduce shared buffer to 1536 [1.87 Mb].
1536 is probably too low. I've tested a bunch of different settings on my
8GB Opteron server and 10K seems to be the best setting.
also, effective_cache_size is the sum of kernel buffers + shared_buffers,
so it should be big
IDE disks lie about write completion (This can be disabled on some
drives) whereas SCSI drives wait for the data to actually be written
before they report success. It is quite
easy to corrupt a PG (Or most any db really) on an IDE drive. Check
the archives for more info.
Do we have any real i
Alex wrote:
Hi,
I recently ran pgbench against different servers and got some results I
don't quite understand.
A) EV1: Dual Xeon, 2GHz, 1GB Memory, SCSI 10Krpm, RHE3
B) Dual Pentium3 1.4GHz (Blade), SCSI Disk 10Krpm, 1GB Memory, Redhat 8
C) P4 3.2GHz, IDE 7.2Krpm, 1GB Mem, Fedora Core2
Running P
Greg Stark wrote:
William Yu <[EMAIL PROTECTED]> writes:
Biggest speedup I've found yet is the backup process (PG_DUMP --> GZIP). 100%
faster in 64-bit mode. This drastic speedup might be more the result of 64-bit
GZIP though, as I've seen benchmarks in the past showing enc
I gave -O3 a try with -funroll-loops, -fomit-frame-pointer and a few
others. Seemed to perform about the same as the default -O2 so I just
left it as -O2.
Gustavo Franklin Nóbrega wrote:
Hi William,
Which GCC flags did you use to compile PostgreSQL?
Best regards,
Gustavo Frankli
ersus
32-bit.
William Yu wrote:
I just finished upgrading the OS on our Opteron 148 from Redhat9 to
Fedora FC2 X86_64 with full recompiles of Postgres/Apache/Perl/Samba/etc.
The verdict: a definite performance improvement. I tested just a few CPU
intensive queries and many of them are a good 3
I just finished upgrading the OS on our Opteron 148 from Redhat9 to
Fedora FC2 X86_64 with full recompiles of Postgres/Apache/Perl/Samba/etc.
The verdict: a definite performance improvement. I tested just a few CPU
intensive queries and many of them are a good 30%-50% faster.
Transactional/batc
Josh Berkus wrote:
1) Query caching is not a single problem, but rather several different
problems requiring several different solutions.
2) Of these several different solutions, any particular query result caching
implementation (but particularly MySQL's) is rather limited in its
applicabilit
Ron St-Pierre wrote:
Yes, I know that it's not a very good idea, however queries are allowed
against all of those columns. One option is to disable some or all of the
indexes when we update, run the update, and recreate the indexes,
however it may slow down user queries. Because there are so many
Raoul Buzziol wrote:
I looked for some benchmarks, and I would like to know if I'm right that:
- A Dual Opteron 246 has approximately the same performance as a Dual Xeon
3GHz (Opteron a little better)
- An Opteron system is equal to or cheaper than a Xeon system.
In terms of general database performance, top of the line d
You're not getting much of a bump with this server. The CPU is
incrementally faster -- in the absolutely best case scenario where your
queries are 100% cpu-bound, that's about ~25%-30% faster.
What about using Dual Athlon MP instead of a Xeon? Would be much less expensive,
but have higher performa
Rory Campbell-Lange wrote:
The present server is a 2GHz Pentium 4/512 KB cache with 2
software-raided ide disks (Maxtors) and 1GB of RAM.
I have been offered the following 1U server which I can just about
afford:
1U server
Intel Xeon 2.8GHz 512K cache 1
512MB PC2100 DDR ECC Reg
Anjan Dave wrote:
We have a Quad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running RH9,
PG 7.4.0. There's an internal U320, 10K RPM RAID-10 setup on 4 drives.
We are expecting a pretty high load, a few thousands of 'concurrent'
users executing either select, insert, or update statements.
The qui
David Pradier wrote:
I'd like to know if there exists a system for caching the results of
queries.
If you are willing to do this at an application level, you could
calculate an MD5 for every query you plan to run and then SELECT INTO a
temporary table that's based on the MD5 sum (e.g. TMP_CACHE_4512
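A minimal sketch of that application-level cache keying (the `tmp_cache_` prefix and 12-character digest truncation are my own choices for illustration, not from the post):

```python
import hashlib

def cache_table_name(sql: str) -> str:
    """Map query text to a deterministic cache-table name."""
    digest = hashlib.md5(sql.encode("utf-8")).hexdigest()[:12]
    return "tmp_cache_" + digest

# Identical query text always yields the same table name, so the
# application can probe for the table before re-running the query.
name = cache_table_name("SELECT * FROM accounts WHERE region = 'west'")
```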
David Teran wrote:
Hi,
we are trying to speed up a database which has about 3 GB of data. The
server has 8 GB RAM and we wonder how we can ensure that the whole DB is
read into RAM. We hope that this will speed up some queries.
regards David
---(end of broadcast)---
Tom Lane wrote:
easy or cheap to get a measurement that isn't skewed by kernel caching
behavior. (You need a test file significantly larger than RAM, and
even then you'd better repeat the measurement quite a few times to see
how much noise there is in it.)
I found a really fast way in Linux to flu
Russell Garrett wrote:
WAL on single drive: 7.990 rec/s
WAL on 2nd IDE drive: 8.329 rec/s
WAL on tmpfs: 13.172 rec/s
A huge jump in performance but a bit scary having a WAL that can
disappear at any time. I'm gonna work up an rsync script and do some
power-off experiments to see how badly it gets man
Some arbitrary data processing job
WAL on single drive: 7.990 rec/s
WAL on 2nd IDE drive: 8.329 rec/s
WAL on tmpfs: 13.172 rec/s
A huge jump in performance but a bit scary having a WAL that can
disappear at any time. I'm gonna work up an rsync script and do some
power-off experiments to see how ba
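For scale, those rec/s figures work out to roughly:

```python
# Back-of-the-envelope check of the rec/s figures quoted above.
single_drive = 7.990
second_ide = 8.329
tmpfs = 13.172

pct_second_ide = (second_ide / single_drive - 1) * 100
pct_tmpfs = (tmpfs / single_drive - 1) * 100

print(round(pct_second_ide))  # 4  -> a 2nd IDE drive barely helps
print(round(pct_tmpfs))       # 65 -> tmpfs is roughly a 65% jump
```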
Shridhar Daithankar wrote:
FWIW, there are only two pieces of software that need to be 64-bit aware for
a typical server job: the kernel and glibc. The rest of the apps can do fine
as 32-bit unless you are Oracle and insist on outsmarting the OS.
In fact, running 32-bit apps on a 64-bit OS has plenty of advantages l
Jeff Bohmer wrote:
It seems I don't fully understand the bigmem situation. I've searched
the archives, googled, checked RedHat's docs, etc. But I'm getting
conflicting, incomplete and/or out of date information. Does anyone
have pointers to bigmem info or configuration for the 2.4 kernel?
Big
Jeff Bohmer wrote:
We're willing to shell out extra bucks to get something that will
undoubtedly handle the projected peak load in 12 months with excellent
performance. But we're not familiar with PG's performance on Linux and
don't like to waste money.
Properly tuned, PG on Linux runs really n
Ace's Hardware has put together a fairly comprehensive comparison
between Xeon & Opteron platforms running server apps. Unfortunately,
only MySQL "data mining" benchmarks as the review crew doesn't have that
much experience with OLTP-type systems but I'm gonna try to convince
them to add the OD
Ivar Zarans wrote:
I am experiencing strange behaviour, where simple UPDATE of one field is
very slow, compared to INSERT into table with multiple indexes. I have
two tables - one with raw data records (about 24000), where one field
In Postgres and any other DB that uses MVCC (multi-version concurr
Sean Shanny wrote:
First question is do we gain anything by moving the RH Enterprise
version of Linux in terms of performance, mainly in the IO realm as we
are not CPU bound at all? Second and more radical, has anyone run
postgreSQL on the new Apple G5 with an XRaid system? This seems like a
Tom Lane wrote:
William Yu <[EMAIL PROTECTED]> writes:
I then tried to put the WAL directory onto a ramdisk. I turned off
swapping, created a tmpfs mount point and copied the pg_xlog directory
over. Everything looked fine as far as I could tell but Postgres just
panic'd w
Josh Berkus wrote:
William,
When my current job batch is done, I'll save a copy of the dir and give
the WAL on ramdrive a test. And perhaps even buy a Sandisk at the local
store and run that through the hooper.
We'll be interested in the results. The Sandisk won't be much of a
performance te
Josh Berkus wrote:
William,
The SanDisks do seem a bit pokey at 16MBps. On the other hand, you could
get 4 of these suckers and put them in a mega-RAID-0 stripe for 64MBps. You
shouldn't need to do mirroring with a solid state drive.
I wouldn't count on RAID0 improving the speed of SANDisk's much.
This is an intriguing thought which leads me to think about a similar
solution for even a production server and that's a solid state drive for
just the WAL. What's the max disk space the WAL would ever take up?
There's quite a few 512MB/1GB/2GB solid state drives available now in
the ~$200-$500
My situation is this. We have a semi-production server where we
pre-process data and then upload the finished data to our production
servers. We need the fastest possible write performance. Having the DB
go corrupt due to power loss/OS crash is acceptable because we can
always restore from last
Rob Sell wrote:
Not being one to hijack threads, but I haven't heard of this performance hit
when using HT. I have what should by all rights be a pretty fast server: dual
2.4 Xeons with HT, a 205GB RAID 5 array, 1 gig of memory. And it is only 50%
as fast as my old server, which was a dual AMD MP 1400's w
I have never worked with a XEON CPU before. Does anyone know how it performs
running PostgreSQL 7.3.4 / 7.4 on RedHat 9 ? Is it faster than a Pentium 4?
I believe the main difference is cache memory, right? Aside from cache mem,
it's basically a Pentium 4, or am I wrong?
Well, see the problem is of
Anjan Dave wrote:
Shared_buffers (25% of RAM / 8KB) = 8589934592 * .25 / 8192 = 262144
250,000 is probably the max you can use due to the 2GB process limit
unless you recompile the Linux Kernel to use 3GB process/1GB kernel.
Yes, I've got 8GB also and I started at 262144 and kept working my way
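Spelled out (assuming an 8GB box and PostgreSQL's default 8KB block size, as in the post):

```python
# The rule of thumb above: 25% of physical RAM, in 8KB pages.
RAM_BYTES = 8 * 1024**3   # 8GB box
BLOCK_BYTES = 8192        # PostgreSQL's default block size

shared_buffers = RAM_BYTES * 25 // 100 // BLOCK_BYTES
print(shared_buffers)  # 262144
```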
So what is the ceiling on 32-bit processors for RAM? Most of the 64-bit
vendors are pushing Athlon64 and G5 as "breaking the 4GB barrier", and even
I can do the math on 2^32. All these 64-bit vendors, then, are talking
about the limit on ram *per application* and not per machine?
64-bit CPU o
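The addressing math behind the question (PAE's 36-bit physical addressing is general background, not from this thread):

```python
# A flat 32-bit pointer reaches 4GB; PAE widens the *physical* address
# to 36 bits (64GB), but each process still sees a 32-bit (4GB) virtual
# address space -- hence the per-application limit.
GB = 1024**3

print(2**32 // GB)  # 4  -> plain 32-bit limit, in GB
print(2**36 // GB)  # 64 -> PAE physical limit, in GB
```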
1) Memory - clumsily adjusted shared_buffer - tried three values: 64,
128, 256 with no discernible change in performance. Also adjusted,
clumsily, effective_cache_size to 1000, 2000, 4000 - with no discernible
change in performance. I looked at the Admin manual and googled around
for how to set
Relaxin wrote:
I have a table with 102,384 records in it, each record is 934 bytes.
Using the following select statement:
SELECT * from
PG Info: version 7.3.4 under cygwin on Windows 2000
ODBC: version 7.3.100
Machine: 500 Mhz/ 512MB RAM / IDE HDD
Under PG: Data is returned in 26 secs!!
Under SQ
Shridhar Daithankar wrote:
Just a guess here, but does a precompiled PostgreSQL for x86 vs. an x86-64
optimized one make a difference?
>
> Opteron is one place on earth you can watch difference between 32/64
> bit on same machine. Can be handy at times..
I don't know yet. I tried building a 64-bit ke
Shridhar Daithankar wrote:
Be careful here: we've seen that with hyper-threaded P4 Xeons, a system
that has very high disk I/O becomes sluggish and slow. But after disabling
hyper-threading, our system flew.
Anybody have Opteron working? How's the perform
| First of all, I would like to ask: do any of you use PostgreSQL
| within a clustered environment? Or, let me ask the question in a
| different manner: can we use PostgreSQL in a cluster environment? If
| we can, what is PostgreSQL's support method for clusters?
You could do