On 5/27/2014 9:09 AM, Shaun Thomas wrote:
On 05/27/2014 10:00 AM, Albe Laurenz wrote:
I know that Oracle recommends it - they even built an NFS client
into their database server to make the most of it.
That's odd. Every time the subject of NFS comes up, it's almost
immediately shot down wit
On 5/22/2014 7:27 AM, Dimitris Karampinas wrote:
Is there any way to get the call stack of a function when profiling
PostgreSQL with perf ?
I configured with --enable-debug, I run a benchmark against the system
and I'm able to identify a bottleneck.
40% of the time is spent on a spinlock yet I
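A minimal sketch of pulling call stacks out of perf, assuming a build with
--enable-debug as above (the PID and duration are placeholders; --call-graph
dwarf uses the debug info when the binary lacks frame pointers):

  perf record --call-graph dwarf -p 12345 -- sleep 30
  perf report    # expand an entry to walk the recorded stacks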
On 11/26/2013 7:26 AM, Craig James wrote:
For those of us with small (a few to a dozen servers), we'd like to
get out of server maintenance completely. Can anyone with experience
on a cloud VM solution comment? Do the VM solutions provided by the
major hosting companies have the same good pe
On 9/4/2013 3:01 AM, Johan Loubser wrote:
I am tasked with getting specs for a postgres database server for the
core purpose of running moodle at our university.
The main question at the moment is 12-core AMD or 6/8-core (E series)
Intel.
What would be the most important metric in planning a
On 5/22/2013 8:18 AM, Greg Smith wrote:
They can easily hit that number. Or they can do this:
Device:     r/s    w/s  rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm  %util
sdd     2702.80  19.40  19.67   0.16     14.91    273.68  71.74   0.37 100.00
sdd     2707.60  13.00  19.53   0.10     14.78    2
On 3/20/2013 6:44 PM, David Rees wrote:
On Thu, Mar 14, 2013 at 4:37 PM, David Boreham wrote:
You might want to evaluate the performance you can achieve with a single-SSD
(use several for capacity by all means) before considering a RAID card + SSD
solution.
Again I bet it depends on the
On 3/14/2013 3:37 PM, Mark Kirkwood wrote:
I'm not convinced about the need for a BBU with SSD - you *can* use them
without one, you just need to make sure about suitable longevity and also
the presence of (proven) power off protection (as discussed
previously). It is worth noting that using unproven o
On 3/13/2013 9:29 PM, Mark Kirkwood wrote:
Just going through this now with a vendor. They initially assured us
that the drives had "end to end protection" so we did not need to
worry. I had to post stripdown pictures from Intel's s3700, showing
obvious capacitors attached to the board before I
On 3/13/2013 1:23 PM, Steve Crawford wrote:
What concerns me more than wear is this:
InfoWorld Article:
http://www.infoworld.com/t/solid-state-drives/test-your-ssds-or-risk-massive-data-loss-researchers-warn-213715
Referenced research paper:
https://www.usenix.org/conference/fast13/understa
On 12/11/2012 8:11 PM, Evgeny Shishkin wrote:
Quoting
http://www.storagereview.com/intel_ssd_dc_s3700_series_enterprise_ssd_review
Heh. A fine example of the kind of hand-waving of which I spoke ;)
Higher performance is certainly a benefit, although at present we can't
saturate even a single
On 12/11/2012 7:49 PM, Evgeny Shishkin wrote:
Yeah, s3700 looks promising, but the SATA interface is a limiting factor
for this drive.
I'm looking towards SMART ssd
http://www.storagereview.com/smart_storage_systems_optimus_sas_enterprise_ssd_review
What don't you like about SATA?
I prefer to avo
On 12/11/2012 7:38 PM, Evgeny Shishkin wrote:
Which drives would you recommend? Besides Intel 320 and 710.
Those are the only drive types we have deployed in servers at present
(almost all 710, but we have some 320 for less mission-critical
machines). The new DC-S3700 Series looks nice too, but
On 12/11/2012 7:20 PM, Evgeny Shishkin wrote:
Oh, there is no 100% safe system.
In this case we're discussing specifically "safety in the event of power
loss shortly after the drive indicates to the controller that it has
committed a write operation". Some drives do provide 100% safety against
On 12/11/2012 7:13 PM, Evgeny Shishkin wrote:
Yes, I am aware of this issue. Never experienced this on either Intel
520 or OCZ Vertex 3.
Have you heard of them on this list?
People have done plug-pull tests and reported the results on the list
(sometime in the past couple of years).
But you
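For reference, plug-pull tests of this kind are typically run with Brad
Fitzpatrick's diskchecker.pl; the hostnames and paths below are placeholders
and the syntax is from memory, so check the script's own usage text:

  # on a second machine that stays powered (the listener):
  ./diskchecker.pl -l
  # on the machine under test, writing to a filesystem on the SSD:
  ./diskchecker.pl -s listener-host create /mnt/ssd/test_file 500
  # pull the power cord mid-run, boot back up, then:
  ./diskchecker.pl -s listener-host verify /mnt/ssd/test_file

Any errors at the verify step mean the drive acknowledged writes it then lost.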
Here are the SELECT only pgbench test results from my E5-2620 machine,
with HT on and off:
HT off:
bash-4.1$ /usr/pgsql-9.2/bin/pgbench -T 600 -j 48 -c 48 -S
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 48
number of threads: 48
d
On 11/8/2012 6:58 AM, Shaun Thomas wrote:
On 11/07/2012 09:16 PM, David Boreham wrote:
bash-4.1$ /usr/pgsql-9.2/bin/pgbench -T 600 -j 48 -c 48
Unfortunately without -S, you're not really testing the processors. A
regular pgbench can fluctuate more than that due to writing and
checkp
Well, the results are in and at least in this particular case
conventional wisdom is overturned. Not a huge benefit, but throughput is
definitely higher with HT enabled and nthreads >> ncores:
HT off :
bash-4.1$ /usr/pgsql-9.2/bin/pgbench -T 600 -j 48 -c 48
starting vacuum...end.
transaction t
On 11/7/2012 6:37 AM, Devrim GÜNDÜZ wrote:
HT should be good for file servers, or say many of the app servers, or
small web/mail servers. PostgreSQL relies on the CPU power, and since
the HT CPUs don't have the same power as the original CPU, when the OS
submits a job to that particular HTed CPU, qu
On 11/6/2012 9:16 PM, Mark Kirkwood wrote:
I've been benchmarking a E5-4640 (4 socket) and hyperthreading off
gave much better scaling behaviour in pgbench (gentle rise and flatten
off), whereas with hyperthreading on there was a dramatic falloff
after approx number clients = number of (hype
I'm bringing up a new type of server using Intel E5-2620 (unisocket)
which was selected for good SpecIntRate performance vs cost/power (201
for $410 and 95W).
Was assuming it was 6-core but I just noticed it has HT which is
currently enabled since I see 12 cores in /proc/cpuinfo
Question f
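A quick way to tell physical cores from HT siblings (the output values shown
are what a single-socket, HT-enabled E5-2620 should report; lscpu ships with
util-linux on recent distros):

  lscpu | egrep 'Thread|Core|Socket'
  #   Thread(s) per core:    2
  #   Core(s) per socket:    6
  #   Socket(s):             1
  grep -c ^processor /proc/cpuinfo    # 12 logical CPUs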
On 10/2/2012 2:20 AM, Glyn Astill wrote:
newer R910s recently all of a sudden went dead to the world; no prior symptoms
showing in our hardware and software monitoring, no errors in the os logs,
nothing in the dell drac logs. After a hard reset it's back up as if
nothing happened, and it's an is
On 9/28/2012 9:46 AM, Craig James wrote:
Your best warranty would be to have the confidence to do your own
repairs, and to have the parts on hand. I'd seriously consider
putting your own system together. Maybe go to a few sites with
pre-configured machines and see what parts they use. Order th
On 9/27/2012 3:16 PM, Claudio Freire wrote:
Careful with AMD, since many (I'm not sure about the latest ones)
cannot saturate the memory bus when running single-threaded. So, great
if you have a high concurrent workload, quite bad if you don't.
Actually we test memory bandwidth with John McCalp
On 9/27/2012 2:47 PM, Shaun Thomas wrote:
On 09/27/2012 02:40 PM, David Boreham wrote:
I think the newer CPU is the clear winner with a specintrate
performance of 589 vs 432.
The comparisons you linked to had 24 absolute threads pitted against
32, since the newer CPUs have a higher maximum
On 9/27/2012 2:55 PM, Scott Marlowe wrote:
Whatever you do, go for the Intel ethernet adaptor option. We've had so many
headaches with integrated Broadcom NICs. :(
Sound advice, but not a get-out-of-jail card unfortunately: we had a
horrible problem with the Intel e1000 driver in RHEL for sever
On 9/27/2012 1:56 PM, M. D. wrote:
I'm in Belize, so what I'm considering is from ebay, where it's
unlikely that I'll get the warranty. Should I consider some other
brand instead? To build my own or buy custom might be an option too,
but I would not get any warranty.
I don't have any recent ex
On 9/27/2012 1:37 PM, Craig James wrote:
We use a "white box" vendor (ASA Computers), and have been very happy
with the results. They build exactly what I ask for and deliver it in
about a week. They offer on-site service and warranties, but don't
pressure me to buy them. I'm not locked in to
On 9/27/2012 1:11 PM, M. D. wrote:
I want to buy a new server, and am contemplating a Dell R710 or the
newer R720. The R710 has the x5600 series CPU, while the R720 has the
newer E5-2600 series CPU.
For this the best data I've found (excepting actually running tests on
the physical hardwar
On 5/16/2012 11:01 AM, Merlin Moncure wrote:
Although your assertion is 100% supported by Intel's marketing numbers,
there are some contradicting numbers out there that show the drives
offering pretty similar performance. For example, look here:
http://www.anandtech.com/show/4902/intel-ssd-710-200g
On 5/15/2012 12:16 PM, Rosser Schwarz wrote:
As the other posters in this thread have said, your best bet is
probably the Intel 710 series drives, though I'd still expect some
320-series drives in a RAID configuration to be pretty
stupendously fast.
One thing to mention is that the 710 are
On 5/15/2012 9:21 AM, Віталій Тимчишин wrote:
We've reached the point where we would like to try SSDs. We've got a
central DB currently 414 GB in size and increasing. Working set does
not fit into our 96GB RAM server anymore.
So, the main question is what to take. Here's what we've got:
1) I
So the Intel 710 kind of sucks latency-wise. Is it because it is also
heavily reading, and maybe WAL should not be put on it?
A couple quick thoughts:
1. There are a lot of moving parts in the system besides the SSDs.
It will take some detailed analysis to determine the cause for the
outlyi
On 10/28/2011 12:26 PM, Tomas Vondra wrote:
For example the Intel 710 SSD has a sequential write speed of 210MB/s,
while a simple SATA 7.2k drive can write about 50-100 MB/s for less than
1/10 of the 710 price.
Bulk data transfer rates mean almost nothing in the context of a database
(unless you
On 10/28/2011 12:40 AM, Amitabh Kant wrote:
Sadly, 710 is not that easily available around here at the moment.
All three sizes are in stock at newegg.com, if you have a way to export
from the US to your location.
On 10/25/2011 8:55 AM, Claudio Freire wrote:
But what about unexpected failures. Faulty electronics, stuff like
that? I really don't think a production server can work without at
least raid-1.
Same approach : a server either works or it does not. The transition
between working and not workin
On 10/24/2011 4:47 PM, Claudio Freire wrote:
What about redundancy?
How do you swap an about-to-die SSD?
Software RAID-1?
The approach we take is that we use 710 series devices which have
predicted reliability similar to all the other components in the
machine, therefore the unit of replace
On 10/24/2011 3:31 PM, Merlin Moncure wrote:
4. Consider using Intel 710 series rather than 320 (pay for them with the
money saved from #3 above). Those devices have much, much higher specified
endurance than the 320s and since your DB is quite small you only need to
buy one of them.
71
A few quick thoughts:
1. 320 would be the only SSD I'd trust from your short-list. It's the
only one with proper protection from unexpected power loss.
2. Multiple RAID'ed SSDs sounds like (vast) overkill for your workload.
A single SSD should be sufficient (will get you several thousand TPS o
I ran a test using Intel's timed workload wear indication feature on a
100G 710 series SSD.
The test works like this: you reset the wear indication counters, then
start running some workload (in my case pgbench at scale 100 for 4 hours).
During the test run a wear indication attribute can b
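For anyone wanting to repeat this: the counters are ordinary SMART
attributes, so they can be read with smartmontools (the device name is a
placeholder; per Intel's documentation, 226 is the timed-workload media
wear, 228 the workload timer, and 233 the lifetime media wearout indicator;
resetting the timed counters needs Intel's own tool, not smartctl):

  smartctl -A /dev/sdb | egrep '^(226|228|233)'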
On 10/2/2011 11:55 PM, Gregory Gerard wrote:
Which repo did you get them from?
http://yum.postgresql.org/9.1/redhat/rhel-$releasever-$basearch
On 10/2/2011 10:49 PM, David Boreham wrote:
There are some OS differences between the old and new servers : old is
running CentOS 5.7 while the new is running 6.0.
Old server has atime enabled while new has relatime mount option
specified. Both are running PG 9.1.1 from the yum repo.
Also
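To rule the atime difference out, the mount option can be overridden on the
fly (device and mountpoint here are placeholders):

  mount -o remount,noatime /var/lib/pgsql
  # or persistently in /etc/fstab:
  #   /dev/sdb1  /var/lib/pgsql  ext4  noatime  0 2

noatime is a superset of relatime, so this takes access-time writes out of
the comparison entirely.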
On 10/2/2011 10:35 PM, Greg Smith wrote:
That sounds about the same performance as the 320 drive I tested
earlier this year then. You might try duplicating some of the
benchmarks I ran on that:
http://archives.postgresql.org/message-id/4d9d1fc3.4020...@2ndquadrant.com
Thanks. Actually I had
On 10/2/2011 6:26 PM, Gregory Gerard wrote:
If I may ask what were your top three candidates before choosing the intel?
All the other options considered viable were using traditional
rotational disks.
I personally don't have any confidence in the other SSD vendors today,
except perhaps for Fusi
On 10/2/2011 2:33 AM, Arjen van der Meijden wrote:
Given the fact that you can get two 320's for the price of one 710,
its probably always a bit difficult to actually make the choice
(unless you want a fixed amount of disks and the best endurance
possible for that).
One thing I'd add to th
On 10/1/2011 9:22 PM, Andy wrote:
Do you have an Intel 320? I'd love to see tests comparing 710 to 320
and see if it's worth the price premium.
Good question. I don't have a 320 drive today, but will probably get one
for testing soon.
However, my conclusion based on the Intel spec documents
On 10/1/2011 10:00 PM, Gregory Gerard wrote:
How does this same benchmark compare on similar (or same) hardware but
with magnetic media?
I don't have that data at present :(
So far I've been comparing performance with our current production
machines, which are older.
Those machines use 'rapto
I have a 710 (Lyndonville) SSD in a test server. Ultimately we'll run
capacity tests using our application (which in turn uses PG), but it'll
take a while to get those set up. In the meantime, I'd be happy to
entertain running whatever tests folks here would like to suggest,
spare time-permitting
On 8/24/2011 1:32 PM, Tomas Vondra wrote:
Why is that important? It's simply a failure of electronics and it has
nothing to do with the wear limits. It simply fails without prior
warning from the SMART.
In the cited article (actually in all articles I've read on this
subject), the failures we
On 8/24/2011 11:41 AM, Greg Smith wrote:
I've measured the performance of this drive from a couple of
directions now, and it always comes out the same. For PostgreSQL,
reading or writing 8K blocks, I'm seeing completely random workloads
hit a worst-case of 20MB/s; that's just over 2500 IOPS
On 8/24/2011 11:23 AM, Andy wrote:
According to the specs for database storage:
"Random 4KB arites: Up to 600 IOPS"
Is that for real? 600 IOPS is *atrociously terrible* for an SSD. Not
much faster than mechanical disks.
The underlying (Flash block) write rate really is terrible (and slower
On 8/24/2011 11:17 AM, Merlin Moncure wrote:
hm, I think they need to reconcile those numbers with the ones on this
page:
http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-320-series.html
600 write IOPS vs 3.7k/23k.
They do provide an explanation (and what I find
Apologies if this has already been posted here (I hadn't seen it before
today, and can't find a previous post).
This will be of interest to anyone looking at using SSDs for database
storage :
http://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-enterprise-server-storage-applicati
On 8/22/2011 10:55 PM, Scott Marlowe wrote:
If you're running linux and thus stuck with the command line on the
LSI, I'd recommend anything else. MegaRAID is the hardest RAID
control software to use I've ever seen. If you can spring for the
money, get the Areca 1680:
http://www.newegg.com/Produ
On 8/23/2011 5:14 AM, Robert Schnabel wrote:
I'm by no means an expert but it seems to me if you're going to choose
between two 6 GB/s cards you may as well put SAS2 drives in. I have
two Adaptec 6445 cards in one of my boxes and several other Adaptec
series 5 controllers in others. They su
I'm buying a bunch of new machines (all will run an application that heavily
writes to PG). These machines will have 2 spindle groups in a RAID-1 config.
Drives will be either 15K SAS, or 10K SATA (I haven't decided if it is
better to buy the faster drives, or drives that are identical to the o
This comment by the author I think tends to support my theory that most
of the failures seen are firmware related (and not due to actual hardware
failures, which as I mentioned in the previous thread are very rare and
should occur roughly equally often in hard drives as SSDs):
/As we expla
On 5/11/2011 9:17 PM, Aren Cambre wrote:
So here's what's going on.
If I were doing this, considering the small size of the data set, I'd
read all the data into memory.
Process it entirely in memory (with threads to saturate all the
processors you have).
Then write the results to the DB.
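For that last step, a bulk load is much faster than row-at-a-time INSERTs; a
minimal sketch (database, table and file names are made up for the example):

  psql -d mydb -c "\copy results FROM '/tmp/results.csv' WITH CSV"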
On 5/9/2011 6:32 PM, Craig James wrote:
Maybe this is a dumb question, but why do you care? If you have 1TB
RAM and just a little more actual disk space, it seems like your
database will always be cached in memory anyway. If you "eliminate
the cache effect," won't the benchmark actually give y
On 5/9/2011 3:11 PM, Merlin Moncure wrote:
The problem with bonnie++ is that the results aren't valid, especially
the read tests. I think it refuses to even run unless you set special
switches.
I only care about writes ;)
But definitely, be careful with the tools. I tend to prefer small
prog
hm, if it was me, I'd write a small C program that just jumped
directly on the device around and did random writes assuming it wasn't
formatted. For sequential read, just flush caches and dd the device
to /dev/null. Probably someone will suggest better tools though.
I have a program I wrote ye
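The sequential-read half of that suggestion looks something like this, run
as root (the device name is a placeholder; this only reads, but double-check
the device before pointing anything at it):

  sync
  echo 3 > /proc/sys/vm/drop_caches    # drop page/dentry/inode caches
  dd if=/dev/sdb of=/dev/null bs=1M count=4096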
On 4/6/2011 9:19 PM, gnuo...@rcn.com wrote:
SSDs have been around for quite some time. The first that I've found is Texas
Memory. Not quite 1977, but not flash either, although they've been doing so
for a couple of years.
Well, I built my first ram disk (which of course I thought I had
inven
Had to say a quick thanks to Greg and the others who have posted
detailed test results on SSDs here.
For those of us watching for the inflection point where we can begin the
transition from mechanical to solid state storage, this data and
experience is invaluable. Thanks for sharing it.
A shor
On 8/30/2010 3:18 PM, Chris Browne wrote:
... As long as you're willing to rewrite PostgreSQL in Occam 2...
Just re-write it in Google's new language 'Go' : it's close enough to
Occam and they'd probably fund the project..
;)
Feels like I fell through a worm hole in space/time, back to inmos in
1987, and a guy from marketing has just
walked in the office going on about there's a customer who wants to use
our massively parallel hardware to speed up databases...
Do you guys have any more ideas to properly 'feel this disk at its
teeth'?
While an 'end-to-end' test using PG is fine, I think it would be easier
to determine if the drive is behaving correctly by using a simple test
program that emulates the storage semantics the WAL expects. Have it
wri
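As a first pass before writing a custom program, PostgreSQL ships a utility
that does more or less this: small sequential writes, each followed by
fsync/fdatasync/O_SYNC and friends (historically test_fsync in the source
tree, pg_test_fsync from 9.1; the path is a placeholder):

  pg_test_fsync -f /mnt/ssd/pgtest.out

A drive reporting far more fsyncs per second than its rotation rate (or its
honest flash latency) allows is caching writes it claims are committed.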
On 7/8/2010 3:18 PM, timothy.noo...@emc.com wrote:
How does the linux machine know that there is a BBU installed and to
change its behavior or change the behavior of Postgres? I am
experiencing performance issues, not with searching but more with IO.
It doesn't change its behavior at all. It'
On 7/8/2010 1:47 PM, Ryan Wexler wrote:
Thanks for the explanations that makes things clearer. It still
amazes me that it would account for a 5x change in IO.
The buffering allows decoupling of the write rate from the disk rotation
speed.
Disks don't spin that fast, at least not relative to t
James Mansion wrote:
Jakub Ouhrabka wrote:
How can we diagnose what is happening during the peaks?
Can you try forcing a core from a bunch of the busy processes? (Hmm -
does Linux have an equivalent to the useful Solaris pstacks?)
There's a 'pstack' for Linux, shipped at least in Red Hat distr
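Usage is just 'pstack <pid>'; where it isn't installed, gdb can produce the
same thing (the PID is a placeholder):

  pstack 12345
  gdb -batch -ex bt -p 12345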
Tom Lane wrote:
Having malloc/free use
an internal mutex is necessary in multi-threaded programs, but the
backend isn't multi-threaded.
Hmm...confused. I'm not following why then there is contention for the
mutex.
Surely this has to be some other mutex that is in contention, not a heap
loc
Barry Moore wrote:
I have a very slow query that I'm trying to tune. I think my
performance tuning is being complicated by the system's page cache.
If I run the query after the system has been busy with other tasks
for quite a long time then the query can take up to 8-10 minutes to
compl
Tom Lane wrote:
In case I was mistaken, this explanation makes perfect sense to me.
But then again it would indicate a 'bug' in libpq, in the sense that
it (apparently) sets TCP_NODELAY on linux but not on windows.
No, it would mean a bug in Windows in that it fails to honor TCP_NODELAY.
Alexander Staubo wrote:
No, fsync=on. The tps values are similarly unstable with fsync=off,
though -- I'm seeing bursts of high tps values followed by low-tps
valleys, a kind of staccato flow indicative of a write cache being
filled up and flushed.
Databases with checkpointing typically
Carlos H. Reimer wrote:
I've taken a look in /var/log/messages and found some temperature
messages about the disk drives:
Nov 30 11:08:07 totall smartd[1620]: Device: /dev/sda, Temperature changed 2
Celsius to 51 Celsius since last report
Can this temperature influence the performance?
Carlos H. Reimer wrote:
avg-cpu:  %user  %nice  %system  %iowait  %idle
          50.40   0.00     0.50     1.10  48.00
Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  rkB/s  wkB/s
avgrq-sz  avgqu-sz  await  svctm  %util
sda        0.00    7.80  0.40  6.40   41.60  113.60
Unfortunately, operating system virtual memory and filesystem caching
code often does exactly the opposite of what a database application
would like.
For some reason the kernel guys don't see it that way ;)
Over the years there have been various kernel features added
with the overall goal of
really makes me think that that area is just a comfortable way to
access files on disk as memory areas, with the hope of probably better
caching than non-memory-mapped files.
No, absolutely not. CreateFileMapping() does much the same thing
as mmap() in Unix.
That would explain my disturbing i
I learned the hard way that just raising it can lead to a hard
performance loss :)
I looked back in the list archives to try to find your post on the
underlying problem, but could only find this rather terse sentence.
If you have more detailed information please post or point me at it.
But...m
Guoping Zhang wrote:
a) SERVER A to SERVER B: 0.35ms
SERVER A to itself (Local host): 0.022ms
0.35ms seems rather slow. You might try investigating what's in the path.
For comparison, between two machines here (three GigE switches in the
path), I see 0.10ms RTT. Between two machines on
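The RTTs being compared here come straight from ping; the min/avg/max
summary on the last line is the number to watch (hostname and count are
placeholders):

  ping -c 100 -q server-b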
My suggestion is to look at something like this:
http://www.abmx.com/1u-supermicro-amd-opteron-rackmount-server-p-210.html
1U rackmount opteron from Supermicro that can have two dual core
opterons and 4 drives and up to 16 gigs of ram. Supermicro server
motherboards have always treated me wel
Anthony Presley wrote:
I had an interesting discussion today w/ an Enterprise DB developer and
sales person, and was told, twice, that the 64-bit linux version of
Enterprise DB (which is based on the 64-bit version of PostgreSQL 8.1)
is SIGNIFICANTLY SLOWER than the 32-bit version. Since the gu
Tom Lane wrote:
[EMAIL PROTECTED] writes:
I'm using httperf/autobench for measurements and the best result I can get
is that my system can handle a traffic of almost 1600 new con/sec.
As per PFC's comment, if connections/sec is a bottleneck for you then
the answer is to
I cannot scale beyond that value and the funny thing is that none of the
servers is swapping, or heavily loaded; neither postgres nor apache are
refusing connections.
Hearing a story like this (throughput hits a hard limit, but
hardware doesn't appear to be 100% utilized), I'd suspect
insuffi
2006-05-04 18:04:58 EDT USER=postgres DB=FIX1 [12427] PORT = [local]
ERROR: invalid memory alloc request size 18446744073709551613
Perhaps I'm off beam here, but any time I've seen an app try to allocate
a gazillion bytes, it's due to some code incorrectly calculating the size
of something (18446744073709551613 is 2^64 - 3, i.e. -3 interpreted as an
unsigned 64-bit value, which suggests a small negative size computed somewhere)
> While in general there may not be that much of a % difference between
> the 2 chips, there's a huge gap in Postgres. For whatever reason,
> Postgres likes Opterons. Way more than Intel P4-architecture chips.
It isn't only Postgres. I work on a number of other server applications
that also run
The reasons AMD has held off from supporting DDR2 until now are:
1. DDR is EOL. JEDEC is not ratifying any DDR faster than 200x2 while DDR2
standards as fast as 333x4 are likely to be ratified (note that Intel pretty
much avoided DDR, leaving it to AMD, while DDR2 is Intel's main RAM tech
My personal favorite pg platform at this time is one based on a 2 socket, dual
core ready mainboard with 16 DIMM slots combined with dual core AMD Kx's.
Right. We've been buying Tyan bare-bones boxes like this.
It's better to go with bare-bones than building boxes from bare metal
because th
Actually, that was from an article from this last month that compared
the dual core intel to the amd. for every dollar spent on the intel,
you got about half the performance of the amd. Not bigotry. fact.
But don't believe me or the other people who've seen the difference. Go
buy the I
Brendan Duddridge wrote:
Thanks for your reply. So how is that different than something like
Slony2 or pgcluster with multi-master replication? Is it similar
technology? We're currently looking for a good clustering solution
that will work on our Apple Xserves and Xserve RAIDs.
I think yo
Alan Stange wrote:
Not sure I get your point. We would want the lighter one,
all things being equal, right ? (lower shipping costs, less likely
to break when dropped on the floor)
Why would the lighter one be less likely to break when dropped on the
floor?
They'd have less kinetic energ
Alex Turner wrote:
Just pick up a SCSI drive and a consumer ATA drive.
Feel their weight.
Not sure I get your point. We would want the lighter one,
all things being equal, right ? (lower shipping costs, less likely
to break when dropped on the floor)
I suggest you read this on the difference between enterprise/SCSI and
desktop/IDE drives:
http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf
This is exactly the kind of vendor propaganda I was talking about
and it proves my point quite
> Spend a fortune on dual core CPUs and then buy crappy disks... I bet
> for most applications this system will be IO bound, and you will see a
> nice lot of drive failures in the first year of operation with
> consumer grade drives.
I guess I've never bought into the vendor story that there are
two
Steve Wampler wrote:
Joshua D. Drake wrote:
The reason you want the dual core cpus is that PostgreSQL can only
execute 1 query per cpu at a time,...
Is that true? I knew that PG only used one cpu per query, but how
does PG know how many CPUs there are to limit the number of queries?
Piccarello, James (James) wrote:
Postgres recovery time
Does anyone know what factors affect
the recovery time of postgres if it does not shutdown cleanly? With the
same size database I've seen times from a few seconds to a few
minutes. The longest time was 33 minutes. The 33 minut