On Sun, Jan 11, 2009 at 8:08 PM, Robert Haas wrote:
>> Where you *will* have some major OS risk is with testing-level software
>> or "bleeding edge" Linux distros like Fedora. Quite frankly, I don't
>> know why people run Fedora servers -- if it's Red Hat compatibility you
>> want, there's CentOS.
M. Edward (Ed) Borasky wrote:
Greg Smith wrote:
Right, this is why I only rely on Linux deployments using a name I
trust: Dell.
Returning to reality, the idea that there are brands you can buy that
make all your problems go away is rather optimistic. The number of
"branded" servers I've see
Greg Smith wrote:
> Right, this is why I only rely on Linux deployments using a name I
> trust: Dell.
>
> Returning to reality, the idea that there are brands you can buy that
> make all your problems go away is rather optimistic. The number of
> "branded" servers I've seen that are just nearly o
On Sun, 11 Jan 2009, M. Edward (Ed) Borasky wrote:
And you're probably in pretty good shape with Debian stable and the RHEL
respins like CentOS.
No one is in good shape until they've done production-level load testing
on the system and have run the sort of "unplug it under load" tests that
S
Robert Haas wrote:
>> Where you *will* have some major OS risk is with testing-level software
>> or "bleeding edge" Linux distros like Fedora. Quite frankly, I don't
>> know why people run Fedora servers -- if it's Red Hat compatibility you
>> want, there's CentOS.
>
> I've had no stability problems
> Where you *will* have some major OS risk is with testing-level software
> or "bleeding edge" Linux distros like Fedora. Quite frankly, I don't
> know why people run Fedora servers -- if it's Red Hat compatibility you
> want, there's CentOS.
I've had no stability problems with Fedora. The worst
Luke Lonergan wrote:
> Not to mention the #1 cause of server faults in my experience: OS kernel bug
> causes a crash. Battery backup doesn't help you much there.
Well now ... that very much depends on where you *got* the server OS and
how you administer it. If you're talking a correctly-maintained
On Sun, Jan 11, 2009 at 4:16 PM, Luke Lonergan wrote:
> Not to mention the #1 cause of server faults in my experience: OS kernel bug
> causes a crash. Battery backup doesn't help you much there.
I've been using pgsql since way back, in a lot of projects, and almost
all of them on some flavor
Markus Wanner wrote:
> M. Edward (Ed) Borasky wrote:
>> Check out the work of Jens Axboe and Alan Brunelle, specifically the
>> packages "blktrace" and "fio".
>> BTW ... I am working on my blktrace howto even as I type this. I don't
>> have an ETA -- that's going to depend on how long it takes me t
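fio can approximate a database-style I/O load from a small job file. As a rough sketch (the job name, block size, and sizes below are my own illustrative assumptions, not anything from the posts above), random 8 KB reads loosely mimic PostgreSQL index-scan traffic on a cold cache:

```ini
; hypothetical fio job -- all values are illustrative assumptions
[pg-randread]
rw=randread        ; random reads, like index lookups on a cold cache
bs=8k              ; PostgreSQL's default block size
size=1g            ; per-job test file size
directory=/tmp     ; change to the filesystem under test
ioengine=libaio    ; Linux async I/O; use 'sync' if libaio is unavailable
iodepth=16         ; keep some requests in flight
runtime=60
time_based         ; run for the full 60 s even if the file is read through
```

Save it as, say, pg-randread.fio and run `fio pg-randread.fio`; blktrace can be run alongside to watch the resulting request stream at the block layer.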
Not to mention the #1 cause of server faults in my experience: OS kernel bug
causes a crash. Battery backup doesn't help you much there.
Fsync of log is necessary IMO.
That said, you could use a replication/backup strategy to get a consistent
snapshot in the past if you don't mind losing some
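For what the "fsync of log is necessary" point looks like in practice, here is a minimal postgresql.conf sketch (my own illustrative fragment, not from the thread; synchronous_commit exists as of 8.3):

```
# illustrative postgresql.conf fragment
fsync = on                   # keep on: protects against the power-off class of failure
wal_sync_method = fdatasync  # platform-dependent; benchmark the alternatives
synchronous_commit = on      # 8.3+: setting this 'off' risks losing a few recent
                             # commits on a crash, but never corrupts the database
```

That synchronous_commit trade-off is the corruption-safe version of "don't mind losing some" recent transactions.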
On Sun, 11 Jan 2009, Glyn Astill wrote:
--- On Sun, 11/1/09, Scott Marlowe wrote:
They also told me we could never lose power in the hosting center
because it was so wonderful and redundant and that I was wasting my time.
Well, that's just plain silly; at the very least there's always going to
--- On Sun, 11/1/09, Scott Marlowe wrote:
> They also told me we could never lose power in the hosting center
> because it was so wonderful and redundant and that I was wasting my time.
Well, that's just plain silly; at the very least there's always going to be
some breakers/fuses in between
On Sun, Jan 11, 2009 at 11:07 AM, Scott Marlowe wrote:
> running pgsql. The others, running Oracle, DB2, Ingres and a few
> other databases all came back up with corrupted data on their drives
> and forced nearly day long restores.
Before anyone thinks I'm slagging all other databases here, the
On Sat, Jan 10, 2009 at 2:56 PM, Ron wrote:
> At 10:36 AM 1/10/2009, Gregory Stark wrote:
>>
>> "Scott Marlowe" writes:
>>
>> > On Sat, Jan 10, 2009 at 5:40 AM, Ron wrote:
>> >> At 03:28 PM 1/8/2009, Merlin Moncure wrote:
>> >>> just be aware of the danger: hard reset (power off) class of fail
> Here are some numbers from an old pgfouine report of mine:
> - query peak: 378 queries/s
> - select: 53.1%, insert: 3.8%, update: 2.2%, delete: 2.8%
>
>
Actually the percentages are wrong (I think pgfouine also counts other types
of query, like SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE;):
These a
Hi All,
I ran pgbench. Here are some results:
-bash-3.1$ pgbench -c 50 -t 1000
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
number of clients: 50
number of transactions per client: 1000
number of transactions actually processed: 50000/50000
tps = 377.351354 (including connections establishing)
Hi,
M. Edward (Ed) Borasky wrote:
> Check out the work of Jens Axboe and Alan Brunelle, specifically the
> packages "blktrace" and "fio".
Thank you for these pointers; that looks pretty interesting.
> There are also some more generic filesystem benchmarks like "iozone" and
> "bonnie++".
Those a
Ron wrote:
I think the idea is that with SSDs or a RAID with a battery-backed cache
you can leave fsync on and not have any significant performance hit, since
the seek times are very fast for SSD. They have limited bandwidth, but
bandwidth to the WAL is rarely an issue -- just latency.
Yes, Greg