John A Meinel wrote:
Isn't this actually more of a problem in a hardware situation, where the
meta-data can give out? I mean, if the card you are using dies, you
can't just get another one.
With software RAID, because the meta-data is on the drives, you can pull
it out of that machine, and put it in
> Has anyone ran Postgres with software RAID or LVM on a production box?
> What have been your experience?
Yes, we have run Pg with software LVM (mirroring) against two hardware
RAID5 arrays for a couple of years. We host a production Sun box that
runs 24/7.
My experience:
* Software RAID (othe
t rate.
We have noticed a substantial improvement in performance with 8.0 vs
7.4.6. All of the update/insert problems seem to have gone away, save
WAL syncing.
I may have to take back what I said about indexes.
Olivier Sirven wrote:
On Friday 21 January 2005 at 19:18, Marty Scholes wrote:
The i
Tatsuo,
I agree completely that vacuum falls apart on huge tables. We could
probably do the math and figure out what the ratio of updated rows per
total rows is each day, but on a constantly growing table, that ratio
gets smaller and smaller, making the impact of dead tuples in the table
propo
Randolf,
You probably won't want to hear this, but this decision likely has
nothing to do with brands, models, performance or applications.
You are up against a pro salesman who is likely very good at what he
does. Instead of spewing all sorts of "facts" and statistics to your
client, the salesma
This is probably a lot easier than you would think. You say that your
DB will have lots of data, lots of updates and lots of reads.
Very likely the disk bottleneck is mostly index reads and writes, with
some critical WAL fsync() calls. In the grand scheme of things, the
actual data is likely
This was a lively debate on what was faster, single spindles or RAID.
This is important, because I keep running into people who do not
understand the performance dynamics of a RDBMS like Oracle or Pg.
Pg and Oracle make a zillion tiny reads and writes and fsync()
regularly. If your drive will c
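The "zillion tiny reads and writes and fsync()" pattern can be sketched with a small micro-benchmark. This is an illustrative stand-in, not anything from the thread; the payload size and iteration count are arbitrary assumptions.

```python
# Micro-benchmark sketch: many tiny writes, each followed by fsync(),
# the access pattern the post attributes to Pg/Oracle WAL activity.
# Payload size (512 bytes) and count are arbitrary stand-ins.
import os
import tempfile
import time

def timed_fsyncs(n=100, payload=b"x" * 512):
    """Write a small record and fsync() it n times; return total seconds."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(n):
            os.write(fd, payload)
            os.fsync(fd)  # force each tiny write to stable storage
        return time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)

if __name__ == "__main__":
    total = timed_fsyncs()
    print(f"100 tiny synced writes: {total:.3f}s "
          f"({total / 100 * 1000:.2f} ms per fsync)")
```

A drive (or controller cache) that can absorb these synchronous flushes quickly is what matters here, far more than raw sequential throughput.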
Vitaly,
This looks like there might be some room for performance improvement...
> MS> I didn't see the table structure, but I assume
> MS> that the vote_avg and
> MS> vote_count fields are in bv_bookgenres.
>
> I didn't understand you. vote_avg is stored in bv_books.
Ok. That helps. The confusion
> Hello Marty,
>
> MS> Is that a composite index?
>
> It is a regular btree index. What is a composite index?
My apologies. A composite index is one that consists of multiple fields
(aka multicolumn index). The reason I ask is that it was spending
almost half the time just searching bv_bookgenr
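A composite (multicolumn) index can be sketched as follows. This uses sqlite3 purely for portability; the same idea applies in PostgreSQL. The column names are hypothetical stand-ins, since the real bv_bookgenres structure was not shown in the thread.

```python
# Sketch of a composite (multicolumn) index using sqlite3 as a
# portable stand-in for PostgreSQL. Column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bv_bookgenres (book_id INTEGER, genre_id INTEGER)")

# A composite index covers both columns, so a lookup filtering on
# (book_id, genre_id) -- or on the leading book_id alone -- can use it.
conn.execute("CREATE INDEX idx_book_genre ON bv_bookgenres (book_id, genre_id)")

conn.executemany("INSERT INTO bv_bookgenres VALUES (?, ?)",
                 [(1, 10), (1, 11), (2, 10)])

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM bv_bookgenres "
    "WHERE book_id = 1 AND genre_id = 11").fetchall()
print(plan)  # the plan should mention idx_book_genre
```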
Not knowing a whole lot about the internals of Pg, one thing jumped out
at me, that each trip to get data from bv_books took 2.137 ms, which
came to over 4.2 seconds right there.
The problem "seems" to be the 1993 times that the nested loop spins, as
almost all of the time is spent there.
Pers
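The "over 4.2 seconds" figure follows directly from the numbers quoted above and can be checked with one line of arithmetic:

```python
# 1993 nested-loop iterations at 2.137 ms per trip to bv_books.
per_trip_ms = 2.137
iterations = 1993
total_s = per_trip_ms * iterations / 1000.0
print(f"{total_s:.3f} s")  # ≈ 4.259 s
```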
Duane wrote:
> P.S. I've only just begun using PostgreSQL after having
> used (and still using) DB2 on a mainframe for the past 14
> years. My experience with Unix/Linux is limited to some
> community college classes I've taken but we do have
> a couple of experienced Linux sysadmins on our team.
After reading the replies to this, it is clear that this is a
Lintel-centric question, but I will throw in my experience.
> I am curious if there are any real life production
> quad processor setups running postgresql out there.
Yes. We are running a 24/7 operation on a quad CPU Sun V880.
> Sinc
I have some suggestions based on my anecdotal experience.
1. This is a relatively small DB -- the working set will likely be in
RAM at any moment in time, making read I/O time mostly irrelevant.
2. The killer will be write times -- specifically log writes. Small and
heavily synchronized writes
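The cost of small, heavily synchronized writes can be sketched by comparing per-row commits against one batched transaction. This uses sqlite3 as a stand-in for Pg WAL flushing, an assumption for illustration only; the row count is arbitrary.

```python
# Sketch: per-row commits force a synchronized log flush per row,
# while one transaction amortizes the flush. sqlite3 stands in for
# Pg WAL behavior here; numbers will vary by disk and OS.
import os
import sqlite3
import tempfile
import time

def insert_rows(per_row_commit, n=200):
    """Insert n rows; return elapsed seconds."""
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA synchronous=FULL")  # flush to disk on commit
    conn.execute("CREATE TABLE t (v INTEGER)")
    conn.commit()
    start = time.perf_counter()
    for i in range(n):
        conn.execute("INSERT INTO t VALUES (?)", (i,))
        if per_row_commit:
            conn.commit()  # one synchronized write per row
    conn.commit()          # single flush covers the whole batch
    elapsed = time.perf_counter() - start
    conn.close()
    os.unlink(path)
    return elapsed

if __name__ == "__main__":
    print(f"per-row commits: {insert_rows(True):.3f}s, "
          f"one transaction: {insert_rows(False):.3f}s")
```

On most spinning disks the per-row variant is dramatically slower, which is exactly why log-write latency tends to dominate this kind of workload.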
Six days ago I installed Pg 7.4.1 on Sparc Solaris 8 also. I am hopeful
that we as well can migrate a bunch of our apps from Oracle.
After doing some informal benchmarks and performance testing for the
past week I am becoming more and more impressed with what I see.
I have seen similar results