Thank you all very much. After running CLUSTER on the USERS table, the speed is now very good. I have also understood the importance of VACUUM and ANALYZE.
Once again, thank you all. You guys rock.
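For anyone finding this thread later, the maintenance commands being discussed can be sketched as follows (a minimal sketch; the index name users_pkey is an assumption, use whichever index you want the table ordered by):

```sql
-- CLUSTER physically rewrites the table in the order of the given index,
-- compacting it and discarding dead row versions in the process.
-- (Note: CLUSTER takes an exclusive lock on the table while it runs.)
CLUSTER users USING users_pkey;

-- Refresh planner statistics after the rewrite.
ANALYZE users;

-- Routine maintenance between rewrites: reclaim space from dead rows.
VACUUM users;
```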
--- On Tue, 24/6/08, [EMAIL PROTECTED] wrote:
On Jun 24, 2008, at 9:12 PM, jay wrote:
I've a table with about 34601755 rows. When I execute 'update msg_table set type=0;' it is very slow; it has cost several hours and is still not complete.
Why is PostgreSQL so slow? Is it the PG MVCC problem?
But I tried it on MySQL, the same table and
Hi,
Could anybody comment on the postgres-pr driver from a performance point of view? Is it faster than others?
What other options are available to access PostgreSQL from Ruby / Ruby on Rails, and which of them is the most popular, or better?
Regards,
Amol
Hi,
Could anybody comment on the postgres-pr driver from a performance point of view? Is it faster than others?
I guess a more appropriate place to check for Ruby/Rails Postgres drivers would be rubyforge.org itself. There is a libpq-based Postgres driver available there (ruby-postgres)
jay wrote:
I've a table with about 34601755 rows. When I execute 'update msg_table set type=0;' it is very slow; it has cost several hours and is still not complete.
Why is PostgreSQL so slow? Is it the PG MVCC problem?
Possibly. Because of MVCC, a full-table update will actually create a new
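The MVCC effect described in this (truncated) reply, a new on-disk version of every updated row, can be observed directly. A hedged sketch using standard PostgreSQL size functions (the exact numbers will vary):

```sql
SELECT pg_size_pretty(pg_relation_size('msg_table'));  -- size before

UPDATE msg_table SET type = 0;

-- Every row now exists twice on disk (old dead version plus new version),
-- so the table is roughly double its previous size until VACUUM marks
-- the dead space reusable.
SELECT pg_size_pretty(pg_relation_size('msg_table'));
VACUUM msg_table;
```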
Hi list,
We have a database with lots of small simultaneous writes and reads
(millions every day) and are looking at buying good hardware for this.
What are your suggestions? What we are currently looking at is:
Dual Quad Core Intel
8 - 12 GB RAM
10 disks total.
4 x 146 GB SAS disk in
We have a database with lots of small simultaneous writes and reads
(millions every day) and are looking at buying good hardware for this.
What are your suggestions? What we are currently looking at is:
Dual Quad Core Intel
8 - 12 GB RAM
10 disks total.
4 x 146 GB SAS disk in RAID 1+0
On Wed, 25 Jun 2008, Henrik wrote:
What are your suggestions? What we are currently looking at is:
Dual Quad Core Intel
8 - 12 GB RAM
More RAM would be helpful. It's not that expensive, compared to the rest
of your system.
10 disks total.
4 x 146 GB SAS disk in RAID 1+0 for database
6 x
On 25 Jun 2008, at 12.56, Claus Guttesen wrote:
We have a database with lots of small simultaneous writes and reads
(millions every day) and are looking at buying good hardware for this.
What are your suggestions? What we are currently looking at is:
Dual Quad Core Intel
8 - 12 GB RAM
10
On 25 Jun 2008, at 13.15, Matthew Wakeling wrote:
On Wed, 25 Jun 2008, Henrik wrote:
What are your suggestions? What we are currently looking at is:
Dual Quad Core Intel
8 - 12 GB RAM
More RAM would be helpful. It's not that expensive, compared to the
rest of your system.
True, as long as
If you have a good RAID controller with BBU cache, then there's no point
splitting the discs into two sets. You're only creating an opportunity to
under-utilise the system. I'd get ten identical discs and put them in a
single array, probably RAID 10.
OK, that's good to know. Really want to
On Wed, 25 Jun 2008, Henrik wrote:
Would you turn off fsync if you had a controller with BBU? =)
No, certainly not. Fsync is what makes the data move from the volatile OS
cache to the non-volatile disc system. It'll just be a lot quicker on a
controller with a BBU cache, because it won't
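In configuration terms, the advice is simply to leave the default alone; the BBU cache is what makes fsync cheap, not a reason to disable it. An illustrative postgresql.conf fragment (not tuning advice):

```
# postgresql.conf
fsync = on                # keep on; a BBU cache makes this fast, not unnecessary
#wal_sync_method = fsync  # platform-dependent default; change only after benchmarking
```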
On Wed, 25 Jun 2008, Merlin Moncure wrote:
Has anyone done some benchmarks between hardware RAID vs Linux MD software
RAID?
I have here:
http://merlinmoncure.blogspot.com/2007/08/following-are-results-of-our-testing-of.html
The upshot is I don't really see a difference in performance.
The
Also sprach Matthew Wakeling:
Has anyone done some benchmarks between hardware RAID vs Linux MD software
RAID?
...
The upshot is I don't really see a difference in performance.
The main difference is that you can get hardware RAID with
battery-backed-up cache, which means small writes
On Wed, 25 Jun 2008, jay wrote:
Why is PostgreSQL so slow? Is it the PG MVCC problem?
Update is extremely intensive not just because of MVCC, but because a new version of every row is being written out. This forces both lots of database commits and lots of complicated disk I/O to
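A common workaround that follows from this is to update in slices, with a VACUUM between slices so dead row versions are reclaimed as you go. A hedged sketch (it assumes msg_table has an integer primary key named id, which is an assumption, not something stated in the thread):

```sql
-- Slice 1: update a bounded key range, then reclaim the dead versions.
UPDATE msg_table SET type = 0 WHERE id BETWEEN 1 AND 1000000;
VACUUM msg_table;

-- Slice 2, and so on until the whole key space is covered.
UPDATE msg_table SET type = 0 WHERE id BETWEEN 1000001 AND 2000000;
VACUUM msg_table;
```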
On Wed, 25 Jun 2008, Henrik wrote:
4 x 146 GB SAS disk in RAID 1+0 for database
6 x 750 GB SATA disks in RAID 1+0 or RAID 5 for OS and transaction logs.
The transaction logs are not that big, and there's very little value to
striping them across even two disks. You should just get more SAS
On Wed, 25 Jun 2008, Peter T. Breuer wrote:
You can put the log/bitmap wherever you want in software raid, including
on a battery-backed local ram disk if you feel so inclined. So there is
no intrinsic advantage to be gained there at all.
You are technically correct but this is irrelevant.
On Wed, Jun 25, 2008 at 11:24 AM, Greg Smith [EMAIL PROTECTED] wrote:
SMART doesn't detect 100% of drive failures in advance, but you'd be silly to set up a database system where you don't get to take advantage of the ~50% it does catch before you lose any data.
Can't argue with that one.
--
On Wed, 2008-06-25 at 11:30 -0400, Jonah H. Harris wrote:
On Wed, Jun 25, 2008 at 11:24 AM, Greg Smith [EMAIL PROTECTED] wrote:
SMART doesn't detect 100% of drive failures in advance, but you'd be silly to set up a database system where you don't get to take advantage of the ~50% it does
On Wed, 25 Jun 2008, Greg Smith wrote:
A firewire-attached log device is an extremely bad idea.
Anyone have experience with IDE, SATA, or SAS-connected flash devices like
the Samsung MCBQE32G5MPP-0VA? I mean, it seems lovely - 32GB, at a
transfer rate of 100MB/s, and doesn't degrade much in
On Wed, 2008-06-25 at 09:53 -0600, Scott Marlowe wrote:
On Wed, Jun 25, 2008 at 5:05 AM, Adrian Moisey
[EMAIL PROTECTED] wrote:
Hi
I'm currently having a problem with a well-known very large server manufacturer who shall remain unnamed and their semi-custom RAID controller firmware not
On Wed, Jun 25, 2008 at 11:55 AM, Joshua D. Drake [EMAIL PROTECTED] wrote:
On Wed, 2008-06-25 at 09:53 -0600, Scott Marlowe wrote:
On Wed, Jun 25, 2008 at 5:05 AM, Adrian Moisey
[EMAIL PROTECTED] wrote:
I'm currently having a problem with a well-known very large server manufacturer who shall
On Wed, Jun 25, 2008 at 9:03 AM, Matthew Wakeling [EMAIL PROTECTED] wrote:
On Wed, 25 Jun 2008, Merlin Moncure wrote:
Has anyone done some benchmarks between hardware RAID vs Linux MD
software
RAID?
I have here:
On Wed, Jun 25, 2008 at 01:35:49PM -0400, Merlin Moncure wrote:
experiences, I'm starting to be more partial to Linux distributions with faster-moving kernels, mainly because I trust the kernel drivers more than the vendor-provided drivers.
While I have some experience that agrees with this,
On Wed, Jun 25, 2008 at 01:07:25PM -0500, Kevin Grittner wrote:
It doesn't have to be free software to cut that way. I've actually
found the free software to waste less of my time.
No question. But one of the unfortunate facts of the
no-charge-for-licenses world is that many people
On Wed, 25 Jun 2008, Peter T. Breuer wrote:
I refrained from saying in my reply that I would set up a firewire-based
link to ram in a spare old portable (which comes with a battery) if I
wanted to do this cheaply.
Maybe, but this is kind of a weird setup. Not many people are going to
run a
On Wed, 25 Jun 2008, Merlin Moncure wrote:
So, based on this and other experiences, I'm starting to be more partial to Linux distributions with faster-moving kernels, mainly because I trust the kernel drivers more than the vendor-provided drivers.
Depends on how fast. I find it takes a
The query optimizer fails to use a hash aggregate most of the time. This is inconsistent behavior -- the queries below were happily using HashAggregate on a previous pg_restore of the same data.
On one particular class of tables this is especially painful. The example
table has 25 million
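One common reason the planner abandons HashAggregate is that the estimated hash table would exceed work_mem, so it falls back to a sort-based GroupAggregate. A hedged experiment (the table and column names are placeholders, and the work_mem value is purely illustrative):

```sql
EXPLAIN SELECT some_col, count(*) FROM example_table GROUP BY some_col;

-- Give the hash table more room for this session only.
SET work_mem = '256MB';
EXPLAIN SELECT some_col, count(*) FROM example_table GROUP BY some_col;
-- If memory was the constraint, the plan should now show HashAggregate
-- instead of Sort plus GroupAggregate.
```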
This seems like a bug to me, but it shows up as a performance problem. Since
the column being queried is an integer, the second query (see below) can't
possibly match, yet Postgres uses a typecast, forcing a full table scan for a
value that can't possibly be in the table.
The application
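The mechanism being described can be illustrated minimally. In the thread the literal was apparently a run of 9's too large for the integer range; any literal that is not of the column's type shows the same effect, because the integer column gets cast and the index becomes unusable. A hedged sketch against the version table quoted below:

```sql
-- Integer literal: an index on version_id can be used.
EXPLAIN SELECT version_id, parent_id FROM version WHERE version_id = 999;

-- Non-integer literal: version_id is cast to numeric, the index is bypassed,
-- and the whole table is scanned even though no integer can equal 999.5.
EXPLAIN SELECT version_id, parent_id FROM version WHERE version_id = 999.5;
```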
Craig James [EMAIL PROTECTED] writes:
This seems like a bug to me, but it shows up as a performance problem.
emol_warehouse_1= explain analyze select version_id, parent_id from version
where version_id = 999;
If you actually *need* so many 9's here as to force it out