On 8/8/07, Mark Makarowsky [EMAIL PROTECTED] wrote:
Can you provide more detail on what you mean by your
two suggestions below:
Yeah, I've used vertical partitioning very
successfully in the past, though I've never done it
for just a single field. I'll typically leave the few
most common
On Thu, Aug 09, 2007 at 06:04:09PM +0530, Merlin Moncure wrote:
keep an eye out for the HOT feature, which will hopefully make it into 8.3 and
should greatly reduce the penalty for (small) updates in many cases.
Is there an overview somewhere about how this feature works and what it
is expected to do? There
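For readers finding this thread later: HOT (Heap-Only Tuples) did ship in 8.3. When an UPDATE touches no indexed column and there is free space on the same heap page, the new row version can be placed there without any index maintenance. A minimal sketch of preparing for that, assuming the "valley" table from this thread (the fillfactor value is an illustrative choice, not a recommendation from the thread):

```sql
-- Reserve ~10% free space on each heap page so same-page updates have room.
-- Note: existing pages keep their current layout; only newly written pages
-- honor the setting until the table is rewritten (e.g. by CLUSTER).
ALTER TABLE valley SET (fillfactor = 90);
```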
On 8/9/07, Michael Stone [EMAIL PROTECTED] wrote:
On Thu, Aug 09, 2007 at 06:04:09PM +0530, Merlin Moncure wrote:
keep an eye out for the HOT feature, which will hopefully make it into 8.3 and
should greatly reduce the penalty for (small) updates in many cases.
Is there an overview somewhere about how this
On Thu, Aug 09, 2007 at 06:04:09PM +0530, Merlin Moncure wrote:
That said, 'update' is the slowest operation for postgresql relative
to other databases that are not MVCC.
Actually, it depends on how you do MVCC. In Oracle, DELETE is actually
the most expensive operation, because they have to
Mark,
You are not alone: when people post their system
specifications, CPU and memory are always listed while the
disk I/O subsystem invariably is not. This is a very
disk-intensive operation and I suspect that your disk system is
maxed out. If you want it faster, you will need more
Hi,
update valley set test='this is a test'
Such a query updates ALL of the records in the table.
5 million records * 47 fields - that can be several gigabytes of data.
The system has to scan those gigabytes to change every record. This is a huge
task. Try vacuuming and see if it helps. It can
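The rough size estimate above ("several gigabytes") can be checked directly instead of guessed; a sketch, assuming the table is named "valley" as in the original post (these size functions exist in 8.1 and later):

```sql
-- How big is the table actually on disk?
SELECT pg_size_pretty(pg_relation_size('valley'));        -- heap only
SELECT pg_size_pretty(pg_total_relation_size('valley'));  -- heap + indexes + TOAST
```

Running this before and after the UPDATE also makes the MVCC bloat discussed below visible: a full-table update roughly doubles the heap until VACUUM can reclaim the dead row versions.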
Erik Jones wrote:
Decibel! wrote:
I should mention that if you can handle splitting the
update into multiple transactions, that will help a
lot since it means you won't be doubling the size of
the table.
As I mentioned above, when you do an update you're actually inserting a
new row and
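Splitting the update into multiple transactions, as suggested above, might look like the following sketch. It assumes an ordered key exists; the column name "id" and the batch size are illustrative assumptions, since the thread never shows the real table definition:

```sql
-- Illustrative sketch: update in key ranges, one transaction per batch,
-- vacuuming between batches so dead row versions can be reclaimed and
-- the table does not double in size.
BEGIN;
UPDATE valley SET test = 'this is a test' WHERE id >= 0 AND id < 500000;
COMMIT;
VACUUM valley;
-- ...repeat for the next range: id >= 500000 AND id < 1000000, and so on.
```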
On Aug 8, 2007, at 3:00 AM, Heikki Linnakangas wrote:
Erik Jones wrote:
Decibel! wrote:
I should mention that if you can handle splitting the
update into multiple transactions, that will help a
lot since it means you won't be doubling the size of
the table.
As I mentioned above, when you do
I have a table with 4,889,820 records in it. The
table also has 47 fields. I'm having problems with
update performance. Just as a test, I issued the
following update:
update valley set test='this is a test'
This took 905641 ms. Isn't that kind of slow? There
aren't any indexes, triggers,
Mark Makarowsky wrote:
I have a table with 4,889,820 records in it. The
table also has 47 fields. I'm having problems with
update performance. Just as a test, I issued the
following update:
update valley set test='this is a test'
This took 905641 ms. Isn't that kind of slow?
The limiting
On 8/7/07, Mark Makarowsky [EMAIL PROTECTED] wrote:
I have a table with 4,889,820 records in it. The
table also has 47 fields. I'm having problems with
update performance. Just as a test, I issued the
following update:
update valley set test='this is a test'
This took 905641 ms. Isn't
On Tuesday 07 August 2007 05:58, Mark Makarowsky
[EMAIL PROTECTED] wrote:
I have a table with 4,889,820 records in it. The
table also has 47 fields. I'm having problems with
update performance. Just as a test, I issued the
following update:
update valley set test='this is a test'
This
On Tue, Aug 07, 2007 at 02:33:19PM +0100, Richard Huxton wrote:
Mark Makarowsky wrote:
I have a table with 4,889,820 records in it. The
table also has 47 fields. I'm having problems with
update performance. Just as a test, I issued the
following update:
update valley set test='this is a
[EMAIL PROTECTED] (Mark Makarowsky) writes:
I have a table with 4,889,820 records in it. The
table also has 47 fields. I'm having problems with
update performance. Just as a test, I issued the
following update:
update valley set test='this is a test'
This took 905641 ms. Isn't that
On 8/7/07, Decibel! [EMAIL PROTECTED] wrote:
On Tue, Aug 07, 2007 at 02:33:19PM +0100, Richard Huxton wrote:
Mark Makarowsky wrote:
I have a table with 4,889,820 records in it. The
table also has 47 fields. I'm having problems with
update performance. Just as a test, I issued the
On Tue, Aug 07, 2007 at 02:36:18PM -0500, Scott Marlowe wrote:
On 8/7/07, Decibel! [EMAIL PROTECTED] wrote:
On Tue, Aug 07, 2007 at 02:33:19PM +0100, Richard Huxton wrote:
Mark Makarowsky wrote:
I have a table with 4,889,820 records in it. The
table also has 47 fields. I'm having
Can you provide more detail on what you mean by your
two suggestions below:
Yeah, I've used vertical partitioning very
successfully in the past, though I've never done it
for just a single field. I'll typically leave the few
most common fields in the main table and pull
everything else into a
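A minimal sketch of that vertical-partitioning layout, with hypothetical names (the thread never shows the real columns): the frequently updated field moves to a narrow side table joined back by primary key, so updates rewrite only narrow rows instead of 47-column ones.

```sql
-- "valley_hot" and the "id" key are illustrative assumptions.
CREATE TABLE valley_hot (
    id   integer PRIMARY KEY REFERENCES valley (id),
    test text
);

-- Updates now rewrite only the narrow rows:
UPDATE valley_hot SET test = 'this is a test';

-- Reads join the pieces back together:
SELECT v.*, h.test
FROM valley v
JOIN valley_hot h ON h.id = v.id;
```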
I am trying to update a field in one table with a
field from another table like:
update co set
firest_id=fco.firest_id,fire_dist=fco.fire_dist from
fco where co.xno=fco.xno
Table co has 384964 records
Table fco has 383654 records
The xno fields in both tables are indexed but they
don't seem to
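To see whether those indexes are used at all, the plan for exactly this statement can be captured; a sketch (note that EXPLAIN ANALYZE executes the statement for real, hence the rollback):

```sql
BEGIN;
EXPLAIN ANALYZE
UPDATE co SET firest_id = fco.firest_id, fire_dist = fco.fire_dist
FROM fco
WHERE co.xno = fco.xno;
ROLLBACK;
```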
Mark Makarowsky [EMAIL PROTECTED] writes:
Hash Join  (cost=15590.22..172167.03 rows=383654 width=215) (actual time=1473.297..43032.178 rows=383654 loops=1)
  Hash Cond: (co.xno = fco.xno)
  ->  Seq Scan on co  (cost=0.00..123712.64 rows=384964 width=195) (actual time=440.196..37366.682
The version is:
PostgreSQL 8.2.4 on i686-pc-mingw32, compiled by GCC
gcc.exe (GCC) 3.4.2 (mingw-special)
Here is the table definition for co and fco. There
aren't any rules constraints, triggers, etc. on the
tables. Only an index on each table for the xno
field. Any other thoughts?
CREATE