On Thursday, 23 October 2003 at 01:32, Rob Nagler wrote:
> The concept of vacuuming seems to be problematic. I'm not sure why
> the database simply can't garbage collect incrementally.

AGC is very tricky, especially AGC that involves gigabytes of data on
disk. Incremental garbage collection
Hi
The Postgresql package came from the Redhat v9.0 CDROM.
I have checked the version using psql --version and it showed v7.3.2
The duplication of table names is in the same schema.
How do I check the pg_dump version?
Thank you,
Regards.
Rob Nagler <[EMAIL PROTECTED]> writes:
> Here's the vmstat 5 at a random time:
>   procs                      memory    swap          io     system         cpu
> r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
> 0  0  0 272372  38416  78220 375048   0   3     2
Vivek Khera writes:
> AMI or Adaptec based?
Adaptec, I think. AIC-7899 LVD SCSI is what dmidecode says, and
Red Hat/Adaptec aacraid driver, Aug 18 2003 is what comes up when it
boots. I haven't been able to use the aac utilities with this driver,
however, so it's hard to interrogate the device.
Medora,
> Increasing effective_cache_size to 10000 did it.
That would be 78MB RAM. If you have more than that available, you can
increase it further. Ideally, it should be about 2/3 to 3/4 of available
RAM.
> The query now
> takes 4 secs. I left random_page_cost at the default value of 4.
On Tue, 2003-10-21 at 14:27, Christopher Browne wrote:
> In the last exciting episode, [EMAIL PROTECTED] (Josh Berkus) wrote:
> > So what is the ceiling on 32-bit processors for RAM? Most of the
> > 64-bit vendors are pushing Athlon64 and G5 as "breaking the 4GB
> > barrier", and even I can do the
Josh,
> > So why were the indices not used before when they yield
> > a better plan?
>
> Your .conf settings, most likely. I'd lower your
> random_page_cost and raise
> your effective_cache_size.
Increasing effective_cache_size to 10000 did it. The query now
takes 4 secs. I left random_page_cost at the default value of 4.
Medora,
> So why were the indices not used before when they yield a better plan?
Your .conf settings, most likely. I'd lower your random_page_cost and raise
your effective_cache_size.
--
-Josh Berkus
Aglio Database Solutions
San Francisco
>
> Medora,
>
> > I'm using pg 7.3.4 to do a select involving a join on 2 tables.
> > The query is taking 15 secs which seems extreme to me considering
> > the indices that exist on the two tables. EXPLAIN ANALYZE shows
> > that the indices aren't being used. I've done VACUUM ANALYZE on the
> > db with no change in results.
> "RN" == Rob Nagler <[EMAIL PROTECTED]> writes:
RN> This solution doesn't really fix the fact that VACUUM consumes the
RN> disk while it is running. I want to avoid the erratic performance on
RN> my web server when VACUUM is running.
What's the disk utilization prior to running vacuum? If
> "RN" == Rob Nagler <[EMAIL PROTECTED]> writes:
RN> Vendor: DELL Model: PERCRAID Mirror Rev: V1.0
RN> Type: Direct-AccessANSI SCSI revision: 02
AMI or Adaptec based?
If AMI, make sure it has write-back cache enabled (and you have
battery backup!), and disable
The suggestion that we are saturating the memory bus
makes a lot of sense. We originally started with a
low setting for shared buffers and resized it to fit
all our tables (since we have memory to burn). That
improved stand alone performance but not concurrent
performance - this would explain tha
Josh Berkus <[EMAIL PROTECTED]> writes:
>> We are running with shared buffers large enough to hold the
>> entire database
> Which is bad. This is not what shared buffers are for. See:
> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
In fact, that may be the cause of the performan
Rhaoni,
> First of all, thanks for your attention and fast answer. The system
> really bogs down when I'm doing a whole series of these updates.
That would be consistent with a single-disk problem.
> Take a
> look at my postgresql.conf. I'm afraid of putting some parameters wrong (
> too high
On Wed, 22 Oct 2003, Hilary Forbes wrote:
> If I have a fixed amount of money to spend as a general rule is it
> better to buy one processor and lots of memory or two processors and
> less memory for a system which is transactional based (in this case
> it's handling reservations). I realise t
Rhaoni,
> Total runtime: 3.56 msec
> (4 rows)
Well, from that figure it's not the query that's holding you up.
Is it a whole series of these updates that bogs the system down, or just
one? If the former, then I'm afraid that it's your disk
that's to blame ... large
Medora,
> I'm using pg 7.3.4 to do a select involving a join on 2 tables.
> The query is taking 15 secs which seems extreme to me considering
> the indices that exist on the two tables. EXPLAIN ANALYZE shows
> that the indices aren't being used. I've done VACUUM ANALYZE on the
> db with no change in results.
Hi List;
Here follow the update query, the EXPLAIN ANALYZE of it, my postgresql.conf, and
my db configuration. This is my first PostgreSQL DB, so I would like to know if
its performance is normal!
If there is some postgresql.conf parameter that you think will optimize the
database, just tell me
Simon,
> The issue is that no matter how much query load we throw at our server it
> seems almost impossible to get it to utilize more than 50% cpu on a
> dual-cpu box. For a single connection we can use all of one CPU, but
> multiple connections fail to increase the overall utilization (although
NEC,
> After a few weeks of usage, when we do a \d at the sql prompt, there was a
> duplicate object name, ie it can be a duplicate row of index or table.
> When we do a \d table_name, it will show a duplication of column names
> inside the table.
I think the version of PSQL and pg_dump which you
Heya
On Wed, 2003-10-22 at 01:13, Alexander Priem wrote:
> So I guess the PERC4/Di RAID controller is pretty good. It seems that
> RedHat9 supports it out-of-the-box (driver 1.18f), but I gather from the
> sites mentioned before that upgrading this driver to 1.18i would be
> better...
Actually up
Folks,
I'm hoping someone can give me some pointers to resolving an issue with postgres and its ability to utilize multiple CPUs effectively.
The issue is that no matter how much query load we throw at our server it seems almost impossible to get it to utilize more than 50% cpu on a dual-cpu
I'm using pg 7.3.4 to do a select involving a join on 2 tables.
The query is taking 15 secs which seems extreme to me considering
the indices that exist on the two tables. EXPLAIN ANALYZE shows
that the indices aren't being used. I've done VACUUM ANALYZE on the
db with no change in results.
[EMAIL PROTECTED] writes:
> Currently we are running Postgresql v7.3.2 on Redhat Linux OS v9.0. We have
> Windows2000 client machines inserting records into the Postgresql tables
> via ODBC.
> After a few weeks of usage, when we do a \d at the sql prompt, there was a
> duplicate object name, ie it
Hi
Currently we are running Postgresql v7.3.2 on Redhat Linux OS v9.0. We have
Windows2000 client machines inserting records into the Postgresql tables
via ODBC.
After a few weeks of usage, when we do a \d at the sql prompt, there was a
duplicate object name, ie it can be a duplicate row of index
Hilary Forbes wrote:
> If I have a fixed amount of money to spend as a general rule
> is it better to buy one processor and lots of memory or two
> processors and less memory for a system which is transactional
> based (in this case it's handling reservations). I realise the
> answer will be a general
If I have a fixed amount of money to spend as a general rule is it better to buy one
processor and lots of memory or two processors and less memory for a system which is
transactional based (in this case it's handling reservations). I realise the answer
will be a generalised one but all the per
So I guess the PERC4/Di RAID controller is pretty good. It seems that
RedHat9 supports it out-of-the-box (driver 1.18f), but I gather from the
sites mentioned before that upgrading this driver to 1.18i would be
better...