is
specialist, and expensive.
Matthew
--
Nog: Look! They've made me into an ensign!
O'Brien: I didn't know things were going so badly.
Nog: Frightening, isn't it?
a bigger
flash device? The "too many writes wear it out" argument is mostly not
true nowadays anyway.
Matthew
--
Don't worry! The world can't end today because it's already tomorrow
in Australia.
correctly will help the planner in this case.
Setting work_mem higher will improve the performance of the sort in the
second case.
Of course, what others have said about trying to avoid large offsets is
good advice. You don't actually need a unique index, but it makes it
simpler if you do.
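To make both suggestions concrete, a minimal sketch (table and column names are hypothetical, and the work_mem value is illustrative): raise work_mem for the session so the sort fits in memory, and replace a large OFFSET with a keyset condition on the unique column:

  SET work_mem = '64MB';  -- per-session, illustrative value

  -- instead of: SELECT * FROM items ORDER BY id LIMIT 50 OFFSET 100000;
  SELECT * FROM items
  WHERE id > 100050        -- the last id seen on the previous page
  ORDER BY id
  LIMIT 50;

The keyset form lets the index satisfy the predicate directly instead of generating and discarding the first hundred thousand rows.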
Matthew
dates. And it'll also pick up
other write activity on the system...
Of course. My point was that 64MB should be quite sufficient if most
accesses are reads. We have a few machines here with 2GB BBU caches as we
do LOTS of writes - that sort of thing probably isn't necessary here.
vely over-providing or it may be woefully
inadequate, but this machine looks like a fairly good buy for the price.
Matthew
--
No trees were killed in the sending of this message. However a large
number of electrons were terribly inconvenienced.
be able
to use the same algorithm to work out where the boundary is, therefore
they'll get the same result. No need to pass back information.
Matthew
--
There is something in the lecture course which may not have been visible so
far, which is reality --
change to the backend, but it would only work for dump restores, and would
require the client to be clever. I'm all for allowing this kind of
optimisation while writing normally to the database, and for not requiring
the client to think too hard.
Matthew
--
All of this sounds mildly turg
writing the data to the WAL, just without the
data bit.
Matthew
--
Failure is not an option. It comes bundled with your Microsoft product.
-- Ferenc Mantfeld
y the first few rows optimized and all the rest not.
Why would you need to lock the table?
Matthew
--
Picard: I was just paid a visit from Q.
Riker: Q! Any idea what he's up to?
Picard: No. He said he wanted to be "nice" to me.
Riker: I'll alert the crew.
On Tue, 5 Feb 2008, Richard Huxton wrote:
In the case of a bulk upload to an empty table (or partition?) could you not
optimise the WAL away?
Argh. If I hadn't had to retype my email, I would have suggested that
before you.
;)
Matthew
--
Unfortunately, university regulations pro
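For reference, a sketch of what this optimisation looks like where it already exists (hypothetical names; with WAL archiving off, newer releases skip WAL for data loaded into a table created or truncated in the same transaction, because crash recovery can simply discard the whole file):

  BEGIN;
  CREATE TABLE new_orders (LIKE orders INCLUDING DEFAULTS);
  COPY new_orders FROM '/tmp/orders.csv' WITH CSV;
  COMMIT;  -- the data file is fsynced at commit instead of being WAL-logged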
checkpointed,
no work would be required.
This would improve the performance of database restores and large writes
which expand the table's data file. So, would it work?
Matthew
--
If pro is the opposite of con, what is the opposite of progress?
--
Thank you very much for your time.
Regards
Matthew
Андрей Репко wrote:
Hello Matthew,
Monday, January 28, 2008, 2:02:26 PM, you wrote:
ML> I have a query which runs pretty quick ( 0.82ms) but when I put it
ML> inside a stored procedure it takes 10 times as long (11.229ms). Is
ML> this what
this to
access the index and so was returning too many rows and then filtering
them. It looks like I still have to take a hit of 2ms or so to call the
function but I guess that is not unreasonable.
Thanks for your help and to everyone who answered this thread.
Regards
Matthew.
Euler Tavei
ce mailing list.
Matthew
Hi Tom,
Is there any way to work out what plan the query is using in side the
function? I think I have a similar problem with a query taking much
longer from inside a function than it does as a select statement.
Regards
Matthew
Tom Lane wrote:
Claire McLister <[EMAIL PROTECTED]>
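One way to answer that question (a sketch; the column and value are hypothetical): a plpgsql function plans its statements with the arguments as parameters, so PREPARE plus EXPLAIN EXECUTE reproduces the generic plan the function sees:

  PREPARE q(varchar) AS
      SELECT * FROM t_market_price_panel WHERE code = $1;
  EXPLAIN ANALYZE EXECUTE q('ABC');
  DEALLOCATE q;

If that plan differs from the one you get with the literal value inlined, the parameterisation is usually the whole story.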
your time. I'm using Thunderbird, maybe I need to upgrade.
On Jan 28, 2008 9:27 AM, Matthew Lunnon <[EMAIL PROTECTED]> wrote:
Scott Marlowe wrote:
On Jan 28, 2008 5:41 AM, Matthew Lunnon <[EMAIL PROTECTED]> wrote:
default_statistics_target = 1000
That's very high
t=0.00..0.27 rows=1 width=81) (actual time=0.003..0.004 rows=1
loops=189)"
" Index Cond: (mgpr.market_group_id = mg.market_group_id)"
" Filter: (live <> 'X'::bpchar)"
" -> Seq Scan on market_group_relation mgr (cost=0.00.
Hi Scott,
Thanks for your time
Regards
Matthew
Scott Marlowe wrote:
On Jan 28, 2008 5:41 AM, Matthew Lunnon <[EMAIL PROTECTED]> wrote:
Hi
I am investigating migrating from postgres 7.4.3 to postgres 8.2.6 but
although the performance in postgres 8.2.6 seems to be generally better
there ar
current cut-off of
signalling everyone when the queue really is full. The hope would be that
that would never happen.
Matthew
Ahh, sorry, I have been too aggressive with my cutting, I am running
8.2.6 and the function is below.
Thanks.
Matthew
CREATE OR REPLACE FUNCTION sp_get_price_panel_id(int4, "varchar",
"varchar", "varchar", bpchar)
RETURNS SETOF t_market_price_panel AS
$BODY$
maintenance_work_mem = 100MB
effective_cache_size = 2048MB
default_statistics_target = 1000
Thanks for any help.
Regards
Matthew.
ms to run on 8.2.6 and 2.332 ms to run on 7.4.3.
Thanks in advance for any help.
Regards
Matthew
8.2.6
shared_buffers = 500MB
work_mem = 10MB
maintenance_work_mem = 100MB
effective_cache_size = 2048MB
default_statistics_target = 1000
7.4.3
shared_buffers = 51200
sort_mem = 10240
vacuum_mem = 81920
, but
Greg is correct - you only need to put the WAL on a cached disc system.
That'd be quite a bit cheaper, I'd imagine.
Another case of that small SSD drive being useful, I think.
Matthew
that helps,
Matthew
the write
performance of normal RAM, without wasting space, and could
(theoretically) be pretty cheap, and would improve the transaction speed
of Postgres significantly.
If someone doesn't already make one, they should!
Matthew
On Fri, 14 Dec 2007, Tom Lane wrote:
> Matthew <[EMAIL PROTECTED]> writes:
> > Interesting thread. Now, I know absolutely nothing about how the data is
> > stored, but it strikes me as being non-optimal that every single block on
> > the disc needs to be written again ju
ut into a separate bitmap stored somewhere else? That
would mean the (relatively small) amount of data being written could be
written in a small sequential write to the disc, rather than very sparsely
over the whole table.
Matthew
--
If you let your happiness depend upon how somebody else feels about
Hi Sven,
Does this mean that one option I have is to use a multi-core Intel-based
server instead of an AMD-based server?
Matthew
Sven Geisler wrote:
Hi Matthew,
I remember that I also had an issue with AMD Opterons before Pg 8.1
There is a specific Opteron behaviour on shared memory locks
some overflow here. But some kind of limiting of connections is
probably required.
Thanks
Matthew
Sven Geisler wrote:
Hi Matthew,
The context switching isn't the issue. This is an indicator which is
useful to identify your problem.
What kind of application are you running? Can you limi
Hi Sven,
yes the patch would be great if you could send it to me, we have already
had to compile postgres to up the number of function parameters from 32
to 64.
Meanwhile I will try and persuade my colleagues to consider the upgrade
option.
Thanks
Matthew
Sven Geisler wrote:
Hi Matthew
that number.
Thanks
Matthew.
Claus Guttesen wrote:
I have a 4 * dual core 64bit AMD OPTERON server with 16G of RAM, running
postgres 7.4.3. This has been recompiled on the server for 64 stored
procedure parameters (I assume this makes postgres 64 bit but am not
sure). When the server gets under
Ah I was afraid of that. Maybe I'll have to come out of the dark ages.
Matthew
Steinar H. Gunderson wrote:
On Wed, Dec 12, 2007 at 10:16:43AM +0000, Matthew Lunnon wrote:
Does anyone have any ideas what my bottleneck might be and what I can do
about it?
Your bottleneck is tha
take a closer look at this
Thanks
Matthew
Sven Geisler wrote:
Hi Matthew,
I know exactly what you experience.
We had a 4-way DC Opteron and Pg 7.4 too.
You should monitor context switches.
First suggestion: upgrade to 8.2.5, because the scale-up is much better with 8.2.
You need to limi
anyone have any ideas what my bottleneck might be and what I can
do about it?
Thanks for any help.
Matthew.
On Thu, 6 Dec 2007, Tom Lane wrote:
> Matthew <[EMAIL PROTECTED]> writes:
> > ... For this query, Postgres would perform a nested loop,
> > iterating over all rows in the small table, and doing a hundred index
> > lookups in the big table. This completed very quickly. Ho
continue to the last
possible hit. This query completed quickly, as the min and max could be
answered quickly by the indexes.
Still, it's a pity Postgres couldn't work that out for itself, having all
the information present in its statistics and indexes. AIUI the planner
doesn't peek i
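A sketch of the index-answered boundary lookup being described (hypothetical names): phrased as ORDER BY plus LIMIT, a single index probe answers it even on planners that would not rewrite a bare MIN() or MAX():

  SELECT val FROM big_table ORDER BY val ASC  LIMIT 1;  -- index-probe MIN
  SELECT val FROM big_table ORDER BY val DESC LIMIT 1;  -- index-probe MAX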
l
implementation. I didn't have reiserfs, jfs, or xfs available at that
time, but it would have been really interesting to compare. This is the
system I would have based my indexing thing on.
Matthew
--
Anyone who goes to a psychiatrist ought to have his head examined.
-
that particular job very
fast, so I have done a reasonable amount of research into the topic. In
Java, that is. It would add a little bit more performance for our system.
That wouldn't cover us - we still need to do complex queries with the same
problem, and that'll have to stay in Postgres.
rows with a column containing integer values ranging from zero to
ten thousand. Create an index on that column, analyse it. Then pick a
number between zero and ten thousand, and
"SELECT * FROM table WHERE that_column = the_number_you_picked"
Matthew
--
Experience is
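The experiment described above is easy to reproduce; a minimal sketch (all names hypothetical):

  CREATE TABLE bench AS
      SELECT (random() * 10000)::int AS col
      FROM generate_series(1, 1000000);
  CREATE INDEX bench_col_idx ON bench (col);
  ANALYZE bench;
  EXPLAIN ANALYZE SELECT * FROM bench WHERE col = 1234;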
r of requests, the longest reasonably-expected queue
length for a particular disc gets closer to the number of requests divided
by the number of discs, as the requests get spread more and more evenly
among the discs.
The larger the set of requests, the closer the performance will scale to
the num
On Tue, 4 Dec 2007, Mark Mielke wrote:
> The disk head has less theoretical distance to travel if always moving
> in a single direction instead of randomly seeking back and forth.
True... and false. The head can move pretty quickly, and it also has
rotational latency and settling time to deal with
out of the cache instead of from the disc. Of course, this is from a
simple Java programmer who doesn't know the OS interfaces for this sort of
thing.
Matthew
--
Here we go - the Fairy Godmother redundancy proof.
-- Computer Science Lecturer
e bigger than fit in memory.
This would benefit a lot of multi-table joins, because being able to sort
a table faster would enable merge joins to be used at lower cost. That's
particularly valuable when you're doing a large summary multi-table join
that uses most of the database contents.
Matt
On Tue, 4 Dec 2007, Gregory Stark wrote:
> "Matthew" <[EMAIL PROTECTED]> writes:
>
> > Does Postgres issue requests to each random access in turn, waiting for
> > each one to complete before issuing the next request (in which case the
> > performance will no
ingle disc), or does it use some
clever asynchronous access method to send a queue of random access
requests to the OS that can be distributed among the available discs?
Any knowledgable answers or benchmark proof would be appreciated,
Matthew
--
"To err is human; to really louse things u
sorted by different columns. Just remember to select from the
correct table to get the performance, and to write all changes to all the
tables! Kind of messes up transactions and locking a little though.
Matthew
--
No, C++ isn't equal to D. 'C' is undeclared, so we assume it
On Thu, 29 Nov 2007, Matthew T. O'Connor wrote:
> Matthew wrote:
> > For instance, the normal B-tree index on (a, b) is able to answer queries
> > like "a = 5 AND b > 1" or "a > 5". An R-tree would be able to index these,
> > plus queries like
giving the database the option to
> stop the evaluation earlier (when it reaches the output 500 rows)?
The planner doesn't always get it right. Simple.
Have you done a "VACUUM FULL ANALYSE" recently?
Matthew
--
It is better to keep your mouth closed and let people think you are a f
Matthew wrote:
For instance, the normal B-tree index on (a, b) is able to answer queries
like "a = 5 AND b > 1" or "a > 5". An R-tree would be able to index these,
plus queries like "a > 5 AND b < 1".
Sorry in advance if this is a stupid question, but
moment on my work's system, we call EXPLAIN before queries to find
out if it will take too long. This would improve performance by stopping
us having to pass the query into the query planner twice.
Matthew
--
An ant doesn't have a lot of processing power available to it. I'm not trying
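A sketch of that pre-flight check (names and the cutoff are illustrative): run EXPLAIN without ANALYZE, read the top node's estimated total cost, and refuse to run the real query if it is too high:

  EXPLAIN SELECT * FROM big_table WHERE some_flag;
  --   Seq Scan on big_table  (cost=0.00..123456.78 rows=... width=...)
  -- if the second cost figure exceeds your cutoff, skip the query.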
ies like "a > 5 AND b < 1".
As far as I can see, it is not possible at the moment to write such an
index system for GiST, which is a shame because the actual R-tree
algorithm is very simple. It's just a matter of communicating both values
from the query to the index code.
Matthew
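For what it's worth, a sketch of one way to get both values into a single indexable datum (hypothetical names; assumes a release whose GiST supports points directly): pack (a, b) into a point and turn the two inequalities into a box-containment test:

  CREATE TABLE obs (a int, b int);
  CREATE INDEX obs_ab_gist ON obs USING gist (point(a, b));
  -- "a > 5 AND b < 1" becomes containment in a (huge) box:
  SELECT * FROM obs
  WHERE point(a, b) <@ box(point(5, -1e9), point(1e9, 1));

Box containment is inclusive at the edges, so genuinely open bounds need a recheck condition on top.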
doesn't necessarily mean a full
table scan (for instance if there is a LIMIT), and where an index scan
*does* mean a full table scan (for instance, selecting the whole table and
ordering by an indexed field).
Matthew
--
Existence is a convenient concept to designate all of the f
On Tue, 27 Nov 2007, Steinar H. Gunderson wrote:
> On Tue, Nov 27, 2007 at 06:28:23PM +0000, Matthew wrote:
> > SELECT * FROM table WHERE a > 1 AND b < 4;
>
> This sounds like something an R-tree can do.
I *know* that. However, Postgres (as far as I can see) doesn't pro
able. It's silly. I would rather the query failed than have to wait
for a sequential scan of the entire table."
Yes, that would be really useful, if you have huge tables in your
database.
Matthew
--
Trying to write a program that can't be written is... well, i
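The closest existing knob, as a sketch (the value is illustrative): a statement timeout makes the query fail rather than grind through an unplanned sequential scan:

  SET statement_timeout = '2s';
  SELECT * FROM huge_table WHERE unindexed_col = 42;  -- aborts if misplanned
  RESET statement_timeout;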
b) &[EMAIL PROTECTED] fancy_type(1, 4);
which I don't want to do.
So, has this problem been solved before? Is there an already-existing
index that will speed up my query? Is there a way to get more than one
value into a GiST index?
Thanks,
Matthew
--
If you let your happiness depen
Anyone know what is up with this? I have two queries here which return
the same results, one uses a left outer join to get some data from a
table which may not match a constraint, and one that uses a union to get
the data from each constraint and put them together. The second one
isn't nearly as
I'm getting a SAN together to consolidate my disk space usage for my
servers. It's iscsi based and I'll be pxe booting my servers from it.
The idea is to keep spares on hand for one system (the san) and not have
to worry about spares for each specific storage system on each server.
This also makes
Mark Stosberg wrote:
Let me ask the question a different way: Is simply setting the two
values plus enabling autovacuuming generally enough, or is further
tweaking common place?
No, most people in addition to setting those two GUC settings also lower
the threshold values (there is a fair amoun
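The sort of lowering being described, sketched as postgresql.conf lines (values are illustrative, not recommendations; defaults vary by release):

  autovacuum_vacuum_threshold = 250
  autovacuum_vacuum_scale_factor = 0.1
  autovacuum_analyze_threshold = 125
  autovacuum_analyze_scale_factor = 0.05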
Jeremy Haile wrote:
Also, are other auto-vacuums and auto-analyzes showing up in the
pg_stats table? Maybe it's a stats system issue.
No tables have been vacuumed or analyzed today. I had thought that this
problem was due to my pg_autovacuum changes, but perhaps not. I
restarted Postgre
Jeremy Haile wrote:
I changed the table-specific settings so that the ANALYZE base threshold
was 5000 and the ANALYZE scale factor is 0. According to the documented
formula: analyze threshold = analyze base threshold + analyze scale
factor * number of tuples, I assumed that this would cause the
Frank Wiles wrote:
> On Thu, 4 Jan 2007 15:00:05 -0300
> "Charles A. Landemaine" <[EMAIL PROTECTED]> wrote:
>
>> I'm building an e-mail service that has two requirements: It should
>> index messages on the fly to have lightning search results, and it
>> should be able to handle large amounts of s
Tom Lane wrote:
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
On Wed, 2006-12-13 at 18:36 -0800, Josh Berkus wrote:
Mostly, though, pgbench just gives the I/O system a workout. It's not a
really good general workload.
It also will not utilize all cpus on a many cpu machine. We recently
foun
Joshua D. Drake wrote:
> I agree. I have many people that want to purchase a SAN because someone
> told them that is what they need... Yet they can spend 20% of the cost
> on two external arrays and get incredible performance...
>
> We are seeing great numbers from the following config:
>
> (2) HP
Just a wild guess, but the performance problem sounds like maybe as your
data changes, eventually the planner moves some query from an index scan
to a sequential scan, do you have any details on what queries are taking
so long when things are running slow? You can turn on the GUC var
"log_min_
Steven Flatt wrote:
Here is a potential problem with the auto-vacuum daemon, and I'm
wondering if anyone has considered this. To avoid transaction ID
wraparound, the auto-vacuum daemon will periodically determine that it
needs to do a DB-wide vacuum, which takes a long time. On our system,
i
partition_constraint_column = $2;
$$
LANGUAGE SQL;
Matthew A. Peters
Sr. Software Engineer, Haydrian Corp.
[EMAIL PROTECTED]
(mobile) 425-941-6566
Haydrian Corp.
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 26, 2006 9:15 AM
To: Matthew Peters
Cc: pgsql-performance
:text, 2, 1) = '5'::text) AND
(table_key = 10265512) AND (date_part('year'::text, (event_date)::timestamp
without time zone) = 2005::double precision))
-> Seq Scan on table5_p12
table5 (cost=0.00..2089.95 rows=1 width=
Tobias Brox wrote:
[Matthew T. O'Connor - Wed at 02:33:10PM -0400]
In addition autovacuum respects the work of manual or cron based
vacuums, so if you issue a vacuum right after a daily batch insert /
update, autovacuum won't repeat the work of that manual vacuum.
I was exp
into RAM then they are in-memory as
long as they're actively being used.
Hashtables and GDBM, as far as I know, are only useful for key->value
lookups. However, for this they are *fast*. If you can figure out a way
to make them work I'll bet things speed up.
--
Matthew Nuzum
newz2000 on freenode
45 min to a little over an hour but decreased
the memory usage to something like 45MB (vs dozens or hundreds of MB per
hashtable)
--
Matthew Nuzum
newz2000 on freenode
Marcin Mank wrote:
>> So the question is why on a relatively simple proc and I getting a query
>> performance delta between 3549ms and 7ms?
>
> What version of PG is it?
>
> I had such problems in a pseudo-realtime app I use here with Postgres, and
> they went away when I moved to 8.1 (from 7.4).
Jim C. Nasby wrote:
>
> It can cause a race if another process could be performing those same
> inserts or updates at the same time.
There are inserts and updates running all of the time, but never the
same data. I'm not sure how I can get around this since the queries are
coming from my radius
Jim,
Thanks for the help. I went and looked at that example and I don't see
how it's different than the "INSERT into radutmp_tab" I'm already doing.
Both raise an exception, the only difference is that I'm not doing
anything with it. Perhaps you are talking about the "IF (NOT FOUND)" I
put afte
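For reference, the docs' retry-loop pattern the thread is circling around, in sketch form (column names hypothetical): try the UPDATE, INSERT on a miss, and treat a concurrent unique_violation as a cue to loop:

  CREATE OR REPLACE FUNCTION merge_radutmp(k integer, v text) RETURNS void AS $$
  BEGIN
      LOOP
          UPDATE radutmp_tab SET val = v WHERE key = k;
          IF FOUND THEN
              RETURN;
          END IF;
          BEGIN
              INSERT INTO radutmp_tab (key, val) VALUES (k, v);
              RETURN;
          EXCEPTION WHEN unique_violation THEN
              NULL;  -- another session inserted first; retry the UPDATE
          END;
      END LOOP;
  END;
  $$ LANGUAGE plpgsql;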
List,
I posted a little about this a while back to the general list, but never
really got any where with it so I'll try again, this time with a little
more detail and hopefully someone can send me in the right direction.
Here is the problem, I have a procedure that is called 100k times a day.
Mo
Csaba Nagy wrote:
On Wed, 2006-09-27 at 18:08, Edoardo Ceccarelli wrote:
How can I configure the vacuum to run after the daily batch insert/update?
Check out this:
http://www.postgresql.org/docs/8.1/static/catalog-pg-autovacuum.html
By inserting the right row you can disable autovacuu
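A sketch of the per-table override that page describes, for the 8.1/8.2-era pg_autovacuum catalog (column layout follows the linked docs and changed across releases, so check your version; -1 means "use the default", and enabled = false switches autovacuum off for that one table):

  INSERT INTO pg_autovacuum
      (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
       anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit)
  VALUES ('my_table'::regclass, false, -1, -1, -1, -1, -1, -1);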
Vivek Khera wrote:
On Aug 30, 2006, at 5:29 AM, Matthew Sullivan wrote:
The hardware is a Compaq 6400r with 4G of EDO RAM, 4x500MHz Xeons
and a Compaq RAID 3200 in RAID 5 configuration running across 3
spindles (34G total space).
The OS is FreeBSD 5.4-RELEASE-p14
The PG Version is 8.1.3
All,
Got a little bit of a performance problem I hope that can be resolved.
All the files/info I believe you are going to ask for are here:
http://www.au.sorbs.net/~matthew/postgres/30.8.06/
The odd thing was it originally was fast (1-2 seconds) which is all I
need - the query is a
ing that can grow over time as our needs change. I don't want to
> buy a server only to find out later that it cannot meet our needs with
> future database projects. I have to balance a limited budget, room for
> future performance growth, and current system requirements.
Mikael Carneholm wrote:
This is where a "last_vacuumed" (and "last_analyzed") column in
pg_statistic(?) would come in handy. Each time vacuum or analyze has
finished, update the row for the specific table that was
vacuumed/analyzed with a timestamp in the last_vacuumed/last_analyzed
column. No mo
Gábriel Ákos wrote:
Luke Lonergan wrote:
Gabriel,
On 3/27/06 10:05 AM, "Gábriel Ákos" <[EMAIL PROTECTED]> wrote:
That gave me an idea. I thought that autovacuum is doing it right, but I
issued a vacuum full analyze verbose, and it worked all day.
After that I've tweaked memory settings a
More detail please. It sounds like you're running 8.1 and talking about
the integrated autovacuum, is that correct? Also, what is the message
specifically from pgadmin?
Matt
Antoine wrote:
Hi,
I have enabled the autovacuum daemon, but occasionally still get a
message telling me I need to run vac
)
Pricing is tight-lipped, but searching shows $1.85 /GB. That's close
to $500,000 for 250GB. One report says a person paid $219,000 for 32GB
and 1TB costs "well over $1,000,000."
But they "guarantee the performance."
Too rich for me.
--
Matthew Nuzum
www.bearfruit.org
varlena/GeneralBits/Tidbits/index.php
Notice there's a section on performance tips.
Also, this list works because volunteers who have knowledge and free
time choose to help when they can. If you really need answers ASAP,
there are a few organizations who provide paid support.
at the explain analyze output of the query from pg 7.3,
figure out why the plan is bad and tweak your query to get optimum
performance.
Yes, I agree with the other statements that say, "upgrade to 7.4 or
8.x if you can" but if you can't, then you can still work on it.
--
Matthew
of linux, kernel and all.
>
> No, linux vserver is equivalent to a jail - and they work superbly imho.
> developer.pgadmin.org is just one such VM that I run.
>
> http://www.linux-vserver.org/
>
> Regards, Dave.
I can confirm this. I've been using linux-vserver for years. It
On 3/6/06, Marc G. Fournier <[EMAIL PROTECTED]> wrote:
> On Mon, 6 Mar 2006, Matthew Nuzum wrote:
> > My problem with running PG inside of a VPS was that the VPS used a
> > virtual filesystem... basically, a single file that had been formatted
> > and loop mounted so th
that works on pretty
much any linux OS.
Try it out, tinker with the values and that way you won't have to
guess when making your purchase decision.
[1] http://www.colinux.org/ Cooperative Linux
[2] http://linux-vserver.org/ Linux-vserver project
--
Matthew Nuzum
www.bearfruit.org
Aaron Turner wrote:
So I'm trying to figure out how to optimize my PG install (8.0.3) to
get better performance without dropping one of my indexes.
What about something like this:
begin;
drop index slow_index_name;
update ...;  -- the bulk update itself was not quoted
create index slow_index_name on the_table (the_column);  -- hypothetical table/column
commit;
vacuum;
Matt
You'd probably want to start with the GDBM technique unless
you have a ton of available ram.
You might interpret this as being a knock against PostgreSQL since I
pulled the data out of the db, but it's not; you'd be hard-pressed to
find anything as fast as the in-memory hashtable or th
Jim C. Nasby wrote:
Small tables are most likely to have either very few updates (ie: a
'lookup table') or very frequent updates (ie: a table implementing a
queue). In the former, even with vacuum_threshold = 0 vacuum will be a
very rare occurrence. In the latter case, a high threshold is likely to
Michael Riess wrote:
did you read my post? In the first part I explained why I don't want to
increase the FSM that much.
I'm sure he did, but just because you don't have enough FSM space to
capture everything from your "burst", that doesn't mean that space
can't be reclaimed. The next ti
Jim C. Nasby wrote:
> On Wed, Dec 14, 2005 at 01:56:10AM -0500, Charles Sprickman wrote:
> You'll note that I'm being somewhat driven by my OS of choice, FreeBSD.
>
>>Unlike Solaris or other commercial offerings, there is no nice volume
>>management available. While I'd love to keep managing a
loss *twice* this year by using SMART
hard drive monitoring software.
I can't tell you how good it feels to replace a drive that is about to
die, as compared to restoring data because a drive died.
--
Matthew Nuzum
www.bearfruit.org
ideology so that a server should be replaced
after 3 years, where before I aimed for 5.
It seems to me that the least reliable components in servers these
days are the fans.
--
Matthew Nuzum
www.bearfruit.org
ng like this
would definitely peg your disk i/o.
Throwing more hardware at your problem will definitely help, but I'm a
performance freak and I like to optimize everything to the max.
*Sometimes* you can get drastic improvements without adding any
hardware. I have seen some truly miraculu
ow.
I would suggest posting the explain analyze output for one of your
slow updates. I'll bet it is much more revealing and takes out a lot
of the guesswork.
--
Matthew Nuzum
www.bearfruit.org
This message is being sent to the list to serve as a
warning to other data warehouse admins that when you reach your
capacity, the downward spiral happens rather quickly.
Crud... Outlook just froze while composing the PHB memo. I've been
working on that for an hour. What a bad day.
--
Matthew Nuzum
ww
> /tmp/warn.txt
echo >> /tmp/warn.txt
top -bn 1 >> /tmp/warn.txt
echo >> /tmp/warn.txt
fi
NOW=`date`
CPU_LOAD=`cat /proc/loadavg | cut --delimiter=" " -f 1,2,3
--output-delimiter=\|`
echo -e $NOW\|$CPU_LOAD\|$DB_LOAD >> ~/LOAD_MONITOR.LOG
--
Matthew Nuzum
www.bearfruit.org