Re: [ADMIN] [GENERAL] PostgreSQL Cache

2008-09-29 Thread Joris Dobbelsteen

Matthew Pulis wrote:

Hi,

I need to perform some timed testing, so I need to make sure that the disk 
cache does not affect me. Is clearing the OS (Ubuntu) disk cache (by 
running: sudo echo 3 | sudo tee /proc/sys/vm/drop_caches) enough to 
do this? If not, can you please point me to a site that explains it? All 
I am finding is that command.


Look for methodologies for doing performance tests. One problem is that 
the disk cache is an essential part of PostgreSQL performance. Also, do 
not forget about overhead and inaccuracies that will affect your results.


In general, a performance test is a rather large simulation of how your 
application would use the database. It should be large enough that many 
effects (such as the initial cache state) can be neglected. It only provides 
an average for the performance on your system configuration.
If you run it a few more times, you can compute the variation, which 
gives some insight into how stably your system handles the workload.
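
For example, a minimal harness along these lines gives you both numbers 
(a sketch; run_workload.sh is a stand-in for whatever drives your 
application's queries, and /usr/bin/time is assumed to be GNU time):

  #!/bin/sh
  # run the same workload several times; one elapsed-seconds line per run
  RUNS=10
  rm -f timings.txt
  for i in $(seq 1 $RUNS); do
      /usr/bin/time -f "%e" -a -o timings.txt ./run_workload.sh > /dev/null 2>&1
  done
  # mean and standard deviation across the runs
  awk '{ s += $1; ss += $1 * $1; n++ }
       END { m = s / n; printf "mean=%.2fs stddev=%.2fs runs=%d\n", m, sqrt(ss / n - m * m), n }' timings.txt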


- Joris


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] [GENERAL] PostgreSQL Cache

2008-09-29 Thread Oleg Bartunov

A while ago I wrote a script based on Dave Plonka's work:
http://net.doit.wisc.edu/~plonka/fincore/

My script monitors system buffers and shared buffers 
(if pg_buffercache is installed), and I found it's almost useless to 
check system buffers, since I got rather ridiculous numbers.
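
For the shared-buffer side, the usual pg_buffercache query looks roughly 
like this (a sketch; assumes the contrib module is installed in the 
database you connect to, 8.2/8.3-era column names):

  -- which relations occupy the most shared buffers
  SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
  WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                              WHERE datname = current_database()))
  GROUP BY c.relname
  ORDER BY buffers DESC
  LIMIT 10;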



I use it to investigate OS caching of PostgreSQL files and was
surprised that on a 24 GB server the total cache was about 30 GB. How is 
this possible?


I can send the script and Perl module if you want to play with it.



Oleg

On Mon, 29 Sep 2008, Greg Smith wrote:


On Mon, 29 Sep 2008, Matthew Pulis wrote:

I need to perform some timed testing, thus need to make sure that disk cache
does not affect me. Is clearing the OS (Ubuntu) disk cache (by running:
sudo echo 3 | sudo tee /proc/sys/vm/drop_caches) enough to do this?


What you should do is:

1) Shut down the database server (pg_ctl, sudo service postgresql stop, etc.)
2) sync
3) echo 3 | sudo tee /proc/sys/vm/drop_caches
4) Start the database server

That will clear both the database and OS cache with a minimum of junk left 
behind in the process; clearing the cache without a sync is a bad idea.
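
Wrapped up in one place, that sequence is roughly (a sketch; the service 
name and PGDATA vary by installation, and writing to drop_caches needs 
root, hence the tee):

  #!/bin/sh
  # clear PostgreSQL's shared buffers and the OS page cache between test runs
  sudo service postgresql stop            # or: pg_ctl -D "$PGDATA" stop -m fast
  sync                                    # flush dirty pages before dropping caches
  echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
  sudo service postgresql start           # or: pg_ctl -D "$PGDATA" start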


Note that all of this will still leave available whatever cache is in your disk 
controller card or on the disks themselves.  There are some other 
techniques you could consider.  Add a step 2.5 that generates a bunch of data 
unused by the test, then sync again, and you've turned most of that into 
useless caching.


Ideally, your test should be running against a data set large enough that the 
dozens or couple of hundred megabytes that might be in those caches only add a 
bit of noise to whatever you're testing.  If you're not running a larger test 
or going through steps like those to clear the caches, the only easy way to 
clear everything is to reboot the whole server.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] Do we need vacuuming when tables are regularly dropped?

2008-09-29 Thread Peter Kovacs
Hi,

We have a number of automated performance tests (to test our own code)
involving PostgreSQL. Test cases are supposed to drop and recreate
tables each time they run.

The problem is that some of the tests show a linear performance
degradation over time. (We have data for three months back in the
past.) We have established that some element(s) of our test
environment must be the culprit for the degradation. As rebooting the
test machine didn't revert speeds to baselines recorded three months
ago, we have turned our attention to the database as the only element
of the environment which is persistent across reboots. Recreating the
entire PGSQL cluster did cause speeds to revert to baselines.

I understand that vacuuming solves performance problems related to
"holes" in data files created as a result of tables being updated. Do
I understand correctly that if tables are dropped and recreated at the
beginning of each test case, holes in data files are reclaimed, so
there is no need for vacuuming from a performance perspective?

I will double check whether the problematic test cases do indeed
always drop their tables, but assuming they do, are there any factors
in the database (apart from table updates) that can cause a linear
slow-down with repetitive tasks?

Thanks
Peter

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Do we need vacuuming when tables are regularly dropped?

2008-09-29 Thread Peter Kovacs
PS:
PGSQL version is: 8.2.7. (BTW, which catalog view contains the
back-end version number?)


On Mon, Sep 29, 2008 at 11:37 AM, Peter Kovacs
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> We have a number of automated performance tests (to test our own code)
> involving PostgreSQL. Test cases are supposed to drop and recreate
> tables each time they run.
>
> The problem is that some of the tests show a linear performance
> degradation over time. (We have data for three months back in the
> past.) We have established that some element(s) of our test
> environment must be the culprit for the degradation. As rebooting the
> test machine didn't revert speeds to baselines recorded three months
> ago, we have turned our attention to the database as the only element
> of the environment which is persistent across reboots. Recreating the
> entire PGSQL cluster did cause speeds to revert to baselines.
>
> I understand that vacuuming solves performance problems related to
> "holes" in data files created as a result of tables being updated. Do
> I understand correctly that if tables are dropped and recreated at the
> beginning of each test case, holes in data files are reclaimed, so
> there is no need for vacuuming from a performance perspective?
>
> I will double check whether the problematic test cases do indeed
> always drop their tables, but assuming they do, are there any factors
> in the database (apart from table updates) that can cause a linear
> slow-down with repetitive tasks?
>
> Thanks
> Peter
>

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] turning off pg_xlog

2008-09-29 Thread Jonny

Hi,

I have installed Postgres 8.0.15 on an embedded Linux system and have only 130 
MB for Postgres.
Is it possible to turn off the complete (WAL) pg_xlog? Because this is 
the biggest part.

I found an entry on how to minimize the number of stored xlogs.
Is it possible to store them in /dev/null or something else?
Something like ln -s /dev/null pg_xlog/ (I know this does not work :-)


regards, Jonny

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Do we need vacuuming when tables are regularly dropped?

2008-09-29 Thread Tom Lane
"Peter Kovacs" <[EMAIL PROTECTED]> writes:
> We have a number of automated performance tests (to test our own code)
> involving PostgreSQL. Test cases are supposed to drop and recreate
> tables each time they run.

> The problem is that some of the tests show a linear performance
> degradation over time. (We have data for three months back in the
> past.) We have established that some element(s) of our test
> environment must be the culprit for the degradation. As rebooting the
> test machine didn't revert speeds to baselines recorded three months
> ago, we have turned our attention to the database as the only element
> of the environment which is persistent across reboots. Recreating the
> entire PGSQL cluster did cause speeds to revert to baselines.

What it sounds like to me is that you're not vacuuming the system
catalogs, which are getting bloated with dead rows about all those
dropped tables.
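
A minimal way to confirm and fix that (a sketch; run in the affected 
database as a superuser):

  -- VACUUM VERBOSE reports how many dead row versions the catalogs carry
  VACUUM VERBOSE pg_catalog.pg_class;
  VACUUM VERBOSE pg_catalog.pg_attribute;

  -- or simply vacuum everything, system catalogs included
  VACUUM ANALYZE;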

regards, tom lane

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Do we need vacuuming when tables are regularly dropped?

2008-09-29 Thread Peter Kovacs
On Mon, Sep 29, 2008 at 2:16 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Peter Kovacs" <[EMAIL PROTECTED]> writes:
>> We have a number of automated performance tests (to test our own code)
>> involving PostgreSQL. Test cases are supposed to drop and recreate
>> tables each time they run.
>
>> The problem is that some of the tests show a linear performance
>> degradation over time. (We have data for three months back in the
>> past.) We have established that some element(s) of our test
>> environment must be the culprit for the degradation. As rebooting the
>> test machine didn't revert speeds to baselines recorded three months
>> ago, we have turned our attention to the database as the only element
>> of the environment which is persistent across reboots. Recreating the
>> entire PGSQL cluster did cause speeds to revert to baselines.
>
> What it sounds like to me is that you're not vacuuming the system
> catalogs, which are getting bloated with dead rows about all those
> dropped tables.

Wow, great!

It is not immediately clear from the documentation, but the VACUUM
command also deals with the system catalogs as well, correct?

Thanks a lot!
Peter

>
>regards, tom lane
>

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Do we need vacuuming when tables are regularly dropped?

2008-09-29 Thread Tom Lane
"Peter Kovacs" <[EMAIL PROTECTED]> writes:
> It is not immediately clear from the documentation, but the VACUUM
> command also deals with the system catalogs as well, correct?

If it's run without any argument by a superuser, then yes.

(I think in recent versions we also allow a non-superuser database owner
to do this.)
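
In cron form that is simply (a sketch; assumes the job connects as a 
superuser role):

  # nightly, so the system catalogs get vacuumed along with everything else
  vacuumdb --all --analyze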

regards, tom lane

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Do we need vacuuming when tables are regularly dropped?

2008-09-29 Thread Peter Kovacs
Thank you!
Peter

On Mon, Sep 29, 2008 at 2:42 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Peter Kovacs" <[EMAIL PROTECTED]> writes:
>> It is not immediately clear from the documentation, but the VACUUM
>> command also deals with the system catalogs as well, correct?
>
> If it's run without any argument by a superuser, then yes.
>
> (I think in recent versions we also allow a non-superuser database owner
> to do this.)
>
>regards, tom lane
>

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] understand the process of ID wraparound

2008-09-29 Thread KKreuzer
I am hoping someone can help a novice understand the process of ID 
wraparound. I have read many of the articles on the web but don't 
understand why my age(datfrozenxid) never gets reset. I am not sure if I 
even have a problem; I am just trying to be proactive.
 First the details:

select version() ;
"PostgreSQL 8.2.6 on powerpc-ibm-aix5.2.0.0, compiled by GCC gcc (GCC) 
4.0.0"

show vacuum_freeze_min_age;
"100,000,000"

show autovacuum_freeze_max_age;
"200,000,000"

show autovacuum;
"off"

SELECT datname, age(datfrozenxid) FROM pg_database;
"postgres"  31041670
"dprodxml"  31041670
"dflash"31041670
"pg_dprodcca"   31041670
"template1" 31041670
"template0" 31041670
"dstorens"  31041670
"dprod360"  31041670

We run a vacuum every morning at 2:45 am:   vacuumdb --all --analyze 
--echo

vacuumdb: vacuuming database "postgres"
SELECT datname FROM pg_database WHERE datallowconn;
VACUUM ANALYZE;
VACUUM
vacuumdb: vacuuming database "dprodxml"
VACUUM ANALYZE;
VACUUM
vacuumdb: vacuuming database "dflash"
VACUUM ANALYZE;
VACUUM
vacuumdb: vacuuming database "pg_dprodcca"
VACUUM ANALYZE;
VACUUM
vacuumdb: vacuuming database "template1"
VACUUM ANALYZE;
VACUUM
vacuumdb: vacuuming database "dstorens"
VACUUM ANALYZE;
VACUUM
vacuumdb: vacuuming database "dprod360"
VACUUM ANALYZE;
VACUUM

I run the query "SELECT datname, age(datfrozenxid) FROM pg_database;" 
every morning, and the values continue to rise.
 
        age(datfrozenxid)
9/24    27,280,414
9/25    27,688,967
9/26    28,166,896
9/29    31,040,346

If someone could help me understand the process, it would be greatly 
appreciated.



Keith Kreuzer
ext 3424



Re: [ADMIN] understand the process of ID wraparound

2008-09-29 Thread Tom Lane
[EMAIL PROTECTED] writes:
> I am hoping someone can help a novice understand the process of ID 
> wraparound, I have read many of the articles 
> on the web but don't understand why my age(datfrozenxid) never gets reset. 
> I am not sure if I even have a 
> problem, just trying to be proactive.

You don't have a problem.  The datfrozenxid values you are showing are
around 31 million transactions.  Nothing is going to happen until they
exceed vacuum_freeze_min_age, which is 100 million transactions.
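
If you ever want to see the age drop sooner, something along these lines 
works as a superuser (a sketch; not needed for routine operation):

  -- force aggressive freezing for one manual vacuum, then recheck the age
  SET vacuum_freeze_min_age = 0;   -- for this session only
  VACUUM;                          -- database-wide vacuum updates datfrozenxid
  SELECT datname, age(datfrozenxid)
  FROM pg_database
  WHERE datname = current_database();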

regards, tom lane

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] understand the process of ID wraparound

2008-09-29 Thread KKreuzer
Thank you Tom..

Can you recommend any documentation that explains the process?

Keith Kreuzer
ext 3424





Tom Lane <[EMAIL PROTECTED]> wrote on 09/29/2008 10:00 AM:

[EMAIL PROTECTED] writes:
> I am hoping someone can help a novice understand the process of ID 
> wraparound, I have read many of the articles 
> on the web but don't understand why my age(datfrozenxid) never gets reset. 
> I am not sure if I even have a 
> problem, just trying to be proactive.

You don't have a problem.  The datfrozenxid values you are showing are
around 31 million transactions.  Nothing is going to happen until they
exceed vacuum_freeze_min_age, which is 100 million transactions.

 regards, tom lane



Re: [ADMIN] understand the process of ID wraparound

2008-09-29 Thread Tom Lane
[EMAIL PROTECTED] writes:
> Can you recommend any documentation that explains the process?

Did you read
http://www.postgresql.org/docs/8.3/static/routine-vacuuming.html
(adjust URL if you are running some other major PG version)

regards, tom lane

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Error while trying to back up database: out of memory

2008-09-29 Thread Peter Kovacs
On Mon, Sep 22, 2008 at 4:43 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Vladimir Rusinov" <[EMAIL PROTECTED]> writes:
>> But now I'm getting following error:
>> pg_dump: WARNING:  terminating connection because of crash of another server
>> process
>
> As a rule of thumb, you should disable OOM kill on any server system.

This document describes a few solutions potentially better than
outright disabling:
http://www.redhat.com/archives/taroon-list/2007-August/msg6.html .
(I don't know whether those solutions actually work or not, but they may be
worth trying by the look of it.)

Peter

> However, you might want to look into why the system's aggregate memory
> requirements have now increased from what they used to be.  It seems
> unlikely that this is pg_dump's fault per se, if you're running a
> reasonably recent PG release.  (There were some memory leaks inside
> pg_dump, a long time ago...)
>
>regards, tom lane
>
> --
> Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-admin
>

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Error while trying to back up database: out of memory

2008-09-29 Thread Scott Marlowe
On Sun, Sep 28, 2008 at 2:18 AM, Peter Kovacs
<[EMAIL PROTECTED]> wrote:
> On Mon, Sep 22, 2008 at 4:43 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> "Vladimir Rusinov" <[EMAIL PROTECTED]> writes:
>>> But now I'm getting following error:
>>> pg_dump: WARNING:  terminating connection because of crash of another server
>>> process
>>
>> As a rule of thumb, you should disable OOM kill on any server system.
>
> This document describes a few solutions potentially better than
> outright disabling:
> http://www.redhat.com/archives/taroon-list/2007-August/msg6.html .
> (I don't know whether those solutions actually work or not, but may be
> worth trying by the look of it.)

While there are better solutions for other types of servers, like web
servers and whatnot, on PostgreSQL servers overcommit isn't usually
needed, and the OOM killer and overcommit can both be disabled.
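
On Linux that amounts to something like this (a sketch; the sysctl names 
are standard, the right values depend on your swap and workload):

  # strict overcommit accounting, so the OOM killer should not be triggered
  sudo sysctl -w vm.overcommit_memory=2
  # make it persistent across reboots
  echo "vm.overcommit_memory = 2" | sudo tee -a /etc/sysctl.conf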

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] turning off pg_xlog

2008-09-29 Thread Andrew Sullivan
On Mon, Sep 29, 2008 at 01:00:41PM +0200, Jonny wrote:
> Is it possible to turn off the complete (WAL) pg_xlog? 

No.
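
The closest you can get is keeping pg_xlog as small as the server allows, 
rather than removing it; a sketch for 8.0 (pg_xlog normally holds at most 
about 2 * checkpoint_segments + 1 segments of 16 MB each):

  # shrink WAL retention; with the default of 3 segments pg_xlog can reach ~112 MB
  echo "checkpoint_segments = 1" >> "$PGDATA/postgresql.conf"   # ~48 MB worst case
  pg_ctl -D "$PGDATA" restart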

A

-- 
Andrew Sullivan
[EMAIL PROTECTED]
+1 503 667 4564 x104
http://www.commandprompt.com/

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Do we need vacuuming when tables are regularly dropped?

2008-09-29 Thread Steve Crawford



>> What it sounds like to me is that you're not vacuuming the system
>> catalogs, which are getting bloated with dead rows about all those
>> dropped tables.
>
> Wow, great!
>
> It is not immediately clear from the documentation, but the VACUUM
> command also deals with the system catalogs as well, correct?


To expand on Tom's answer, rows in system tables are created not only 
for tables but for each column in the table, rules, indexes, etc. You 
can end up with a lot more row creation than you suspect. And temporary 
tables bloat the system tables just like regular tables. We discovered 
that cron scripts using temporary tables can cause very rapid 
system-table bloat.
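
A quick way to see the effect (a sketch; counts the catalog rows behind 
one small table):

  CREATE TEMP TABLE t (a integer, b text);
  SELECT (SELECT count(*) FROM pg_attribute WHERE attrelid = 't'::regclass) AS pg_attribute_rows,
         (SELECT count(*) FROM pg_class     WHERE oid      = 't'::regclass) AS pg_class_rows;
  DROP TABLE t;   -- the rows just created become dead catalog rows for vacuum to reclaim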


Cheers,
Steve


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Do we need vacuuming when tables are regularly dropped?

2008-09-29 Thread Scott Marlowe
On Mon, Sep 29, 2008 at 11:12 AM, Steve Crawford
<[EMAIL PROTECTED]> wrote:
>
>>> What it sounds like to me is that you're not vacuuming the system
>>> catalogs, which are getting bloated with dead rows about all those
>>> dropped tables.
>>>
>>
>> Wow, great!
>>
>> It is not immediately clear from the documentation, but the VACUUM
>> command also deals with the system catalogs as well, correct?
>>
>>
>
> To expand on Tom's answer, rows in system tables are created not only for
> tables but for each column in the table, rules, indexes, etc. You  can end
> up with a lot more row creation than you suspect. And temporary tables bloat
> the system tables just like regular tables. We discovered that cron scripts
> using temporary tables can cause very rapid system-table bloat.

Also, there was a time when you couldn't do vacuum full on system
tables due to locking issues, and had to take the db down to single-user
mode to do so.

Tom, is that still the case?

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] PID file

2008-09-29 Thread Ing . Jorge S Alanís Garza
Hello,

 

We are operating PostgreSQL 8.2 in an iSCSI environment. Sometimes there are
issues with the iSCSI, so PostgreSQL refuses to shut down properly, or to start,
because of the pid file. My question is: what is the correct thing to do
with this pid file? On some test environments I have deleted the pid file and
then, when starting up, I see postgres complain about the database not having
shut down cleanly. Is there a way to recover the non-working postgres
instance? Is this a very corruption-prone environment?

 

Thanks,

 

Jorge Santiago Alanís Garza 
Innovación y Desarrollo 
 
[EMAIL PROTECTED]

Tel: (81) .4044 
Cel: (811) 243-6570


  www.blocknetworks.com.mx 
Av. Lázaro Cárdenas 4000, L-17 
Col. Valle de las Brisas 
Monterrey, Nuevo León, CP 64790 
Tel: +52 (81)  4044  



 



Re: [ADMIN] PID file

2008-09-29 Thread Andrew Sullivan
On Mon, Sep 29, 2008 at 12:43:54PM -0500, Ing. Jorge S Alanís Garza wrote:
> shutting down cleanly. Is there a way to recover the non-working postgres
> instance? Is this a very corruption-prone environment?

It's sure corruption-prone if you delete the pidfile.  

If your iSCSI system keeps dropping out on you, then you need to fix
that.  Otherwise, things are going to break in a way you'll be unhappy
with later.
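
For what it's worth, a safer routine than deleting the file by hand is to 
check whether the PID it records still belongs to a live process (a 
sketch; assumes $PGDATA points at the data directory):

  #!/bin/sh
  PIDFILE="$PGDATA/postmaster.pid"
  if [ -f "$PIDFILE" ]; then
      PID=$(head -1 "$PIDFILE")
      if kill -0 "$PID" 2>/dev/null; then
          echo "postmaster (pid $PID) is still running; do not touch $PIDFILE"
      else
          echo "pid $PID is gone; let pg_ctl/postmaster handle the stale file on startup"
      fi
  fi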

A

-- 
Andrew Sullivan
[EMAIL PROTECTED]
+1 503 667 4564 x104
http://www.commandprompt.com/

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Do we need vacuuming when tables are regularly dropped?

2008-09-29 Thread Tom Lane
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> Also, there was a time when you couldn't do vacuum full on system
> tables due to locking issues, and had to take the db down to single
> user mode to do so.

There was a short period when *concurrent* vacuum fulls on just the
wrong combinations of system catalogs could deadlock (because they both
needed to look up stuff in the other one).  AFAIK we fixed that.  It's
never been the case that it didn't work at all.

regards, tom lane

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Do we need vacuuming when tables are regularly dropped?

2008-09-29 Thread Steve Crawford

Tom Lane wrote:

"Scott Marlowe" <[EMAIL PROTECTED]> writes:
  

Also, there was a time when you couldn't do vacuum full on system
tables do to locking issues, and had to take the db down to single
user mode to do so.



There was a short period when *concurrent* vacuum fulls on just the
wrong combinations of system catalogs could deadlock (because they both
needed to look up stuff in the other one).  AFAIK we fixed that.  It's
never been the case that it didn't work at all.

regards, tom lane
  
Never personally had trouble with vacuum full or reindex on system 
tables. CLUSTER, however, is another story. While I've never run across 
anything explicitly documenting that clustering system tables is 
forbidden, I've also never used a version of PostgreSQL that allows it 
(though I've never tried in single-user mode):


[EMAIL PROTECTED]> CLUSTER pg_class USING pg_class_oid_index ;
ERROR:  "pg_class" is a system catalog

Should the docs 
(http://www.postgresql.org/docs/8.3/interactive/sql-cluster.html) be 
updated to note this restriction?


Cheers,
Steve


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Do we need vacuuming when tables are regularly dropped?

2008-09-29 Thread Tom Lane
Steve Crawford <[EMAIL PROTECTED]> writes:
> [EMAIL PROTECTED]> CLUSTER pg_class USING pg_class_oid_index ;
> ERROR:  "pg_class" is a system catalog

I think the DB is probably protecting you from yourself here ;-).
If memory serves there are some system indexes whose relfilenode
numbers can't change, and pg_class_oid_index is one of them.  If
the CLUSTER had gone through you'd have hosed that database
irretrievably.

The protection check that is firing here is not so fine-grained as to
know the difference between pg_class and catalogs that this might be
safe for; but it does point up the moral that you need to know exactly
what you're doing if you are going to do DDL stuff on the system
catalogs.

regards, tom lane

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Do we need vacuuming when tables are regularly dropped?

2008-09-29 Thread Steve Crawford

Tom Lane wrote:

> Steve Crawford <[EMAIL PROTECTED]> writes:
>> [EMAIL PROTECTED]> CLUSTER pg_class USING pg_class_oid_index ;
>> ERROR:  "pg_class" is a system catalog
>
> I think the DB is probably protecting you from yourself here ;-).

And elsewhere. :)

I wasn't advocating for a change of behavior, just the addition of 
"Clustering is not permitted on system tables." to the documentation of 
the CLUSTER command.


Cheers,
Steve




--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Hex representation

2008-09-29 Thread Reece Hart
I can't resist one-liner games.

$ perl -e 'print "U"x(256*1024)' >Us

or, if you specifically want to specify hex:
$ perl -e 'print chr(hex(55))x(256*1024)' >Us

-Reece

-- 
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0