[GENERAL] Integrated Triggers

2011-03-03 Thread Alpha Beta
Hello,

I'm still a beginner with databases.  I have a question about triggers
that model the behavior of some data:
usually a relational database may contain triggers as well as declarative SQL
constraints.
My question is, how can I detect these triggers within a database?
Would using PostgreSQL help me, and how?

Best,


Re: [GENERAL] Integrated Triggers

2011-03-03 Thread Willy-Bas Loos
Hi,

Do you want to know how to create a trigger?
http://www.postgresql.org/docs/9.0/interactive/sql-createtrigger.html

Do you want to know how to list the triggers in your database?
- In pgAdmin they are listed in the tree-view under each table.
- In psql, use the command \d tablename

>using postgresql would help me? and how ?
PostgreSQL would be your database engine (RDBMS, database software). It
would be hard to have triggers without database software to begin with.
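If a SQL-level listing is preferred over pgAdmin or \d, the standard information_schema view can be queried directly; a minimal sketch:

```sql
-- List user-defined triggers, the tables they fire on, and when they fire
SELECT event_object_table, trigger_name,
       action_timing, event_manipulation
FROM information_schema.triggers
ORDER BY event_object_table;
```

Declarative constraints can be listed the same way from information_schema.table_constraints.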

hth

WBL

On Thu, Mar 3, 2011 at 12:59 PM, Alpha Beta  wrote:

> Hello,
>
> I'm still a beginner with databases.  I have a question about triggers
> that model the behavior of some data:
> usually a relational database may contain triggers as well as declarative SQL
> constraints.
> My question is, how can I detect these triggers within a database?
> Would using PostgreSQL help me, and how?
>
> Best,
>
>


-- 
"Patriotism is the conviction that your country is superior to all others
because you were born in it." -- George Bernard Shaw


Re: [GENERAL] I need your help to get opinions about this situation

2011-03-03 Thread David Johnston
I'll leave the "can/cannot" responses to those more familiar with
high-load situations, but I am curious: what reasons other than cost are
leading you to discontinue using your current database engine?  With that
many entities any migration is likely to be quite challenging, even if you
restricted initial development to standard SQL.

My estimate is that PostgreSQL could well meet your needs -
though, as you say, the use of tools such as connection poolers is
going to be critical.

As for recommendations - buy as much hardware as you can afford.

David J.

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Rayner Julio
Rodríguez Pimentel
Sent: Wednesday, March 02, 2011 10:41 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] I need your help to get opinions about this situation

Hello everybody,
I have a situation that I would like your opinions on, to confirm some
points before I commit to using the amazing database system PostgreSQL.
I have a database of 1000 tables, 300 of which are the fastest growing,
with 1 rows daily; the estimated growth for this database is
2.6 TB every year.
5000 clients access this database, of which up to 500 will be connected
concurrently.
Here are the questions:
1.  Is PostgreSQL capable of supporting this workload? Are there
examples bigger than this?
2.  Is it recommended to use a cluster with a load balancer and
replication in this situation? Which tools are recommended for this
purpose?
3.  What are the hardware recommendations for the servers? CPU, RAM
capacity, hard disk capacity and type of RAID system recommended,
as well as operating system and network connection speed.
Greetings and thanks a lot.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




Re: [GENERAL] Screencasts for PostgreSQL

2011-03-03 Thread Willy-Bas Loos
maybe this?
http://enterprisedb.com/resources-community/webcasts-podcasts-videos


cheers,

WBL

On Tue, Mar 1, 2011 at 3:34 PM, James B. Byrne wrote:

> I recently viewed a screen-cast on PostgreSQL developed by
> Peepcode.com and obtained a few really valuable insights respecting
> full text searches.  These were things that I was dimly aware of but
> that extensive reading had not revealed to me ( lacking as I am in
> the imagination necessary ).
>
> I was wondering if any here know of similar presentations on
> PostgreSQL usage and administration that might be available to me.
> Free is good but I am willing to pay a reasonable fee for such
> things as I did for the material from Peepcode.
>
> Any suggestions?
>
> --
> ***  E-Mail is NOT a SECURE channel  ***
> James B. Byrne                mailto:byrn...@harte-lyne.ca
> Harte & Lyne Limited          http://www.harte-lyne.ca
> 9 Brockley Drive              vox: +1 905 561 1241
> Hamilton, Ontario             fax: +1 905 561 0757
> Canada  L8E 3C3
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>





Re: [GENERAL] Postgresql not start during Startup

2011-03-03 Thread Willy-Bas Loos
also nice:
sysv-rc-conf - SysV init runlevel configuration tool for the terminal
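A minimal sketch of enabling the service at boot on a Debian-style SysV init system (the service name is an assumption and may differ, e.g. postgresql-9.0, depending on how it was installed):

```
# register the init script in the default runlevels
sudo update-rc.d postgresql defaults

# or toggle it with sysv-rc-conf
sudo sysv-rc-conf postgresql on
```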


On Tue, Mar 1, 2011 at 2:09 PM, Ray Stell  wrote:

> On Tue, Mar 01, 2011 at 06:37:35PM +0530, Adarsh Sharma wrote:
> >
> > But I want to start it after booting automatically.
> >
>
>
> http://embraceubuntu.com/2005/09/07/adding-a-startup-script-to-be-run-at-bootup/
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>





Re: [GENERAL] data type

2011-03-03 Thread Willy-Bas Loos
On Thu, Mar 3, 2011 at 6:41 AM, Nick Raj  wrote:

> Which data type should be used in the above function (in place of ?)
> so that it can accept more than one row (20,000)?
>

Maybe the id that those 20,000 records have in common?

hth,

WBL

On Thu, Mar 3, 2011 at 6:41 AM, Nick Raj  wrote:

> Hi,
> I am writing a function in PostgreSQL PL/pgSQL.
>
> My function is of the form St_ABC((select obj_geom from XYZ), (select
> boundary_geom from boundary)).
> I have a table XYZ with 20,000 tuples, and in boundary I have only one
> geometry.
>
> In PostgreSQL, ST_Intersects(obj_geom, boundary_geom) checks each obj_geom
> against boundary_geom and returns true/false, so it returns true/false
> 20,000 times.
> I want to write a function that returns only one true/false according to my
> calculation.
>
> So: create or replace function ST_ABC(?, geometry) returns boolean
>
> Which data type should be used in the above function (in place of ?)
> so that it can accept more than one row (20,000)?
>
> Thanks
> Raj
>
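An alternative to passing an id is to declare the parameter as an array and fill it with ARRAY(subquery). A minimal sketch, assuming PostGIS and PostgreSQL 8.4+ (st_abc and the table/column names are just the poster's placeholders):

```sql
CREATE OR REPLACE FUNCTION st_abc(geoms geometry[], boundary geometry)
RETURNS boolean AS $$
  -- true only if every geometry in the array intersects the boundary
  SELECT bool_and(ST_Intersects(g, $2)) FROM unnest($1) AS g;
$$ LANGUAGE sql;

SELECT st_abc(ARRAY(SELECT obj_geom FROM xyz),
              (SELECT boundary_geom FROM boundary));
```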





[GENERAL] opening connection more expensive than closing connection?

2011-03-03 Thread Rob Sargent
A developer here accidentally flooded a server with connection opens and
closes, essentially one per transaction during a multi-threaded data
migration process.

We were curious whether this suggests that connection clean-up is more
expensive than creation, thereby exhausting resources, or whether the
server wasn't returning to the essentially backgrounded clean-up task
adroitly (enough).

Don't worry, we are now using a connection pool for the migration, but
the situation was, um, er, entertaining.  The server side said
eof-from-client, the client side said many variations of cannot-connect,
then they would work out their differences briefly and do the same dance
over again after sufficient connections.

Cheers,
rjs






[GENERAL] invalid byte sequence

2011-03-03 Thread Maximilian Tyrtania
After upgrading to pg 9.0.3 (from 8.4.2) on my Mac OS 10.6.2 machine I find
this in my log file (a lot):

STATEMENT:  SELECT pg_file_read('pg_log/postgresql-2011-03-03_00.log', 25, $
ERROR:  invalid byte sequence for encoding "UTF8": 0xe3bc74

Apparently pg doesn't like the contents of that logfile.
The folks from the pgadmin list (I noticed the problem using the pgAdmin
log viewer and asked for help there) advised me to change the lc_messages
locale to 'C' (I had it on 'de_DE-UTF8' before), but that doesn't appear to
help. The server encoding is UTF8. No special client encoding is set.

Any help would be appreciated,

Max

Maximilian Tyrtania Software-Entwicklung
Dessauer Str. 6-7
10969 Berlin
http://www.contactking.de




Re: [GENERAL] opening connection more expensive than closing connection?

2011-03-03 Thread Rob Sargent
Top-posting myself to clarify, since the title is in fact inverted:
close might be more expensive than open.


On 03/03/2011 08:28 AM, Rob Sargent wrote:
> A developer here accidentally flooded a server with connection opens and
> closes, essentially one per transaction during a multi-threaded data
> migration process.
> 
> We were curious if this suggests that connection clean up is more
> expensive than creation thereby exhausting resources, or if perhaps the
> server wasn't returning to the essentially backgrounded clean-up task
> adroitly (enough)?
> 
> Don't worry we are now using a connection pool for the migration, but
> the situation was, um, er, entertaining.  The server side said
> eof-from-client, the client side said many variations of cannot-connect,
> then they would work out their differences briefly and do the same dance
> over again after sufficient connections.
> 
> Cheers,
> rjs
> 
> 
> 
> 



[GENERAL] closing connection more expensive than opening connection?

2011-03-03 Thread Rob Sargent
A developer here accidentally flooded a server with connection opens and
closes, essentially one per transaction during a multi-threaded data
migration process.

We were curious whether this suggests that connection clean-up is more
expensive than creation, thereby exhausting resources, or whether the
server wasn't returning to the essentially backgrounded clean-up task
adroitly (enough).

Don't worry, we are now using a connection pool for the migration, but
the situation was, um, er, entertaining.  The server side said
eof-from-client, the client side said many variations of cannot-connect,
then they would work out their differences briefly and do the same dance
over again after sufficient connections.

Cheers,
rjs






[GENERAL] Why doesn't count(*) use an index?

2011-03-03 Thread obamabarak


I use pgsql 9.0.3, and I know that PostgreSQL tries to use the fields
in indexes instead of the original table when possible.

But when I run

SELECT COUNT(id) FROM tab

or

SELECT COUNT(*) FROM tab

where "id" is the PRIMARY KEY and there are other indexes, I get an
execution plan that doesn't use any indexes but sequentially scans the
original table.

"Aggregate (cost=38685.98..38685.99 rows=1 width=0)"

" -> Seq Scan on tab (cost=0.00..36372.38 rows=925438 width=0)"

Why is it so?

--- 

Paul

[GENERAL] Tracking table modifications / table stats

2011-03-03 Thread Derrick Rice
Hey folks,

I was looking through the contrib modules with 8.4 and hoping to find
something that satisfies my itch.
http://www.postgresql.org/docs/8.4/static/pgstatstatements.html comes the
closest.

I'm inheriting a database which has mostly unknown usage patterns, and would
like to figure them out so that I can allocate tablespaces and set
autovacuum settings appropriately.  To do this, it seems I need to know (at
least) the number of rows read, rows updated, rows deleted, and rows
inserted for each table (over time, or until reset).

I suppose things like disk usage and CPU usage would be interesting as well,
but I'm somewhat less concerned with those.  For one, CPU usage can't be
tied to a table as easily and is more about query optimization than
PostgreSQL configuration (excluding cost coefficients and memory size
settings).  For the other, disk usage can be mostly inferred from the row
size and number of operations per table (this does exclude seq. scans
and heavy index use, though).  I realize those statements are fuzzy
and short-sighted, but I'm trying to get "good enough" information, not
optimize a space shuttle.

There's no way I'm the first person to feel the need for this.  Is there a
doc or wiki which gives some recommendations?  I'd like to avoid parsing
logs or installing triggers.  I'd also like to avoid heavy statement-level
tracking like the above mentioned contrib does (sounds expensive, and I'm
not sure the users have parameterized SQL).

Thanks,

Derrick


Re: [GENERAL] Why doesn't count(*) use an index?

2011-03-03 Thread Adrian Klaver

On 03/03/2011 05:29 AM, obamaba...@e1.ru wrote:

I use pgsql 9.0.3 and I know that postgresql tries to use the fields in
indexes instead of the original table if it possible

But when I run

SELECT COUNT(id) FROM tab

or

SELECT COUNT(*) FROM tab

where "id" is the PRIMARY KEY and there are other indexes, I get an
execution plan that doesn't use any indexes but sequentially scans the
original table.

"Aggregate (cost=38685.98..38685.99 rows=1 width=0)"
" -> Seq Scan on tab (cost=0.00..36372.38 rows=925438 width=0)"

Why is it so?


See here:
http://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F
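When an exact count isn't required, the planner's estimate that the FAQ mentions can also be read straight from the catalog; a minimal sketch, using the poster's table name:

```sql
-- Approximate row count, maintained by ANALYZE / autovacuum
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'tab';
```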



---

Paul




--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Tracking table modifications / table stats

2011-03-03 Thread Merlin Moncure
On Thu, Mar 3, 2011 at 11:00 AM, Derrick Rice  wrote:
> Hey folks,
>
> I was looking through the contrib modules with 8.4 and hoping to find
> something that satisfies my itch.
> http://www.postgresql.org/docs/8.4/static/pgstatstatements.html comes the
> closest.
>
> I'm inheriting a database which has mostly unknown usage patterns, and would
> like to figure them out so that I can allocate tablespaces and set
> autovacuum settings appropriately.  To do this, it seems I need to know (at
> least) the number of rows read, rows updated, rows deleted, and rows
> inserted for each table (over time, or until reset).
>
> I suppose things like disk usage and CPU usage would be interesting as well,
> but I'm somewhat less concerned with those.  For one, CPU usage can't be
> tied to a table as easily and is more about query optimization than
> PostgreSQL configuration (excluding cost coefficients and memory size
> settings).  For the other, disk usage can be mostly inferred from the row
> size and number of operations per table (this does exclude seq. scans
> and heavy index use, though).  I realize those statements are fuzzy
> and short-sighted, but I'm trying to get "good enough" information, not
> optimize a space shuttle.
>
> There's no way I'm the first person to feel the need for this.  Is there a
> doc or wiki which gives some recommendations?  I'd like to avoid parsing
> logs or installing triggers.  I'd also like to avoid heavy statement-level
> tracking like the above mentioned contrib does (sounds expensive, and I'm
> not sure the users have parameterized SQL).


The old tried and true method of slow query logging
(log_min_duration_statement) works wonders.  Usually in a typical system
10% of the queries are doing 90% of the work.

If I'm coming into a new database created by someone else, priority #1
is to get logging under control: make sure it's being captured,
rotated properly, etc.  If there are lots of garbage errors being
dropped in there, try fixing them so that the logs become useful.
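A minimal postgresql.conf fragment for that kind of slow-query logging (the 250 ms threshold is an arbitrary example, not a recommendation):

```
log_min_duration_statement = 250ms   # -1 disables, 0 logs every statement
log_line_prefix = '%t [%p] %u@%d '   # timestamp, pid, user, database
```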

merlin



Re: [GENERAL] Tracking table modifications / table stats

2011-03-03 Thread Andy Colson

On 3/3/2011 11:00 AM, Derrick Rice wrote:

Hey folks,

I was looking through the contrib modules with 8.4 and hoping to find
something that satisfies my itch.
http://www.postgresql.org/docs/8.4/static/pgstatstatements.html comes
the closest.

I'm inheriting a database which has mostly unknown usage patterns, and
would like to figure them out so that I can allocate tablespaces and set
autovacuum settings appropriately.  To do this, it seems I need to know
(at least) the number of rows read, rows updated, rows deleted, and rows
inserted for each table (over time, or until reset).

I suppose things like disk usage and CPU usage would be interesting as
well, but I'm somewhat less concerned with those.  For one, CPU usage
can't be tied to a table as easily and is more about query optimization
than PostgreSQL configuration (excluding cost coefficients and memory
size settings).  For the other, disk usage can be mostly inferred from
the row size and number of operations per table (this does exclude
seq. scans and heavy index use, though).  I realize those
statements are fuzzy and short-sighted, but I'm trying to get "good
enough" information, not optimize a space shuttle.

There's no way I'm the first person to feel the need for this.  Is there
a doc or wiki which gives some recommendations?  I'd like to avoid
parsing logs or installing triggers.  I'd also like to avoid heavy
statement-level tracking like the above mentioned contrib does (sounds
expensive, and I'm not sure the users have parameterized SQL).

Thanks,

Derrick


There are stat tables you can look at:

http://www.postgresql.org/docs/9.0/static/monitoring-stats.html
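For the per-table counters Derrick asked about, a query along these lines works (column names as in the 8.4/9.0 stats views):

```sql
-- Rows read, inserted, updated, deleted per table since the last stats reset
SELECT relname, seq_tup_read, idx_tup_fetch,
       n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
ORDER BY n_tup_ins + n_tup_upd + n_tup_del DESC;
```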

-Andy



Re: [GENERAL] Tracking table modifications / table stats

2011-03-03 Thread Derrick Rice
On Thu, Mar 3, 2011 at 12:34 PM, Andy Colson wrote:

> There are stat tables you can look at:
>
> http://www.postgresql.org/docs/9.0/static/monitoring-stats.html
>
> -Andy
>

Aha! Thank you.


[GENERAL] orphaned?? tmp files

2011-03-03 Thread Reid Thompson

Given the below information, is it reasonable to assume that files in pgsql_tmp 
dated prior to 10 days ago can be safely removed?

postgres@hw-prod-repdb1> uptime
 15:01:35 up 10 days, 13:50,  3 users,  load average: 1.23, 1.15, 0.63
postgres@hw-prod-repdb1> pwd
/mnt/iscsi/psql_tmp/tmpdata
postgres@hw-prod-repdb1> ls -rtlh *
-rw------- 1 postgres postgres    4 May 11  2009 PG_VERSION

pgsql_tmp:
total 0

41099:
total 3.0G
-rw------- 1 postgres postgres 8.0K Mar 12  2010 88326
-rw------- 1 postgres postgres    0 Mar 12  2010 88324
-rw------- 1 postgres postgres    0 Mar 16  2010 88580
-rw------- 1 postgres postgres    0 Mar 16  2010 88577.3
-rw------- 1 postgres postgres    0 Mar 16  2010 88577.4
-rw------- 1 postgres postgres    0 Mar 16  2010 88577.5
-rw------- 1 postgres postgres 8.0K Mar 16  2010 88592
-rw------- 1 postgres postgres 8.0K Jun  2  2010 94968
-rw------- 1 postgres postgres    0 Jun  2  2010 94966
-rw------- 1 postgres postgres 8.0K Jun  2  2010 94974
-rw------- 1 postgres postgres    0 Jun  2  2010 94972
-rw------- 1 postgres postgres 8.0K Aug 26  2010 104559
-rw------- 1 postgres postgres    0 Aug 26  2010 104557
-rw------- 1 postgres postgres 1.0G Oct 27 10:05 88577
-rw------- 1 postgres postgres 1.0G Oct 27 10:07 88577.1
-rw------- 1 postgres postgres 824M Oct 27 10:09 88577.2
-rw------- 1 postgres postgres 488K Nov  2 12:17 111724
-rw------- 1 postgres postgres  43M Dec  7 14:46 94963
-rw------- 1 postgres postgres  25M Dec  7 14:46 94969
-rw------- 1 postgres postgres 8.0K Feb 22 15:45 104560
-rw------- 1 postgres postgres 8.0K Feb 22 15:45 104554
-rw------- 1 postgres postgres  72M Mar  2 16:12 88321



Re: [GENERAL] orphaned?? tmp files

2011-03-03 Thread Tom Lane
Reid Thompson  writes:
> Given the below information, is it reasonable to assume that files in 
> pgsql_tmp dated prior to 10 days ago can be safely removed?

That listing doesn't show anything in pgsql_tmp ...

What the listing looks like to me is a tablespace containing a few
tables that haven't been touched in awhile.  Whether they're still
referenced from anywhere is not apparent from this data.
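The file names in a tablespace directory are relfilenodes, so whether a given file is still referenced can be checked with a catalog lookup; a sketch, run in the database that owns the tablespace (88577 is one of the files from the listing):

```sql
-- Returns a row if some relation still owns this file
SELECT relname, relkind
FROM pg_class
WHERE relfilenode = 88577;
```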

regards, tom lane



Re: [GENERAL] I need your help to get opinions about this situation

2011-03-03 Thread Greg Williamson
Rayner --



<...>
> I have a database of 1000 tables, 300 of theirs are of major growing
> with 1 rows daily, the estimate growing for this database is of
> 2,6 TB every year.

In and of itself, the sheer number of rows only hits you when you need to be
reading most of them; in that case good hardware (lots of spindles!) would
be needed for any database.

> There are accessing 5000 clients to this database of which will be
> accessed 500 concurrent clients at the same time.

That could be too many to handle natively; investigate pgPool and similar tools.

> There are the questions:
> 1.Is capable PostgreSQL to support this workload? Some examples
> better than this.

Depends on the native hardware and the types of queries. 

> 2.It is a recommendation to use a cluster with load balancer and
> replication for this situation? Which tools are recommended for this
> purpose?

Depends on what you mean -- there is no multimaster solution in PostgreSQL
as far as I know, but if you only need one central server and R/O slaves
there are several possible solutions (Slony as an add-on, as well as the
new capabilities in the engine itself).

> 3.Which are the hardware recommendations to deploy on servers? CPU,
> RAM memory capacity, Hard disk capacity and type of RAID system
> recommended to use among others like Operating System and network
> connection speed.

RAID-5 is generally a bad choice for databases. The specific answers to
these questions need more info on workload, etc.

I migrated a fairly large Informix system to Postgres a few years ago and
the main issues had to do with PostGIS vs. the Informix Spatial Blade; the
core tables converted cleanly, and the users and permissions were also easy.
We needed to use pgPool to get the same number of connections. This was
also a platform migration -- from Sun Solaris to Linux -- so comparing the
two directly wasn't easy.

We moved "chunks" of the application and tested a lot: spatial data first,
then the bookkeeping and accounting functions, and finally the warehouse
and large-but-infrequent jobs.

HTH,

Greg Williamson






Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-03-03 Thread Aleksey Tsalolikhin
On Tue, Mar 1, 2011 at 7:24 AM, Tom Lane  wrote:
> Adrian Klaver  writes:
>> Looks like the TOAST compression is not working on the second machine. Not 
>> sure
>> how that could come to be. Further investigation underway:)
>
> Somebody carelessly messed with the per-column SET STORAGE settings,
> perhaps?  Compare pg_attribute.attstorage settings ...



Thank you.  I compared the STORAGE settings and I have "extended" on
both databases,
no "external".
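For reference, a comparison like that can be done with a catalog query along these lines (the table name is a placeholder):

```sql
-- attstorage codes: p = plain, m = main, e = external, x = extended
SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'mytable'::regclass
  AND attnum > 0;
```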

Any other ideas?

Yours truly,
-at



[GENERAL] updating all records of a table

2011-03-03 Thread Gauthier, Dave
Hi:

I have to update all the records of a table.  I'm worried about what the table 
will look like in terms of fragmentation when this is finished.  Is there some 
sort of table healing/reorg/rebuild measure I should take if I want the 
resulting table to operate at optimal efficiency?  What about indexes, should I 
drop/recreate those?

(I remember the bad-ole days with Oracle where table defragging and index 
rebuilding was something we had to do)

Thanks for any help!


Re: [GENERAL] updating all records of a table

2011-03-03 Thread Joshua D. Drake
On Thu, 2011-03-03 at 20:03 -0700, Gauthier, Dave wrote:
> Hi:
> 
> I have to update all the records of a table.  I'm worried about what
> the table will look like in terms of fragmentation when this is
> finished.  Is there some sort of table healing/reorg/rebuild measure I
> should take if I want the resulting table to operate at optimal
> efficiency?  What about indexes, should I drop/recreate those?

Well, it depends on the size of the table, but yes, it is going to create a
lot of dead space.  A CLUSTER or REINDEX of the table will solve this for
you.
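A sketch of the maintenance described above (table and index names are placeholders; note that both commands take exclusive locks on the table):

```sql
CLUSTER mytable USING mytable_pkey;  -- rewrites the table and all its indexes
REINDEX TABLE mytable;               -- rebuilds only the indexes
```

Plain VACUUM will also make the dead space reusable for future updates without rewriting the table.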

JD

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt




[GENERAL] Pgdump error "invalid page header in block"

2011-03-03 Thread tuanhoanganh
Yesterday I had some problems with PostgreSQL 9.0.2.  Today when backing up
Postgres I got this error:

pg_dump: reading dependency data
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  invalid page header in block 299
of relation "pg_depend_depender_index"
pg_dump: The command was: SELECT classid, objid, refclassid, refobjid,
deptype FROM pg_depend WHERE deptype != 'p' ORDER BY 1,2
pg_dump: *** aborted because of error

Is there any way to fix it?
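One possibility, offered as a sketch rather than a definitive fix: the error names a system index rather than a table, so rebuilding that index may be enough. Take a filesystem-level copy of the data directory first, since page corruption can indicate wider damage:

```sql
REINDEX INDEX pg_depend_depender_index;
```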

Thanks in advance

Tuan Hoang ANh