[GENERAL] LinuxWorld BOF

2001-08-28 Thread Bruce Momjian

There will be a BOF at LinuxWorld, San Francisco on Wednesday (today),
6:30pm in room B3.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026




[GENERAL] List archives moved and cleaned up ...

2001-08-28 Thread Marc G. Fournier


Finally figuring that enough is enough, I've been spending the past few
days working on the list archives ...

I've reformatted, so far, the following lists into a cleaner format:

pgsql-hackers
pgsql-sql
pgsql-bugs
pgsql-general
pgadmin-hackers
pgadmin-support

With more lists to be worked on over the next few days ...

Major changes include the following:

Replaced the wide banner in the center with two smaller, 120x120
banners in the corners ...

Provided a search facility, incorporated into each page, that
searches the mhonarc pages themselves ...

Changed the colors to better match the main site ...

Moved the archives to their own URL/domain so that they are no
longer part of the general mirror of the site ...

There is still a lot of work that I'm planning to do on this, but I want
to get all of the archives moved first ...

To access any of the archives that have been moved, go to:

http://archives.postgresql.org/

I've been modifying the links from the main web site for those lists that
I've moved, as I've moved them, so getting there through 'normal channels'
should also work ...

Once finished, there will also be links to the OpenFTS search facility
that we have online, which uses a different way of formatting/displaying
the messages, so you will have the choice of using either ...





Re: [GENERAL] How do I pack a class into a string? Is serialize available?

2001-08-28 Thread Gurunandan R. Bhat

On 28 Aug 2001, Doug McNaught wrote:

> "Gurunandan R. Bhat" <[EMAIL PROTECTED]> writes:
>
> > Is a postgres function to pack an entire row from a table into a
> > string available? I want something analogous to "serialize()" in php.

[expl. snipped]
>
> What problem are you trying to solve?

I wanted to write a small utility to keep two databases synchronised.
Whenever a row is added to a table in the local database, a trigger would
update a special table with the name of the modified table and the
serialised version of the new row. I would then copy this single table
across and rebuild my remote database.
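
Roughly what I have in mind per table is a sketch like this (the table and
column names here are made up, the string format is ad hoc, and it assumes
plpgsql is installed in the database):

create table changelog (
    tablename  text,
    rowdata    text
);

create function log_item() returns opaque as '
begin
    -- crude serialisation: concatenate the columns with a delimiter
    -- (older releases may need explicit casts for non-text columns)
    insert into changelog (tablename, rowdata)
        values (''item'', new.id || ''|'' || new.name || ''|'' || new.price);
    return new;
end;
' language 'plpgsql';

create trigger item_log after insert or update on item
    for each row execute procedure log_item();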

Any advice?

Thanks and regards,

Gurunandan






[GENERAL] pg_dump cache lookup error

2001-08-28 Thread Killian May

Hi,

I'm running 6.5.2 (I plan on upgrading as soon as I get this fixed)
and I get the following error when I try to dump the db:

[kmay@infosport hockey]$ pg_dump gamesheet > gamesheet1.out
dumpRules(): SELECT failed for table attendance.  Explanation from backend:
'ERROR:  cache lookup of attribute 4 in relation 343169 failed'.

Thanks in advance,

Killian May
InfoSport Inc
[EMAIL PROTECTED]
(902)423-0662 fax(902)423-0390









[GENERAL] ERROR: int8 conversion to int4 is out of range

2001-08-28 Thread Colin Campbell

Hi,

I have a confusing situation that I suspect is system related but would
like to know what needs fixing. Both systems run postgresql-7.1.2 with
perl and tcl enabled. (I've been trying to get pgreplica to work).

==
Host1 is a Pentium II 233.
==
$ gcc --version
2.95.2

$ uname -sr
FreeBSD 4.2-RELEASE

$ psql test
Welcome to psql, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help on internal slash commands
   \g or terminate with semicolon to execute query
   \q to quit

test=> select int4(4::int8/4::int8);
 int4
------
    1
(1 row)

==
Host2 is a Pentium 200MMX (yes I know this is old :-)
==
$ gcc --version
2.7.2.1

$ uname -sr
FreeBSD 3.2-RELEASE

$ psql test
Welcome to psql, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help on internal slash commands
   \g or terminate with semicolon to execute query
   \q to quit

test=# select int4(4::int8/4::int8);
ERROR:  int8 conversion to int4 is out of range
==

Is this a hardware/software/system library problem? What do I need to do
to fix it?

Colin
--
Colin Campbell
Unix Support/Postmaster/Hostmaster
CITEC
+61 7 3006 4710





Re: [GENERAL] How do I pack a class into a string? Is serialize available?

2001-08-28 Thread Doug McNaught

"Gurunandan R. Bhat" <[EMAIL PROTECTED]> writes:

>   Is a postgres function to pack an entire row from a table into a
> string available? I want something analogous to "serialize()" in php.

I'm not aware of one.  You could certainly, and fairly easily, write a
PL/pgSQL (or PL/TCL etc) function to do that for a given table, but
the general case would be a bit trickier, though doable (given what
little I know about PG internals).
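
For a single known table, a sketch might look something like this (the table
and column names are invented, I haven't tested it, and non-text columns may
need explicit casts on older releases):

create function serialize_item(integer) returns text as '
declare
    r  record;
begin
    select into r * from item where id = $1;
    -- pack the columns into one delimited string; add quoting/escaping
    -- if the data itself can contain the delimiter
    return r.id || ''|'' || r.name || ''|'' || r.price;
end;
' language 'plpgsql';

-- then, for example:  select serialize_item(42);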

What problem are you trying to solve?

-Doug
-- 
Free Dmitry Sklyarov! 
http://www.freesklyarov.org/ 

We will return to our regularly scheduled signature shortly.




[GENERAL] Getting data out of DB with bad WAL files.

2001-08-28 Thread postgres

Hi,

I've got a DB where the WAL files were lost.  I know I've lost some data, but
is there any way to get what is in the primary DB back out again?  PG won't
start with bad WAL files so... :(

Thanks for the help.

GB

-- 
GB Clark II | Roaming FreeBSD Admin
[EMAIL PROTECTED] | General Geek 
   CTHULU for President - Why choose the lesser of two evils?




[GENERAL] workaround for pl/pgsql (no arrays, no temp tables)?

2001-08-28 Thread Frank Miles

I have an application that includes data describing tree-like structures
of varying breadth and depth.  It is important for me to be able to recover
the node IDs for these data trees; the goal is to recover the full list of
tree node IDs given the root node ID.
Essentially, the tables are of the form:
root-table:
root node ID
+other data

major-branch-table:
root node ID
minor-branch node ID
branch trim (whether branch is still included or not)
+other data

minor-branch table:
A-minor-branch node ID
B-minor-branch node ID
+other data

The last table enables one minor-branch to lead to another (indeterminate
depth).  The data structure partly results from the way that branches
may be combined to (re)create trees of varying content -- it's not
desirable to merge these tables.

There appear to be serious performance problems with using temporary tables
(many 'execute's required), and I haven't found a way to declare an
array within a pl/pgsql function (i.e. not a part of the arguments).

I can live with the result returned as a comma-delimited string containing
all the node IDs, but without a temporary table or a working array, the
iterative searching required to reconstruct the node list means parsing
that string many times (though the trees don't include any cycles).
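
For concreteness, the sort of thing I'm imagining is a recursive pl/pgsql
function roughly like this (names simplified from the tables above, untested,
and the integer-to-text handling may need an explicit cast):

create function minor_nodes(integer) returns text as '
declare
    node    alias for $1;
    result  text;
    child   record;
begin
    result := node;
    for child in select b_node from minor_branch where a_node = node loop
        result := result || '','' || minor_nodes(child.b_node);
    end loop;
    return result;
end;
' language 'plpgsql';

-- given a root node, call this for each untrimmed entry in the
-- major-branch table and join the pieces with commas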

I hope that someone will be able to suggest a suitable mechanism or
strategy for recovering the tree node lists using a pl/pgsql function.
Isn't there a better way? Hoping that someone will say "Listen, dummy,
it's obvious..."

TIA!

-frank





[GENERAL] How do I pack a class into a string? Is serialize available?

2001-08-28 Thread Gurunandan R. Bhat


Hi all,

Is a postgres function to pack an entire row from a table into a
string available? I want something analogous to "serialize()" in php.

Thanks and regards,
Gurunandan






Re: [GENERAL] PL/java?

2001-08-28 Thread Mitch Vincent

> Yup.  We wrote the client that is accessing the database.  It's using
> PHP, and we don't even *use* transactions currently.  But that isn't the
> problem.  From what I gather so far, the server is under fairly high
> load (6 right now) so vacuuming the database (520MB in files, 5MB dump)
> takes a *long* time.  While it's vacuuming, anything using that database
> just has to wait, and that's our problem.

Well, every query is in its own transaction unless you explicitly say BEGIN
and END -- so technically you are using transactions...

I have a 14x larger (70 meg, dumped) database running on a dual PII400 that
only takes 2 minutes or so to vacuum analyze (lots-o-indexes too). I guess
that's a long time in some settings, but we only do it once a day...

> Actually, on a whim, I dumped that 520MB database to its 5MB file, and
> reimported it into an entirely new DB.  It was 14MB.  We vacuum at least
> once an hour (we have a loader that runs every hour, it may run multiple
> concurrent insert scripts).  We also use vacuum analyze.  So, I really
> can't see a reason for it to balloon to that horridly expanded size.

> Maybe stale indexes?  Aborted vacuums?  What on earth would cause that?

I've read that you can take the size of the raw data outside the database,
multiply it by 6, and get the approximate size of the same data stored in
the database... Obviously that rule of thumb isn't holding in your case..

Every UPDATE and DELETE leaves behind the old version of the tuple until
vacuum is run, but I don't see how that would cause bloat on a fresh import
into a newly created DB...

-Mitch






Re: [GENERAL] PL/java?

2001-08-28 Thread Shaun Thomas

On 28 Aug 2001, Doug McNaught wrote:

> > Maybe stale indexes?  Aborted vacuums?  What on earth would cause that?
>
> VACUUM doesn't currently vacuum indexes.  Yes, it's a serious wart.  :(

Ah, now that makes sense.  It would also explain why our daily inserts of
many thousands of rows would slowly bloat the db.  It would also explain why
the old system, which didn't use indexes at all, didn't have this problem.
And it would explain why the query optimizer picks crap plans, since the
indexes are completely inaccurate.

Hmm.  That's more than a wart, that's nearly a show-stopping bug.

> I suggest drop/recreate the indexes at intervals.  Or try REINDEX,
> which may work better.

Reindex is really our only option.  The database schema is complex enough
that dropping and recreating the indexes is dangerous (esp. primary keys),
and we also want to avoid doing that to user databases whose details we
don't know.

Unfortunately, reindex can only be run while the DB is down.  ::sigh::
So, looks like a cron job to run at 2am.

# --- Rough sketch (untested) --- #

# grab the list of databases while the postmaster is still up
DBS=`psql -t -A -c "select datname from pg_database where datname not like 'template%'" template1`

pg_ctl stop -D $PGDATA                 # take the backend down

for DB in $DBS; do
    # REINDEX DATABASE wants the standalone backend; I *think* -O -P is right
    echo "REINDEX DATABASE $DB FORCE;" | postgres -D $PGDATA -O -P $DB
done

pg_ctl start -D $PGDATA                # put the backend back up

echo "Damn vacuum."

# --- End sketch --- #

Ew

-- 
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+
| Shaun M. Thomas              INN Database Programmer                 |
| Phone: (309) 743-0812        Fax  : (309) 743-0830                   |
| Email: [EMAIL PROTECTED]     AIM  : trifthen                         |
| Web  : hamster.lee.net  |
| |
| "Most of our lives are about proving something, either to   |
|  ourselves or to someone else." |
|   -- Anonymous  |
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+






[GENERAL] C API

2001-08-28 Thread Stephan Bergmann

Hi List!

I'm using the libpq C library and I'm looking for a C function that
returns the version (e.g. 7.1.3) of the backend server I'm connected to.
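
The closest I have found so far is just to send a query myself with PQexec()
and read the result back with PQgetvalue() like any other query, e.g.:

select version();

which returns a version string for the server, but I was hoping libpq had a
dedicated call for this.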

Does anyone have an idea?

Thanx and greetings
Steve





Re: [GENERAL] PL/java?

2001-08-28 Thread Doug McNaught

Shaun Thomas <[EMAIL PROTECTED]> writes:

> Actually, on a whim, I dumped that 520MB database to its 5MB file, and
> reimported it into an entirely new DB.  It was 14MB.  We vacuum at least
> once an hour (we have a loader that runs every hour, it may run multiple
> concurrent insert scripts).  We also use vacuum analyze.  So, I really
> can't see a reason for it to balloon to that horridly expanded size.
> 
> Maybe stale indexes?  Aborted vacuums?  What on earth would cause that?

VACUUM doesn't currently vacuum indexes.  Yes, it's a serious wart.  :(

That's why the data file size declined so much (modulo WAL stuff).

I suggest drop/recreate the indexes at intervals.  Or try REINDEX,
which may work better.
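
Roughly (the index and table names here are invented):

drop index foo_name_idx;
create index foo_name_idx on foo (name);

or, in one step:

reindex table foo;

(Check the REINDEX notes in the docs for your version first -- some forms
want the standalone backend.)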

-Doug
-- 
Free Dmitry Sklyarov! 
http://www.freesklyarov.org/ 

We will return to our regularly scheduled signature shortly.




Re: [GENERAL] PL/java?

2001-08-28 Thread Shaun Thomas

On 28 Aug 2001, Doug McNaught wrote:

> You obviously know what you're doing, but are you absolutely sure one
> of your clients isn't holding a transaction open?  That'll hang vacuum
> every time...

Yup.  We wrote the client that is accessing the database.  It's using
PHP, and we don't even *use* transactions currently.  But that isn't the
problem.  From what I gather so far, the server is under fairly high
load (6 right now) so vacuuming the database (520MB in files, 5MB dump)
takes a *long* time.  While it's vacuuming, anything using that database
just has to wait, and that's our problem.

Actually, on a whim, I dumped that 520MB database to its 5MB file, and
reimported it into an entirely new DB.  It was 14MB.  We vacuum at least
once an hour (we have a loader that runs every hour, it may run multiple
concurrent insert scripts).  We also use vacuum analyze.  So, I really
can't see a reason for it to balloon to that horridly expanded size.

Maybe stale indexes?  Aborted vacuums?  What on earth would cause that?

-- 
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+
| Shaun M. Thomas              INN Database Programmer                 |
| Phone: (309) 743-0812        Fax  : (309) 743-0830                   |
| Email: [EMAIL PROTECTED]     AIM  : trifthen                         |
| Web  : hamster.lee.net  |
| |
| "Most of our lives are about proving something, either to   |
|  ourselves or to someone else." |
|   -- Anonymous  |
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+






Re: [GENERAL] PL/java?

2001-08-28 Thread Mike Mascari

Shaun Thomas wrote:
...
> The really strange thing is, one of our newer databases has
> started hanging on vacuums.  That's a 7.1.1, so the 8k thing shouldn't be
> any kind of issue in the slightest thanks to the new internal structures.
> 
> But something is corrupt enough to break vacuum badly.  That doesn't make
> me feel very good.  The worst part is, while it's hung on the vacuum, idle
> connections just start piling up until we have to restart the DB.

That is probably an issue with one of your applications keeping an
open transaction against the table vacuum is attempting to access:

In Session #1:

BEGIN;
SELECT * FROM employees;

In Session #2:

VACUUM employees; <--- VACUUM is waiting on Session #1

In Session #3:

SELECT * FROM employees; <--- Now waiting because of VACUUM

In Session #1:

END;

You will see the Session #2 VACUUM complete and then the Session #3
SELECT complete. You've got some broken client app that needs to
either COMMIT or ABORT.

Hope that helps,

Mike Mascari
[EMAIL PROTECTED]




[GENERAL] Re: PL/java?

2001-08-28 Thread Gregory Wood

> Yeah, I know.  I was just trying to defend mysql. ^_^  We use both, and so
> far, it's been the smaller headache, so...

That may be true... until you have to implement transactions and/or foreign
keys at the application level.

> The really strange thing is, one of our newer databases has
> started hanging on vacuums.  That's a 7.1.1, so the 8k thing shouldn't be
> any kind of issue in the slightest thanks to the new internal structures.

My guess is that there is an idle transaction somewhere. We've run into
problems with those locking up our database: Postgres doesn't like it if you
have an idle transaction that has touched the table(s) that you are trying
to drop, add to, and/or vacuum.

> But something is corrupt enough to break vacuum badly.

Why do you assume it's due to corruption and not some other known behavior?

> That doesn't make me feel very good.

Deadlocked databases don't make anyone feel good.

> The worst part is, while it's hung on the vacuum, idle
> connections just start piling up until we have to restart the DB.
>
> That's no good.

That's one thing I've never liked about Postgres... an idle transaction
locks any tables it touches, which causes vacuum to lock the rest of the
database. My understanding is that 7.2 is supposed to be more lock friendly
when it comes to databases... but that doesn't help you out now.

Greg





[GENERAL] Indexes and Timestamp Comparisons

2001-08-28 Thread Josh Voils

I have a table called session:
 Table "session"
 Attribute  |   Type   | Modifier
+--+--
 sessionid  | character(32)| not null
 usernumber | integer  | not null
 timestamp  | timestamp with time zone | not null

I have two indices on this table: moo, a btree on timestamp, and
session_pkey, the implicit unique btree on sessionid.

I need to periodically eliminate timed-out entries from this table, and
that is where my problem is. I created a test table that has a million
entries in it. When I select the valid session ids it uses an index scan
with moo; when I select the invalid session ids it uses a sequential scan,
which is considerably slower. I was also playing around with some other
timestamps to compare against and found that just by changing the > to a <
I would change whether it used a seq scan or an index scan (it wasn't
consistent, though, as to which scan it used for which comparison). Any
suggestions on what I can do? I vacuum analyzed the table right before I
ran these explains:

This is a where for invalid sessionids. This is the one I actually need
to be able to run.
main=# explain delete from session where timestamp < '2001-08-27
12:26:03-05'::timestamp;
NOTICE:  QUERY PLAN:

Seq Scan on session  (cost=0.00..22815.21 rows=999257 width=6)

EXPLAIN
This is a where for valid sessionids.
main=# explain delete from session where timestamp > '2001-08-27
12:26:03-05'::timestamp;
NOTICE:  QUERY PLAN:

Index Scan using moo on session  (cost=0.00..3653.36 rows=1000 width=6)

EXPLAIN

These are a couple of random wheres. If it's of any consequence, this
timestamp is before all of the timestamps in the table.
main=# explain delete from session where timestamp < '2000-08-27
12:26:03-05'::timestamp;
NOTICE:  QUERY PLAN:

Index Scan using moo on session  (cost=0.00..3653.36 rows=1000 width=6)

EXPLAIN
main=# explain delete from session where timestamp > '2000-08-27
12:26:03-05'::timestamp;
NOTICE:  QUERY PLAN:

Seq Scan on session  (cost=0.00..22815.21 rows=999257 width=6)

EXPLAIN
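
(I suppose I could force the planner's hand for testing with something like:

set enable_seqscan to off;
explain delete from session where timestamp < '2001-08-27 12:26:03-05'::timestamp;

but I'd rather understand why it picks the sequential scan in the first
place.)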


Any help would be appreciated,

Josh Voils







Re: [GENERAL] PL/java?

2001-08-28 Thread Shaun Thomas

On Mon, 27 Aug 2001, Tom Lane wrote:

> The latter is what I'm interested in, since \d doesn't invoke anything
> that I'd consider crash-prone.  Could you submit a debugger backtrace
> from the crash?

I should do that.  But, since it's the back-end that's crashing, I'd need
to find some way of getting a core dump.  So far, it isn't producing any.
I'll have to play with the environment to see why.

> Yes, I know, 6.* was not very careful about defending itself from tuples
> over 8K.  But 7.0 is, which is why I don't think that the tuple length
> is relevant.  I'd like to quit bandying accusations about and instead
> find out *exactly why* Postgres is crashing on you, so that we can
> either fix it or confirm that it's been fixed already.

Yeah, I know.  I was just trying to defend mysql. ^_^  We use both, and so
far, it's been the smaller headache, so...  I'll do what I can to get you
a backtrace.  The really strange thing is, one of our newer databases has
started hanging on vacuums.  That's a 7.1.1, so the 8k thing shouldn't be
any kind of issue in the slightest thanks to the new internal structures.

But something is corrupt enough to break vacuum badly.  That doesn't make
me feel very good.  The worst part is, while it's hung on the vacuum, idle
connections just start piling up until we have to restart the DB.

That's no good.

-- 
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+
| Shaun M. Thomas              INN Database Programmer                 |
| Phone: (309) 743-0812        Fax  : (309) 743-0830                   |
| Email: [EMAIL PROTECTED]     AIM  : trifthen                         |
| Web  : hamster.lee.net  |
| |
| "Most of our lives are about proving something, either to   |
|  ourselves or to someone else." |
|   -- Anonymous  |
+-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+






Re: [GENERAL] Re: createdb -E UNICODE dbname and encoding BIG5 not working

2001-08-28 Thread Karel Zak

On Tue, Aug 28, 2001 at 10:34:52PM +0900, Tatsuo Ishii wrote:
> 
> >  The 'BIG5' is client encoding only. PG can on the fly encode
> > data from some multibyte (unicode, mule_internal) encoding 
> > used for server to big5 used on client, but you can't directly 
> > use big5 at server (DB).
> 
> No. \encoding command sets the client encoding, not DB encoding.
> So "\encoding BIG5" is a valid command in his case.

Oh, sorry. I misread the original question and muddled up the createdb and
\encoding parts. Tatsuo is right, of course.

Karel

-- 
 Karel Zak  <[EMAIL PROTECTED]>
 http://home.zf.jcu.cz/~zakkr/
 
 C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz




Re: [GENERAL] Re: createdb -E UNICODE dbname and encoding BIG5 not working

2001-08-28 Thread Tatsuo Ishii

> On Tue, Aug 28, 2001 at 06:03:55PM +0800, Michael R. Fahey wrote:
> > I compiled 7.1.3 with configure --multibyte=UNICODE and
> > --enable-unicode-conversion (Red Hat Linux 6.1, Kernel 2.2.19). 
> > 
> > Now I'm trying to follow the instructions given by Tatsuo Ishii in his
> > 18 March 2001 post on how to convert big5. But createdb -E UNICODE
> > unicode fails with the message 
> > 
> > ERROR:  Multi-byte support is not enabled. 

You have a multibyte-disabled backend somewhere. Unless your configure
command above is just a cut-and-paste mistake, that is the problem; it
should be:

configure --enable-multibyte=UNICODE

not:

configure --multibyte=UNICODE

> > Also, in psql \encoding BIG5 fails with the message 
> > 
> > BIG5: invalid encoding name.

This might be caused by the same thing as above.

>  The 'BIG5' is client encoding only. PG can on the fly encode
> data from some multibyte (unicode, mule_internal) encoding 
> used for server to big5 used on client, but you can't directly 
> use big5 at server (DB).

No. \encoding command sets the client encoding, not DB encoding.
So "\encoding BIG5" is a valid command in his case.
--
Tatsuo Ishii




Re: [GENERAL] Regarding vacuumdb

2001-08-28 Thread Tom Lane

Bhuvaneswari <[EMAIL PROTECTED]> writes:
> ERROR: mdopen: couldn't open test1: No such file or directory

Looks like you tried to roll back a DROP TABLE.  You can get out of the
immediate problem by doing
touch $PGDATA/base/db1/test1
and then drop the table.

Next, run do not walk to an archive site and get a newer release of
Postgres.  You are at least two major releases behind the times, and
I can't even begin to count the number of bugs fixed since then ...
but this is one of them.

regards, tom lane




[GENERAL] Regarding vacuumdb

2001-08-28 Thread Bhuvaneswari

hi,
I am getting the following error while doing vacuumdb,

ERROR: mdopen: couldn't open test1: No such file or directory
vacuumdb: database vacuum failed on db1.

Here 'db1' is the database and 'test1' is a table. When I display the
structure of the table 'test1', it comes out correctly, but I can't drop
the table. What could be wrong? Any help would be appreciated.
Thanks
shan.



