I suppose you will want to use PERFORM array_append(array, val) instead
of SELECT. That is the plpgsql way to do it. Must have slipped my
mind. PostgreSQL has two different ways of doing the same thing
depending on where you are.
Using regular SQL, even if you don't want to return any result, you do a
SELECT; inside plpgsql you use PERFORM instead.
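A minimal plpgsql sketch of the difference (function name and values hypothetical; dollar quoting assumes 8.0 or later):

```sql
CREATE OR REPLACE FUNCTION demo() RETURNS void LANGUAGE plpgsql AS $$
BEGIN
    -- A bare SELECT with no INTO raises an error inside plpgsql;
    -- PERFORM evaluates the expression and discards the result:
    PERFORM array_append(ARRAY[1, 2], 3);
END;
$$;
```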
Sim,
Thanks for your help!
I changed array_append(array,val) into select array_append(array,val);
when I ran the function it says:
SampleDB=# select getmatch(array[2]);
ERROR:  SELECT query has no destination for result data
HINT:  If you want to discard the results, use PERFORM instead.
tri
A couple months ago, I posted a function that allows a user to alter views
with dependencies.
I recently discovered another quick way to do it and thought I would share
it with the community.
1) Create a new view
2) Modify the dependent views to utilize the new view
3) Drop the old view
4) Rename the new view to the old view's name
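With hypothetical names, the four steps above might look like:

```sql
CREATE VIEW my_view_v2 AS
    SELECT id, name FROM base_table;          -- 1) create a new view
CREATE OR REPLACE VIEW dependent_view AS
    SELECT id FROM my_view_v2;                -- 2) repoint the dependents
DROP VIEW my_view;                            -- 3) drop the old view
ALTER TABLE my_view_v2 RENAME TO my_view;     -- 4) rename (views of this era rename via ALTER TABLE)
```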
Sim,
Yeah, after inserting an exit when not found it is not looping indefinitely, but the function is supposed to return an array as per the declaration. I am calling the function with an array:
select * from getmatch(array[2]);
and I am selecting the records into the cursor which matches t
I would guess that the cursor is not finding any records.
Use RAISE NOTICE in the code to see what values it is finding;
see section 35.9 in the help for how to do this.
Put a RAISE NOTICE statement on each line that assigns a relevant value and
see what it is.
You can also do a RAISE NOTICE on the array to see
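A sketch of that debugging style, using RAISE NOTICE inside a loop (function body hypothetical; only the table and call from the thread are assumed):

```sql
CREATE OR REPLACE FUNCTION getmatch(ids int[]) RETURNS int[] LANGUAGE plpgsql AS $$
DECLARE
    result int[] := '{}';
    r RECORD;
BEGIN
    FOR r IN SELECT sys_id FROM subsystems WHERE sys_id = ANY (ids) LOOP
        RAISE NOTICE 'found sys_id = %', r.sys_id;  -- shown on the client as a NOTICE
        result := array_append(result, r.sys_id);
    END LOOP;
    RETURN result;
END;
$$;
```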
Hello,
I have following problem:
I have table MY_TABLE with following records:
NAME
---
ccc
CCC
AAA
aaa
bbb
BBB
When I use a default select that sorts all data by NAME:
SELECT * FROM MY_TABLE ORDER BY NAME;
result is following:
NAME
---
AAA
Sim,
I have given a 'when not found raise exception'.
Yeah, it says no matching records,
but the records that match 2 and 3 are there:
SampleDB=# select sys_id from subsystems where sys_id = 2 or sys_id = 3;
 sys_id
--------
      2
      3
      2
      3
(4 rows)
there is a problem with array indexing
On Tue, 10 May 2005 07:41 pm, Julian Legeny wrote:
> Hello,
>
>I have following problem:
> But I would like to sort all data as following:
>
>NAME
> ---
>AAA
>aaa
>BBB
>bbb
>CCC
>ccc
>
>
> How can I write sql command (or set up ORDER BY options) for sele
If you order by upper(name) then it will mix them all together, so you won't
have capitals before lowercase, but it will put all the lowercase a's before
the uppercase b's.
"Julian Legeny" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Hello,
>
>I have following problem:
>
> I have t
SELECT * FROM MY_TABLE
ORDER BY LOWER(NAME);
Thanks
Dinesh Pandey
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Julian Legeny
Sent: Tuesday, May 10, 2005 3:12 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] ORDER BY options (how
On Tue, 2005-05-10 at 11:41 +0200, Julian Legeny wrote:
> ...
> But I would like to sort all data as following:
>
>NAME
> ---
>AAA
>aaa
>BBB
>bbb
>CCC
>ccc
> How can I write sql command (or set up ORDER BY options) for selecting that?
how about ORDER BY lower
Hello,
that's what I was looking for.
Thanks to all for advices,
with best regards,
Julian Legeny
Tuesday, May 10, 2005, 12:14:38 PM, you wrote:
RS> SELECT * FROM MY_TABLE ORDER BY lower(NAME), NAME
RS> The second NAME is to ensure that AAA comes before aaa, otherwise the order
Bart Grantham wrote:
Hello again. I had a problem a few weeks ago with using IN ( some_array
) having really rough performance. Special thanks to Ron Mayer for the
suggestion of using int_array_enum(some_array) to join against. I had
to upgrade to PG8 but that technique works really well. No
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match
Arthur Hoogervorst schrieb:
Hi,
The company I work for actually uses the Zeos lib/Postgres extensively
to track the shipping and sales side for almost 3 years.
We're still running on a 7.2/7.4 Postgres database, because I haven't
been convinced yet to either update or upgrade to 8.x.x. I'm curious
Hi,
in the thread "Adventures in Quest for GUI RAD" some Delphi developers
answered.
We actually use Delphi to access PostgreSQL too, but with some problems:
The older versions of the microOLAP PostgreSQL DAC are absolutely trash. I
haven't tried the newer ones.
ODBC / Delphi BDE is ripped out b
Hi,
I'm having problems exporting my local pgdb to my server with data.
I do a backup and every time I get stuck on functions or some $libdir or
function pg_logdir_ls() etc.
I ignore those function definitions but then it crashes on some create
table statement.
I believe there is a easy way to g
Vikas wrote:
Hi,
Im having problems in exporting my local pgdb to my server with data.
I do a backup and everytime i get stuck in functions or some $libdir or
function pg_logdir_ls() etc.
pg_dump shouldn't ever "get stuck" at any point.
I ignore those function definations but then it crashes on so
Joshua D. Drake wrote:
My company has been looking for a good database modelling tool for
postgres and have yet to find something that completely satisfies our
needs. We are currently using a product called DBWrench which is
pretty good and has all the features we are looking for but is full of
MS SQL Server has what used to be called "Linked Servers" which could
use ODBC to connect directly to other data sources. Your tool talking
to MS SQL Server would need to use fully qualified table names,
including the name of the "Linked Server" but it might work.
On 5/9/05, Guy Rouillier <[EMAIL
> For example, this tool doesn't realize that in postgres you can't add a
> column and set not null in one ALTER TABLE statement. So we are forced to
> manually comb through the SQL scripts it creates and fix the buggy
> statements.
AFAIK, starting from version 8 it can be combined into a single al
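If that is the version-8 behavior being referred to, the difference would look roughly like this (table and column names hypothetical):

```sql
-- Pre-8.0: two statements, and the table must contain no NULLs in c
ALTER TABLE t ADD COLUMN c integer;
ALTER TABLE t ALTER COLUMN c SET NOT NULL;

-- 8.0 and later: a default lets the column be added NOT NULL in one statement
ALTER TABLE t ADD COLUMN c integer NOT NULL DEFAULT 0;
```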
On Mon, 2005-05-09 at 23:35, Jerome Macaranas wrote:
> i didnt set fsm... the config i paste is all that i put into place...
OK, that's likely a part of your problem.
Did you run the vacuumdb -af I recommended? Did it help? If so, you
likely need to run plain (i.e. lazy) vacuums more often, a
Mark,
ERwin is the way to go... Mark Borins <[EMAIL PROTECTED]> wrote:
Postgres Newsgroup,
My company has been looking for a good database modelling tool for postgres and have yet to find something that completely satisfies our needs. We are currently using a product called DBWrench which is p
On Tue, 2005-05-10 at 01:51 -0400, [EMAIL PROTECTED] wrote:
> Wolfgang, thanks! I am very persuaded by your arguments regarding
> Python. What you have written makes me look at Python in a different
> light.
>
> I happened to find a download of Python2.2 which I installed at work
> but have not
> I remember the Borland of old that offered extraordinarily powerful
> tools at a reasonable price. Unfortunately, they are not the same
> company they used to be.
The freepascal + lazarus + ported ZeosDB could do the trick...
Vlad
Greg Stark <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>> I think that efficient implementation of this would require explicitly
>> storing the hash code for each index entry,
> It seems that means doubling the size of the hash index. That's a pretty big
> i/o to cpu tradeof
I cannot do a /usr/bin/pg_dump with my new Postgres 7.4. When I try,
here is what I get:
---
[ ~]$ /usr/bin/pg_dump dcf_20050404 >&
/home/japsey/DCF_RAID_01/source/sql/backup/bu/dcf_200
Hello,
I often test SQL in separate text files, loading them into the server
using psql -d -f from the command line, wrapping the SQL in a BEGIN;
... ROLLBACK; transaction until I'm sure it does what I want. Often the
SQL script is just a small part of a larger chunk of work I'm doing on
the de
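A sketch of that workflow (file, table, and database names hypothetical):

```sql
-- test.sql: wrapped in a transaction that is always rolled back
BEGIN;

CREATE TABLE scratch (id integer, note text);
INSERT INTO scratch VALUES (1, 'trial run');
SELECT * FROM scratch;      -- inspect the effect before deciding to keep it

ROLLBACK;                   -- nothing is committed
-- load with: psql -d mydb -f test.sql
```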
This may be my second posting but I think I've done it correctly this time.
At this point, I am unable to do a pg_dump using our new Red Hat
Enterprise Linux AS 4 version of Postgres, which is version 7.4.
Here's what I get when I try to do a pg_dump of our database:
---
"Jimmie H. Apsey" <[EMAIL PROTECTED]> writes:
> This may be my second posting but I think I've done it correctly this time.
> At this point, I am unable to do a pg_dump using our new Red Hat
> Enterprise Linux AS 4 version of Postgres, which is version 7.4.
> Here's what I get when I try to do a pg
On Tue, May 10, 2005 at 12:10:57AM -0400, Tom Lane wrote:
> be responsive to your search.) (This also brings up the thought that
> it might be interesting to support hash buckets smaller than a page ...
> but I don't know how to make that work in an adaptive fashion.)
IIRC, other databases that s
On Tue, May 10, 2005 at 10:14:11AM +1000, Neil Conway wrote:
> Jim C. Nasby wrote:
> >> No, hash joins and hash indexes are unrelated.
> >I know they are now, but does that have to be the case?
>
> I mean, the algorithms are fundamentally unrelated. They share a bit of
> code such as the hash fun
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> What's the challenge to making it adaptive, coming up with an algorithm
> that gives you the optimal bucket size (which I would think there's
> research on...) or allowing the index to accommodate different bucket
> sizes existing in the index at once?
On Mon, May 09, 2005 at 09:07:40PM -0400, Christopher Murtagh wrote:
> On Mon, 2005-05-09 at 17:01 -0400, Douglas McNaught wrote:
> > Why not have a client connection LISTENing and doing the
> > synchronization, and have the trigger use NOTIFY?
> >
> > Or, you could have the trigger write to a tab
Hi,
I'm having trouble with the psqlodbc driver on Solaris 10.
I'm using iODBC 3.51.2 and psqlodbc 08.00.0101.
When I try to start an application which makes some ODBC calls, I get the
following error:
fatal: relocation error: file /usr/local/lib/psqlodbc.so: symbol utf8_to_ucs2_lf: ref
Not me.
The problem with Lazarus is that they don't support many third-party
components. We use TeeChart and Developer Express heavily in our software,
so Lazarus isn't a choice for us.
Daniel
Zlatko Matic schrieb:
What about Lazarus? Has anybody tried working with Lazarus?
- Original Message -
What about Lazarus? Has anybody tried working with Lazarus?
- Original Message -
From: "Daniel Schuchardt" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, May 10, 2005 2:27 PM
Subject: [GENERAL] Delphi - Developers start develop Access components for
Postgres?
Hy,
in thread "Adventures in Quest
Michael Glaesemann wrote:
> This works well, but I think I'll have to change all of the paths
> when I move the group of scripts to the production server to load
> them. What I'd like to do is be able to use paths relative to the
> file that contains the \i commands. This doesn't seem to work when
"Jimmie H. Apsey" <[EMAIL PROTECTED]> writes:
> At this point, I am unable to do a pg_dump using our new Red Hat
> Enterprise Linux AS 4 version of Postgres which is version 7.4.
> Here's what I get when I try to do a pg_dump of our database:
> [ ~]$ /usr/bin/pg_dump dcf_20050404 >& /~/dcf_200504
On Tue, May 10, 2005 at 11:49:50AM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > What's the challenge to making it adaptive, coming up with an algorithm
> > that gives you the optimal bucket size (which I would think there's
> > research on...) or allowing the index to ac
Tom Lane <[EMAIL PROTECTED]> writes:
> > What if the hash index stored *only* the hash code? That could be useful for
> > indexing large datatypes that would otherwise create large indexes.
>
> Hmm, that could be a thought.
Hm, if you go this route of having hash indexes store tuples ordered by
On 5/9/05, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
> On Mon, May 09, 2005 at 02:05:14AM +1000, Brendan Jurd wrote:
> > CREATE TABLE foo (
> > foo int NOT NULL REFERENCES bar INDEX
> > );
> >
> > ... would be marvellous
>
> I agree that it would be handy. Another possibility is throwing a NOTICE
>
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> Well, in a hash-join right now you normally end up feeding at least one
> side of the join with a seqscan. Wouldn't it speed things up
> considerably if you could look up hashes in the hash index instead?
That's called a "nestloop with inner index scan"
On Tuesday 10 May 2005 17:24, Michael Glaesemann wrote:
> This works well, but I think I'll have to change all of the paths
> when I move the group of scripts to the production server to load
> them. What I'd like to do is be able to use paths relative to the
> file that contains the \i commands. T
The issues mentioned with Zeos have been fixed; the 6.x branch works very well
now and is much faster than the 5.x versions.
I use cursors all the time in my functions and return a refcursor which I then
fetch.
The latest version of PostgresDAC from microOLAP also works well; I tried their
1.x
Hi,
I'm looking into creating a hosted application with Postgres as the
SQL server. I would like to get some ideas and oppinions about the
different ways to separate the different clients, using postgres.
The options I had in mind:
1) Create a different database per client. How much overhead wil
Hi, how can I know the values generated by a column of type serial?
I mean, i have the following table
productos
(
id serial,
desc varchar(50)
)
select * from productos;
+----+--------+
| id | desc   |
+----+--------+
|  1 | ecard1 |
|  2 | ecard2 |
|  3 | ecard3
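Assuming the default serial sequence name, the value just generated in the current session can be read back with currval (the column is quoted here because desc is a reserved word):

```sql
INSERT INTO productos ("desc") VALUES ('ecard4');
SELECT currval('productos_id_seq');  -- the id assigned by this session's insert
```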
Greg Stark <[EMAIL PROTECTED]> writes:
>>> What if the hash index stored *only* the hash code? That could be useful for
>>> indexing large datatypes that would otherwise create large indexes.
>>
>> Hmm, that could be a thought.
> Hm, if you go this route of having hash indexes store tuples ordere
In order to address several security issues identified over the past two
weeks, as well as one "low probability" race condition, we are releasing
new versions of PostgreSQL going as far back as the 7.2.x branch.
Please note that the security issues were those already reported by Tom
Lane, as well as a
Tony Caduto schrieb:
The issues mentioned with Zeos have been fixed, the 6.x branch works
very well now and is much faster
than the 5.x versions.
I use cursors all the time in my functions and return a refcursor
which I then fetch.
??
How? On EOF of a TDataSet? It's clear that you can fetch manua
On Tue, 2005-05-10 at 11:11 -0500, Jim C. Nasby wrote:
> Well, LISTEN and NOTIFY are built into PostgreSQL
> (http://www.postgresql.org/docs/8.0/interactive/sql-notify.html). If the
> processes that you're trying to notify of the changes are connected to
> the database then this might be the easies
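A minimal sketch of that approach (channel name hypothetical):

```sql
-- In the listening client's session:
LISTEN table_changed;

-- In a trigger function, or any other session, after the change:
NOTIFY table_changed;
-- Each listening backend is then informed that 'table_changed' was signalled.
```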
just obtain the next value from the sequence first, then do the insert:
CREATE OR REPLACE FUNCTION insert_row(text) RETURNS text LANGUAGE plpgsql
AS $$
DECLARE
    vdesc ALIAS FOR $1;
    new_id INTEGER;
BEGIN
    SELECT nextval('sequence_name_here') INTO new_id;
    INSERT INTO productos (id, "desc") VALUES (new_id, vdesc);
    RETURN new_id::text;
END;
$$;
On Tue, 2005-05-10 at 15:02, Christopher Murtagh wrote:
> On Tue, 2005-05-10 at 11:11 -0500, Jim C. Nasby wrote:
> > Well, LISTEN and NOTIFY are built into PostgreSQL
> > (http://www.postgresql.org/docs/8.0/interactive/sql-notify.html). If the
> > processes that you're trying to notify of the chang
Christopher Murtagh <[EMAIL PROTECTED]> writes:
> I was given an example of how to spawn a forked process with plperlu,
> and it looks pretty simple and straightforward and exactly what I want:
> CREATE or REPLACE function somefunc() returns void as $$
> $SIG{CHLD}='IGNORE';
... let's see, you a
On Tue, May 10, 2005 at 04:02:59PM -0400, Christopher Murtagh wrote:
> On Tue, 2005-05-10 at 11:11 -0500, Jim C. Nasby wrote:
> > Well, LISTEN and NOTIFY are built into PostgreSQL
> > (http://www.postgresql.org/docs/8.0/interactive/sql-notify.html).
> > If the processes that you're trying to notify
On Tue, 2005-05-10 at 13:50 -0700, David Fetter wrote:
> Why do you think Slony won't work for this? One way it could do it is
> to have an ON INSERT trigger that populates one or more tables with
> the result of the XSLT, which table(s) Slony replicates to the other
> servers.
Because the nodes
Quoting "Jim C. Nasby" <[EMAIL PROTECTED]>:
> Well, in a hash-join right now you normally end up feeding at least
> one
> side of the join with a seqscan. Wouldn't it speed things up
> considerably if you could look up hashes in the hash index instead?
You might want to google on "grace hash" and
On Tue, 2005-05-10 at 16:17 -0400, Tom Lane wrote:
> ... let's see, you already broke the backend there --- unless its normal
> setting of SIGCHLD is IGNORE, in which case munging it is unnecessary
> anyway ...
Here's my (probably all garbled) explanation: Essentially what that code
is a self-daem
unless we are thinking of something different I do this:
sql.clear;
sql.add('select myfunction();');
sql.add('fetch all from return_cursor;');
open;
>
> ??
> How? On EOF of a TDataSet? It's clear that you can fetch manually. How do
> you do that?
>
--
Tony Caduto
AM Software Design
Home of PG Lightni
On Tuesday 10 May 2005 14:34, vladimir wrote:
> > I remember the Borland of old that offered extraordinarily powerful
> > tools at a reasonable price. Unfortunately, they are not the same
> > company they used to be.
>
> The freepascal + lazarus + ported ZeosDB could do the trick...
Yes, it could
On Tue, May 10, 2005 at 05:31:56PM -0400, Christopher Murtagh wrote:
> > I'm not sure what happens when you do "exit" here, but I'll lay odds
> > against it being exactly the right things.
>
> It ends the daemonized process, kinda like a wrapper suicide. :-)
I think you have a problem here. Post
If the original paper was published in 1984, then it's been more than 20
years. Any potential patents would already have expired, no?
-- Mark Lewis
On Tue, 2005-05-10 at 14:35, Mischa Sandberg wrote:
> Quoting "Jim C. Nasby" <[EMAIL PROTECTED]>:
>
> > Well, in a hash-join right now you normally
Mischa Sandberg <[EMAIL PROTECTED]> writes:
> The PG hash join is the simplest possible: build a hash table in memory,
> and match an input stream against it.
> *Hybrid hash* is where you spill the hash to disk in a well-designed
> way. Instead of thinking of it as building a hash table in memory,
Hiya-
I need to do something like this:
SELECT t1.symbol AS app_name, t2.outside_key AS app_id
FROM t2 LEFT JOIN t1 ON t1.t2_id=t2.id AS my_join
LEFT JOIN rows of arbitrary (app_name, app_id) ON
my_join.app_name=rows.app_name AND my_join.app_id=rows.app_id
The arbitrary app_name, app_id come fro
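One common workaround is to wrap the join in a subquery so it can be aliased, and join the arbitrary pairs as an inline rows source (names and values hypothetical; VALUES in FROM requires a newer server, otherwise a UNION ALL of single-row SELECTs does the same job):

```sql
SELECT mj.app_name, mj.app_id, v.app_name IS NOT NULL AS matched
FROM (SELECT t1.symbol AS app_name, t2.outside_key AS app_id
      FROM t2 LEFT JOIN t1 ON t1.t2_id = t2.id) AS mj
LEFT JOIN (VALUES ('foo', 1), ('bar', 2)) AS v (app_name, app_id)
       ON mj.app_name = v.app_name AND mj.app_id = v.app_id;
```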
On May 11, 2005, at 3:58, Leif B. Kristensen wrote:
On Tuesday 10 May 2005 17:24, Michael Glaesemann wrote:
This works well, but I think I'll have to change all of the paths
when I move the group of scripts to the production server to load
them. What I'd like to do is be able to use paths relative
Quoting Tom Lane <[EMAIL PROTECTED]>:
> Mischa Sandberg <[EMAIL PROTECTED]> writes:
> > The PG hash join is the simplest possible: build a hash table in
> memory, and match an input stream against it.
>
> [ raised eyebrow... ] Apparently you've not read the code. It's
> been hybrid hashjoin si
Tony Caduto schrieb:
unless we are thinking of something different I do this:
sql.clear;
sql.add('select myfunction();');
sql.add('fetch all from return_cursor;');
open;
>
> ??
> How? On EOF of a TDataSet? It's clear that you can fetch manually.
> How do you do that?
>
??? That's not really cursor f
On 05/10/05 06:17 PM CDT, Peter Fein <[EMAIL PROTECTED]> said:
> Hiya-
>
> I need to do something like this:
>
> SELECT t1.symbol AS app_name, t2.outside_key AS app_id
> FROM t2 LEFT JOIN t1 ON t1.t2_id=t2.id AS my_join
> LEFT JOIN rows of arbitrary (app_name, app_id) ON
> my_join.app_name=rows.a
Tom Lane <[EMAIL PROTECTED]> writes:
> No, not at all, because searching such an index will require a tree
> descent, thus negating the one true advantage of hash indexes.
The hash index still has to do a tree descent, it just has a larger branching
factor than the btree index.
btree indexes
Quoting Mark Lewis <[EMAIL PROTECTED]>:
> If the original paper was published in 1984, then it's been more than
> 20 years. Any potential patents would already have expired, no?
Don't know, but the idea is pervasive among different vendors ...
perhaps that's a clue.
And having now read beyond
Marc G. Fournier wrote:
Please note that the security issues were those already reported by Tom
Lane, as well as a manual fix for them. These releases are mainly to
ensure that those installing and/or upgrading existing installations
have those fixes automatically.
Note that if you're upgrading
I have two timestamps -- start and end. I use age(end,start)
to give me a "session" time and then I subtotal those. If
the total times are less than 24 hours I get a time in the
format of XX:xx:xx -- this is great. But if the time is over
24 hours I get 1 day XX:xx:xx, which really raises hell with
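One way around the "1 day" formatting is to work in seconds via EXTRACT(EPOCH ...) and compute the total hours yourself (table and column names hypothetical):

```sql
SELECT SUM(end_ts - start_ts) AS total_interval,
       floor(EXTRACT(EPOCH FROM SUM(end_ts - start_ts)) / 3600) AS total_hours,
       floor(EXTRACT(EPOCH FROM SUM(end_ts - start_ts)) / 60)::int % 60 AS minutes
FROM sessions;
```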
Is there a TODO anywhere in this discussion? If so, please let me know.
---
Mischa Sandberg wrote:
> Quoting Mark Lewis <[EMAIL PROTECTED]>:
>
> > If the original paper was published in 1984, then it's been more than
> > 2
Quoting Bruce Momjian :
> Mischa Sandberg wrote:
> > Quoting Bruce Momjian :
> > > Is there a TODO anywhere in this discussion? If so, please let me
> > > know.
> >
> > Umm... I don't think so. I'm not clear on what TODO means yet. 'Up for
> > consideration'? If a "TODO" means commit
Quoting Bruce Momjian :
>
> Is there a TODO anywhere in this discussion? If so, please let me
> know.
>
Umm... I don't think so. I'm not clear on what TODO means yet. 'Up for
consideration'? If a "TODO" means committing to do, I would prefer to
follow up on a remote-schema (federated server)
Mischa Sandberg wrote:
> Quoting Bruce Momjian :
>
> >
> > Is there a TODO anywhere in this discussion? If so, please let me
> > know.
> >
>
> Umm... I don't think so. I'm not clear on what TODO means yet. 'Up for
> consideration'? If a "TODO" means committing to do, I would prefer to
> follo
Bruce Momjian wrote:
Is there a TODO anywhere in this discussion? If so, please let me know.
There are a couple:
- consider changing hash indexes to keep the entries in a hash bucket
sorted, to allow a binary search rather than a linear scan
- consider changing hash indexes to store each key's h
Greg Stark <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>> No, not at all, because searching such an index will require a tree
>> descent, thus negating the one true advantage of hash indexes.
> The hash index still has to do a tree descent, it just has a larger branching
>
On Mon, May 09, 2005 at 15:48:44 -0400,
Hrishikesh Deshmukh <[EMAIL PROTECTED]> wrote:
> Hi All,
>
> How can one use a table created for saving the results for a query be
> used in WHERE for subsequent query!!!
>
> Step 1) create table temp as select gene from dataTable1 intersect
> select gene
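The saved table can then feed a subquery in the WHERE clause of the next query (the saved-table name here is hypothetical; dataTable1/dataTable2 are from the post):

```sql
-- Step 1: save the intersection
CREATE TABLE temp_genes AS
    SELECT gene FROM dataTable1
    INTERSECT
    SELECT gene FROM dataTable2;

-- Step 2: use it to filter a subsequent query
SELECT *
FROM dataTable3
WHERE gene IN (SELECT gene FROM temp_genes);
```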
No.1 Python does not scale well.
map x=new map();
vs
self.d={};
Python is 10x slower than C++ or Java. If you do _any_ kind of OO
programming, python is _very_ slow.
I have implemented a great little web content engine based on Python
and XML using PyXML, and it has _serious_ problems with larg
It depends.
We do all of the above depending on the situation. Some programs we
want to have cross-client reporting and querying, so we use just
application-level security with a client_id field. Some programs we
just want to be able to dump/restore the whole DB in one go, so we just
different sc
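The application-level variant with a client_id column might be sketched as (names hypothetical):

```sql
CREATE TABLE orders (
    client_id integer NOT NULL,      -- every row is tagged with its tenant
    order_id  serial,
    total     numeric(10,2)
);

-- The application always filters by tenant:
SELECT order_id, total FROM orders WHERE client_id = 42;
```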
Richard Huxton wrote:
Vikas wrote:
Hi,
I'm having problems exporting my local pgdb to my server with data.
I do a backup and every time I get stuck on functions or some $libdir
or function pg_logdir_ls() etc.
pg_dump shouldn't ever "get stuck" at any point.
It was not the pg_dump which got stuck
What is the command you have used while dumping the objects?
Thanks
Dinesh Pandey
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Vikas
Sent: Wednesday, May 11, 2005 10:00 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] databa
On Tue, May 10, 2005 at 18:32:39 -0700,
Bob Lee <[EMAIL PROTECTED]> wrote:
> I have two timestamps -- start and end. I use age(end,start)
> to give me a "session" time and then I subtotal those. If
> the total times are less than 24 hours I get a time in the
> format of XX:xx:xx -- this is great.
I'm doing it through the GUI...
pg_dump.exe -i -h 192.168.1.31 -p 5432 -U postgres -F p -v -f
"c:\del.sql" -n public "DBNAME"
Dinesh Pandey wrote:
What is the command you have used while dumping the objects?
Thanks
Dinesh Pandey
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[E
Neil Conway <[EMAIL PROTECTED]> writes:
> Note that if you're upgrading within a release series (e.g. 8.0.x to
> 8.0.3) without a dump and reload, you will _not_ get the necessary
> system catalog changes automatically. Tom's earlier mail describes the
> procedure needed to correct the system ca
Hi,
I'm using the pg_dump and pg_restore client applications to implement
our database backup/restore strategies. In order to be able to
automate the backup process (run it as a cron job), I'm using the
.pgpass file for automating the password input. This is working for
pg_dump.
My res
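For reference, each .pgpass line has this colon-separated shape (values hypothetical; the file must be readable only by its owner, e.g. chmod 0600):

```
# hostname:port:database:username:password
localhost:5432:mydb:backup_user:s3cret
*:*:reportdb:report_user:another-secret
```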
http://www.postgresql.org/docs/8.0/static/app-pgdump.html
Thanks
Dinesh Pandey
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Vikas
Sent: Wednesday, May 11, 2005 10:57 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] database export in pgs