[SQL] help on creating table

2000-10-23 Thread pgsql-sql

Hi All,

I'm planning to have data that forms a tree structure when fetched,
e.g.

NODE1
   + --- NODE1_1
   + --- NODE1_2
   |       + --- NODE1_2_1
   + --- NODE1_3

Is this possible? How?

I would really appreciate any help.

Sherwin
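Not from the thread, but for reference: a common way to model this is an
adjacency list, where each row stores the id of its parent. A sketch --
the table and column names below are made up:

```sql
-- Each node points at its parent; the root has a NULL parent_id.
CREATE TABLE node (
    id        SERIAL PRIMARY KEY,
    parent_id INTEGER REFERENCES node (id),
    name      TEXT NOT NULL
);

-- Fetch the direct children of NODE1:
SELECT c.name
FROM node c, node p
WHERE c.parent_id = p.id
  AND p.name = 'NODE1';
```

Fetching the whole tree then takes one query per level (or recursion on
the client side), since the PostgreSQL of this era has no recursive
query support.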




Re: [SQL] Large Objects

2000-10-23 Thread pgsql-sql


FROM test.pl of DBD-Pg-0.93.tar ...

# test large objects

# create large object from binary file

my ($ascii, $pgin);
foreach $ascii (0..255) {
$pgin .= chr($ascii);
};

my $PGIN = '/tmp/pgin';
open(PGIN, ">$PGIN") or die "can not open $PGIN";
print PGIN $pgin;
close PGIN;

# begin transaction
$dbh->{AutoCommit} = 0;

my $lobjId;
( $lobjId = $dbh->func($PGIN, 'lo_import') )
and print "\$dbh->func(lo_import) .. ok\n"
or  print "\$dbh->func(lo_import) .. not ok\n";

# end transaction
$dbh->{AutoCommit} = 1;

unlink $PGIN;


# blob_read

# begin transaction
$dbh->{AutoCommit} = 0;

$sth = $dbh->prepare( "" ) or die $DBI::errstr;

my $blob;
( $blob = $sth->blob_read($lobjId, 0, 0) )
and print "\$sth->blob_read  ok\n"
or  print "\$sth->blob_read  not ok\n";

$sth->finish or die $DBI::errstr;

# end transaction
$dbh->{AutoCommit} = 1;


# read large object using lo-functions

# begin transaction
$dbh->{AutoCommit} = 0;

my $lobj_fd; # may be 0
( defined($lobj_fd = $dbh->func($lobjId, $dbh->{pg_INV_READ}, 'lo_open')) )
and print "\$dbh->func(lo_open)  ok\n"
or  print "\$dbh->func(lo_open)  not ok\n";

( 0 == $dbh->func($lobj_fd, 0, 0, 'lo_lseek') )
and print "\$dbh->func(lo_lseek) ... ok\n"
or  print "\$dbh->func(lo_lseek) ... not ok\n";

my $buf = '';
( 256 == $dbh->func($lobj_fd, $buf, 256, 'lo_read') )
and print "\$dbh->func(lo_read)  ok\n"
or  print "\$dbh->func(lo_read)  not ok\n";

( 256 == $dbh->func($lobj_fd, 'lo_tell') )
and print "\$dbh->func(lo_tell)  ok\n"
or  print "\$dbh->func(lo_tell)  not ok\n";

( $dbh->func($lobj_fd, 'lo_close') )
and print "\$dbh->func(lo_close) ... ok\n"
or  print "\$dbh->func(lo_close) ... not ok\n";

( $dbh->func($lobjId, 'lo_unlink') )
and print "\$dbh->func(lo_unlink) .. ok\n"
or  print "\$dbh->func(lo_unlink) .. not ok\n";

# end transaction
$dbh->{AutoCommit} = 1;


# compare large objects

( $pgin cmp $buf or $pgin cmp $blob )
and print "compare blobs .. not ok\n"
or  print "compare blobs .. ok\n";

#


[EMAIL PROTECTED] writes:
>Hi,
>
>Could someone please provide a demo of creating the type "Lo".
>
>Regards,
>Craig May
>
>Enth Dimension
>http://www.enthdimension.com.au





Re(2): [SQL] Large Object dump ?

2000-11-01 Thread pgsql-sql

You can try the script I made for exporting the large objects of all my
Pg databases. Ideas were borrowed from pg_dumplo-0.0.5.
Make sure that you have "Perl5 extension for PostgreSQL - Pg.pm"
installed.

sherwin

#!/usr/bin/perl -w
#
# Export large objects of all Pg database 
#  - Sherwin T. Daganato ([EMAIL PROTECTED])
#  - October 8, 2000  
#

use strict;
use Pg;

my $space = shift || die "Usage: $0 outputdir";

# make sure the directory doesn't end in '/'
$space =~ s/\/$//;

my $conn_all = Pg::connectdb("dbname=template1");
die $conn_all->errorMessage unless PGRES_CONNECTION_OK eq $conn_all->status;

# find all databases
my $sql = "SELECT datname FROM pg_database ORDER BY datname";
my $res_all = $conn_all->exec($sql);
die $conn_all->errorMessage unless PGRES_TUPLES_OK eq $res_all->resultStatus;

my $counter = 0;
while (my ($database) = $res_all->fetchrow) {
  my $conn_db = Pg::connectdb("dbname=$database");
  die $conn_db->errorMessage unless PGRES_CONNECTION_OK eq
$conn_db->status;

  # find any candidate tables with columns of type oid
  $sql = qq/SELECT c.relname, a.attname 
FROM pg_class c, pg_attribute a, pg_type t 
WHERE a.attnum > 0 
AND a.attrelid = c.oid 
AND a.atttypid = t.oid 
AND t.typname = 'oid' 
AND c.relname NOT LIKE 'pg_%';
   /;

  my $res_db = $conn_db->exec($sql);
  die $conn_db->errorMessage unless PGRES_TUPLES_OK eq
$res_db->resultStatus;

  my $path;
  local (*F);
  while (my ($table, $fld) = $res_db->fetchrow) {

    # find large object ids referenced by this column
    $sql = sprintf("SELECT x.%s FROM %s x WHERE EXISTS " .
                   "(SELECT c.oid FROM pg_class c WHERE c.relkind = 'l' AND x.%s = c.oid)",
                   $fld, $table, $fld);

    # alternative form of the same query:
    # $sql = sprintf("SELECT x.%s FROM %s x, pg_class c WHERE x.%s = c.oid AND c.relkind = 'l'",
    #                $fld, $table, $fld);

    my $res_tbl = $conn_db->exec($sql);
    die $conn_db->errorMessage unless PGRES_TUPLES_OK eq $res_tbl->resultStatus;

    my $tuples;
    next unless (($tuples = $res_tbl->ntuples) > 0);
    $counter += $tuples;

    $path = sprintf("%s/%s", $space, $database);

    if (-d $path) {

      # create the file if it doesn't exist, append otherwise
      open(F, ">>$path/lo_dump.index") || die "\n $0 Cannot open $! \n";
    } else {

      # create dir for database
      mkdir($path, 0755) || die "\n Can't create $path: $! \n";

      # open file for writing, overwriting any existing file
      open(F, ">$path/lo_dump.index") || die "\n $0 Cannot open $! \n";

      # temporarily turn off warnings;
      # some connection attributes might be undef
      local $^W = 0;

      print F "#\n# This is the PostgreSQL large object dump index\n#\n";
      printf F "#\tDate: %s\n", scalar(localtime);
      printf F "#\tHost: %s\n", $conn_db->host;
      printf F "#\tDatabase: %s\n", $conn_db->db;
      printf F "#\tUser: %s\n", $conn_db->user;
      printf F "#\n# oid\ttable\tattribut\tinfile\n#\n";
    }

    $path = sprintf("%s/%s", $path, $table);

    # create dir for table
    mkdir($path, 0755) || die "\n Can't create $path: $! \n";

    $path = sprintf("%s/%s", $path, $fld);

    # create dir for field
    mkdir($path, 0755) || die "\n Can't create $path: $! \n";

    printf "dump %s.%s (%d large obj)\n", $table, $fld, $tuples;

    while (my ($lobjOid) = $res_tbl->fetchrow) {

      $path = sprintf("%s/%s/%s/%s/%s",
                      $space, $database, $table, $fld, $lobjOid);

      my $res_lobj = $conn_db->exec("BEGIN");
      die $conn_db->errorMessage unless PGRES_COMMAND_OK eq $res_lobj->resultStatus;

      # export large object
      if (1 == $conn_db->lo_export($lobjOid, $path)) {
        printf F "%s\t%s\t%s\t%s/%s/%s/%s\n",
          $lobjOid, $table, $fld, $database, $table, $fld, $lobjOid;
      } else {
        printf STDERR "%s: %s\n", $conn_db->errorMessage, $0;
      }

      $res_lobj = $conn_db->exec("END");
      die $conn_db->errorMessage unless PGRES_COMMAND_OK eq $res_lobj->resultStatus;
    }
    close(F);
  }
  undef $conn_db;
}

printf "\nExported %d large objects.\n\n", $counter;
undef $conn_all;


[EMAIL PROTECTED] writes:
>
>
>On Mon, 30 Oct 2000, [EMAIL PROTECTED] wrote:
>
>>Large objects are not dumped. It should be 
>>in the documentation for large objects.
>>
>>You need to write a script which writes them to
>>disk and then imports them back in after you have
>>installed your dbs.
>>
>>
>>Troy
>
>CREATE TABLE news  -- { chiave: id news ,newarchivio, newsnuove}
>(
>   "idnews"SERIAL primary key, 
>   "oidnotizia"OID,-- news as large object 
>   "autore"TEXTx   -- author
>);
>
>How should be done the script for this table ?
>
>I found something about large object only onthe programmesg guide are they
>present in other docs?
>
>Thanks in advance 
>
>Alex
>
>
>





Re(2): Re(2): [SQL] Large Object dump ?

2000-11-01 Thread pgsql-sql

Because of some problems (see below). I was also thinking that it was
just a compatibility problem between pg_dumplo-0.0.5 and PostgreSQL 6.5.3,
and because I needed it badly, I wrote a "quick & dirty" Perl script.


1. When I tried showing all LO of my_db, I got this message:

pg_dumplo -w -d my_db
NOTICE:  (transaction aborted): queries ignored until END
NOTICE:  (transaction aborted): queries ignored until END
NOTICE:  (transaction aborted): queries ignored until END

Database 'my_db' contains 0 large objects.

My observation: my_db actually contains large objects. 


2. When I tried this
   pg_dumplo -a -d my_db -s /my_dump/dir
I got this.
   
<< CUT >>
dump for_payment.application (218 large obj)
lo_export: can't open inv object 4654657:
/fs/linux/work/tools/pg_dumplo-0.0.5/pg_dumplo
lo_export: can't open inv object 4654689:
/fs/linux/work/tools/pg_dumplo-0.0.5/pg_dumplo
lo_export: can't open inv object 4654881:
/fs/linux/work/tools/pg_dumplo-0.0.5/pg_dumplo
lo_export: can't open inv object 4654081:
/fs/linux/work/tools/pg_dumplo-0.0.5/pg_
<< CUT >>
Exported 1131 large objects.

NOTICE:  LockReleaseAll: xid loop detected, giving up

My observation: The tree of directories was created, but those 1131 large
objects were not there.
   The lo_dump.index file is not readable
(contains garbage).


peace,

sherwin


[EMAIL PROTECTED] writes:
>
>On Wed, 1 Nov 2000, pgsql-sql wrote:
>
>> You can try the script I made for exporting all my Pg database.
>> Ideas were borrowed from pg_dumplo-0.0.5.
>> Make it sure that you have "Perl5 extension for PostgreSQL - Pg.pm"
>> installed.
>
> Why you re-write pg_dumplo to perl and not use directly it? Perl
>love? :-)
>
>   Karel
>





[ADMIN] alter pg_shadow

2000-11-08 Thread pgsql-sql

Is it safe to rename 'postgres' to any__name?

like

update pg_shadow set usename = 'any__name' where usename = 'postgres';


Sherwin





[SQL] 7.0.3 BUG

2000-11-24 Thread pgsql-sql

SELECT is returning bogus data.

migrate=# select version();
version
---
 PostgreSQL 7.0.3 on i686-pc-linux-gnu, compiled by gcc 2.95.3
(1 row)

migrate=# select userid from users where userid = '[EMAIL PROTECTED]';
   userid
-
 [EMAIL PROTECTED]
 admin
(2 rows)
 
migrate=# \d users
  Table "users"
Attribute | Type |Modifier
--+--+
 userrefkey   | integer  | not null default
nextval('users_userrefkey_seq'::text)
 userid   | varchar(128) | not null
 password1| char(20) | not null
 password2| char(50) |
 type | char(10) | not null
 partneremail | varchar(128) |
 adminlastname| char(40) | not null
 adminfirstname   | char(40) | not null
 adminaddress1| char(80) | not null
 adminaddress2| char(80) |
 admincity| char(80) | not null
 adminstateprov   | char(40) |
 admincountrycode | char(2)  | not null
 adminpostalcode  | char(10) |
 adminphone   | char(20) |
 adminfax | char(20) |
 checkpayableto   | char(80) | not null
 createdate   | timestamp| not null default now()
 lastaccessdate   | timestamp|
 lastmodifieddate | timestamp|
Indices: users_pkey,
 users_userid_key   







Re(2): [SQL] 7.0.3 BUG

2000-11-24 Thread pgsql-sql

[EMAIL PROTECTED] writes:
>"pgsql-sql" <[EMAIL PROTECTED]> writes:
>> migrate=# select userid from users where userid = '[EMAIL PROTECTED]';
>>userid
>> -
>>  [EMAIL PROTECTED]
>>  admin
>> (2 rows)
>
>That's a tad, um, startling :-(
>
>However, you haven't given us nearly enough information to have a shot
>at figuring out what's going on.
>
>   regards, tom lane


The query used indexscan. I tried using trim in userid and I got something
like ...
migrate=# select userid from users where trim(userid) = '[EMAIL PROTECTED]';
     userid
-----------------
 [EMAIL PROTECTED]
(1 row)

I thought it was a varchar problem in 7.0.3 ('coz userid is varchar), so I
went back to 7.0.2.
Is it really a varchar bug?
Anyway, I dumped all the databases. When I was restoring them in 7.0.2 I
got these ...

You are now connected as new user root.
CREATE
You are now connected as new user pgsql.
ERROR:  copy: line 3910, Bad timestamp external representation '2000-01-05
00:00:60.00+08'
PQendcopy: resetting connection
ERROR:  copy: line 3910, Bad timestamp external representation '2000-01-05
00:00:60.00+08'
PQendcopy: resetting connection
ERROR:  copy: line 302, Bad timestamp external representation '2000-02-29
00:00:60.00+08'
PQendcopy: resetting connection
ERROR:  copy: line 13, Bad timestamp external representation '1970-01-01
08:04:60.00+08'
PQendcopy: resetting connection
ERROR:  copy: line 24, Bad timestamp external representation '1970-01-01
08:04:60.00+08'
PQendcopy: resetting connection
You are now connected as new user root.
ERROR:  copy: line 2, Bad timestamp external representation '1999-12-24
00:00:60.00+08'
PQendcopy: resetting connection
You are now connected as new user pgsql.
CREATE

I took a look at the database named 'migrate' (this is where the error
occurred);
the tables were all empty. Most of the tables have a field like this ...

createdate   | timestamp | not null default now()

Weird, because those timestamps were generated by default now(). Is this
another bug (a timestamp bug)?
Take a look at this ...

migrate=# select version();
version
---
 PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.95.3
(1 row)
 
migrate=# select '2000-01-05 00:00:60.00+08'::timestamp;
ERROR:  Bad timestamp external representation '2000-01-05 00:00:60.00+08'
migrate=# select '2000-11-25 14:05:23.00+08'::timestamp;
 ?column?
---
 2000-11-25 14:05:23.00+08
(1 row)
 
migrate=# create table testing (datetime timestamp);
CREATE
migrate=# insert into testing values('2000-01-05 00:00:60.00+08');
ERROR:  Bad timestamp external representation '2000-01-05 00:00:60.00+08'
migrate=# insert into testing values('2000-11-25 14:05:23.00+08');
INSERT 6834235 1
migrate=#


Thanks,
sherwin












Re(2): Re(2): [SQL] 7.0.3 BUG

2000-11-26 Thread pgsql-sql

[EMAIL PROTECTED] writes:
>"pgsql-sql" <[EMAIL PROTECTED]> writes:
>> ERROR:  copy: line 3910, Bad timestamp external representation
>'2000-01-05
>> 00:00:60.00+08'
>
>> Weird because those timestamps were generated by default now().
>
>Weird, all right.  I can get 7.0.2 to emit an incorrect representation
>like that by constructing a fractional-second value that needs to be
>rounded off:
>
>play=> set TimeZone TO 'GMT-8';
>SET VARIABLE
>play=> select  '2000-01-05 00:00:59.00+08'::timestamp + '0.999
>sec'::interval;
> ?column?
>---
> 2000-01-05 00:00:60.00+08
>(1 row)
>
>That's clearly a bug.  Evidently the rounding to 2 fractional digits
>needs to be done before we start conversion, not at the end, since in
>the worst case the effects could need to propagate all the way to the
>displayed year.
>
>However, that shouldn't matter for now() results, because AFAIK now()
>results should always be exact integer seconds.  Are you sure you
>weren't doing arithmetic on these values somewhere along the line?

Yes, I'm very sure. 
About the dump file, I made manual changes to all timestamp 
containing "00:00:60.00+08" (changed it to "00:00:00.00+08").
I did this because
migrate=# select version();
version
---
 PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.95.3
(1 row)

migrate=# select '2000-01-05 00:00:60.00+08'::timestamp;
ERROR:  Bad timestamp external representation '2000-01-05 00:00:60.00+08'
migrate=# select '2000-01-05 00:00:00.00+08'::timestamp;
 ?column?
---
 2000-01-05 00:00:60.00+08
(1 row)


Is there a work-around to this aside from manually changing the dump file?
>
>In any case, I'm pretty sure nothing's changed in the timestamp code
>between 7.0.2 and 7.0.3, and I know of no changes that could explain
>your original report either.  I'm not sure why your update went so
>disastrously --- I'm wondering about possibilities like a corrupted
>download of Postgres.  What platform are you on (I think you said Linux,
>but which release of which distro)?  

Distribution Version:  Linux Mandrake release 7.2 (Odyssey) for
i586
Operating System Version:  #1 Thu Oct 5 13:16:08 CEST 2000
Operating System Release:  2.2.17-21mdk
Processor Type:i686

>Where and how did you obtain
>your Postgres files; if you compiled them yourself, how did you
>configure and compile?

It was shipped with Mandrake-Linux 7.2
>
>> migrate=# select version();
>> version
>> ---
>>  PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.95.3
>
>That's a rather interesting version report, seeing as how there is
>no such gcc release as 2.95.3 according to the GCC project's homepage.
>What compiler are you using exactly, and what confidence do you have
>that it's bug-free?  You wouldn't be using that known-unstable gcc
>that RedHat shipped in their 7.0, would you?

We can be sure that the compiler is relatively bug free because it was
used to recompile the entire Linux distribution...
>

sherwin
>









Re(2): [SQL] 7.0.3 BUG

2000-11-28 Thread pgsql-sql

Thanks to you Tom and Thomas.
Now I know.

- sherwin

[EMAIL PROTECTED] writes:
>Ah ha (or rather, ha ha ha)! I'd suggest using the RPMs posted on the
>postgresql.org ftp site, which include a sample .rpmrc file which fixes
>disasterous bugs in Mandrake's default compiler settings for building
>RPMs. Specifically, Mandrake sets the -ffast-math flag, which the gcc
>folks warn is not compatible with -On optimizations. When I build RPMs I
>kill the fast-math option, and the rounding troubles go away.
>
>The rounding trouble does not show up on other platforms or Linux
>distros because no one else ignores the gcc recommendations to this
>extent :(
>
>  - Thomas




Re: [SQL] lo_import for storing Blobs

2001-03-03 Thread pgsql-sql

You can use 'DBI'

from test.pl of DBD::Pg

# create large object from binary file

my ($ascii, $pgin);
foreach $ascii (0..255) {
$pgin .= chr($ascii);
};

my $PGIN = '/tmp/pgin';
open(PGIN, ">$PGIN") or die "can not open $PGIN";
print PGIN $pgin;
close PGIN;

# begin transaction
$dbh->{AutoCommit} = 0;

my $lobjId;
( $lobjId = $dbh->func($PGIN, 'lo_import') )
and print "\$dbh->func(lo_import) .. ok\n"
or  print "\$dbh->func(lo_import) .. not ok\n";

# end transaction
$dbh->{AutoCommit} = 1;

unlink $PGIN;
  

or you can use the 'Perl5 extension for PostgreSQL' ...
Note: I didn't test this script.

use strict;
use Pg;

my $dbname = 'your dbname';
my $lo_path = 'path/to/your/binaryfile';
my ($tbl, $fld) = ('your table', 'oid field');

my $conn = Pg::connectdb("dbname=$dbname");
die $conn->errorMessage unless PGRES_CONNECTION_OK eq $conn->status;

my $result = $conn->exec("BEGIN");
die $conn->errorMessage unless PGRES_COMMAND_OK eq $result->resultStatus;

# import large object and get its oid
my $new_oid = $conn->lo_import($lo_path) or die $conn->errorMessage;

$result = $conn->exec("END");
die $conn->errorMessage unless PGRES_COMMAND_OK eq $result->resultStatus;

# insert the oid of the large object
my $sql = sprintf("INSERT INTO %s (%s) VALUES (%ld)", $tbl, $fld, $new_oid);

$result = $conn->exec($sql);
die $conn->errorMessage unless PGRES_COMMAND_OK eq $result->resultStatus;

undef $conn;


Sherwin

[EMAIL PROTECTED] writes:
>I need to store a binary file in a database. I use a cgi writed in shell
>to do it. So I can use Postgres user to execute the cgi.
>
>How can I store a binary file in a database with a cgi ?
>
>Thanks a lot.
>
>Laurent.
>
>
>
>
>---(end of broadcast)---
>TIP 2: you can get off all lists at once with the unregister command
>(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
>


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re(2): [SQL] Permissons on database

2001-03-14 Thread pgsql-sql

You can also do it with a shell script:

grantall.sh:
## start of grantall.sh ###
#!/bin/ash
SQL="SELECT relname FROM pg_class WHERE (relkind = 'r' OR relkind = 'S')
AND relname !~ '^pg_' ORDER BY relname"
OBJ=`psql -t -c "${SQL}" $1`
# OBJ=`echo ${OBJ} | sed 's/EOF//g'`
OBJ=`echo ${OBJ} | sed 's/ /, /g'`
# SQL="REVOKE ALL ON ${OBJ} FROM PUBLIC"
SQL="GRANT ALL ON ${OBJ} TO PUBLIC"
echo ${SQL}
psql -c "${SQL}" $1   
## end of grantall.sh ###

syntax: grantall.sh name_of_database

sherwin

[EMAIL PROTECTED] writes:
>> "bk" == Boulat Khakimov <[EMAIL PROTECTED]> writes:
>
>bk> How do I grant permissions on everything in the selected
>bk> databes?
>
>bk> GRANT doesnt take as on object database name nor does it
>bk> accept wild chars
>
>Attached is some Perl code I wrote long ago to do this.  This
>particular code was done for Keystone, a problem tracking database and
>it would do a "GRANT ALL".  Modify it as needed.  Last I checked it
>worked with both PostgreSQL 6.5.x and 7.0.x
>
>
>roland
>-- 
>


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html




[SQL] trigger output to a file

2001-03-22 Thread pgsql-sql

Hello Everyone,

Here's my simple question.

I just want to know/get the recent changes made to a table.
Deeper: I want the PostgreSQL server to *publish* every
change made to a table (similar to replication, incremental transfer,
etc.).
What is the best way to go about it?

My idea is to create a trigger for every insert/update that will
select the new/updated record and output it to a file. Or, better
yet, I would like to output the actual SQL query to a file.
Is it possible to output the result of an SQL query from a trigger
to a file? How?

I would appreciate any answer. Thank you very much.
Sherwin
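One transaction-safe variant of the idea above is to have the trigger
write into an audit table instead of a file, so the log entry rolls back
together with the change it records. A sketch, with made-up table and
column names (RETURNS OPAQUE is the trigger-function convention of this
PostgreSQL era):

```sql
CREATE TABLE mytable_log (
    logged_at TIMESTAMP DEFAULT now(),
    operation TEXT,
    new_value TEXT
);

CREATE FUNCTION log_mytable() RETURNS OPAQUE AS '
BEGIN
    -- TG_OP is ''INSERT'' or ''UPDATE''; NEW is the row being written
    INSERT INTO mytable_log (operation, new_value)
    VALUES (TG_OP, NEW.value);
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER mytable_audit
    AFTER INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE log_mytable();
```

A separate process can then poll mytable_log and ship only committed
rows elsewhere.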


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re(2): [SQL] trigger output to a file

2001-03-25 Thread pgsql-sql

Thanks Cedar, Jan, and Andy.

Actually the setup is something like this...
There are two remote servers, remoteA and remoteB.
The table on remoteA needs to be synchronized with the
table on remoteB all the time (well, there could be an interval).
remoteB will *publish* every change and remoteA *subscribes* to it.

These were my previous solutions:
1. Have a program (using Perl & DBI) on remoteA connect to
remoteB and do the synchronization.
>> I can't buy this 'coz remoteB has too many *hits*.
I just can't afford the cost.
2. Have a trigger on remoteB that will output to a file the result of
every SQL statement, or the statement itself.
>> My understanding now is that this will not do it because
of a possible transaction rollback -- thanks again.

As much as possible I want to do the synchronization
*incrementally* (just deal with the difference between remoteA & remoteB).
But I guess I have to do it the hard way.

Here's my third solution. Please comment on this.
KNOWN FACTORS:
  ^ poor connection
   >>> the solution should be intelligent enough to handle such a
situation.
3RD SOLUTION:
  ^ Have a script on remoteB that uses pg_dump or SQL COPY, and place it
in the crontab (say, every 5 seconds).
  ^ Have a script on remoteA that will copy the dump.file from remoteB.
Place it in the crontab and use *scp* (secure copy) for the copying.
After dump.file is acquired, have another script take care of it.

What do you think? Any better idea?
Thank you.

Sherwin


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[SQL] Making dirty reads possible?

2004-12-06 Thread pgsql-sql
Hi co-fanatics.

I am working on a small proof of concept, and am running into a small
obstacle. (The proof of concept shows, among other things, that doing
calculations in a database works well, and that it is possible to let it
run 'jobs'.)

Consider the following stored procedure:

  For reasons of size the complete code is located on my site:
  http://www.vankoperen.nl/concepts/postgresql/primer/index.html

It generates prime numbers for (parameter) odd numbers starting from the
biggest known prime in the primes table.

The "controller" table makes it possible to abort execution, something
which can be handy if you did a 'select primer(1000);'
I am just getting to grips with the read caching and the way to
circumvent it (using the EXECUTE function) so as to read data each time
and thus react to the newest data, especially the data in the
"controller" table in this case.

Now what does not seem to work is the opposite thing: I cannot, from
the console etc., read the new data as the function is generating it.
If I 'SELECT count(*);' at the start, or near the end of the running
function, it always returns the same. Only when the function finishes
does it commit, and the external select returns the new and correct value.

To monitor the function's progress (and for other reasons too, which are
not important in this concept yet) I really want to read the
UNCOMMITTED data, or have some way to COMMIT it during the function's
execution. But currently only the whole function can be regarded as a
transaction, and nested transactions are not (yet) supported.

Some digging in the mailing list archives pointed to isolation levels.
Apparently 'dirty reads' exist in theory:

[quote http://archives.postgresql.org/pgsql-hackers/2004-08/msg01417.php ]
It's only allowed when the transaction is in READ UNCOMMITTED isolation
level.
Something Postgres doesn't currently support. In fact I'm not aware of
any SQL
database that supports it, though I'm sure there's one somewhere.

You wouldn't normally want to use such a thing, but it could be useful for,
for example, seeing what progress a transaction has made for a UI progress
meter.
[/quote]

But not possible for real at the moment?

So, summarising:
- Nested transactions are not (yet) supported
- READ UNCOMMITTED isolation level is not (yet) supported
- the EXECUTE plpgsql construct does not circumvent the transaction

Is there a way around this?

Regards,
Ellert.
---(end of broadcast)---
TIP 8: explain analyze is your friend


[SQL] optimize sql

2000-07-26 Thread pgsql-sql

HI!

The SQL below is too slow.

SELECT name FROM office, office_application 
WHERE code = office_code 
AND name NOT IN
(SELECT DISTINCT name FROM office, office_application
WHERE active = 't' AND code = office_code);

Can anyone tell me how to optimize it?
Thanks.





Re(2): [SQL] optimize sql

2000-07-26 Thread pgsql-sql

[EMAIL PROTECTED] writes:
>How does the output of the above differ from:
>
>SELECT name FROM office, office_application 
>WHERE code = office_code 
>AND active != 't';
>
>Without knowing the table structures (which tables to active, code, 
>and office_code belong to?) it's hard to suggest much else.
>
>Ross


The name and code fields belong to the office table, while the
office_code and active fields belong to the office_application table.
The name field has duplicates, and among the duplicates
only one active field is TRUE. I just want to get the name values
that have no TRUE active field. Any other idea? Thanks.

sherwin
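Given that description (and assuming the columns are as stated above),
one rewrite worth trying avoids the NOT IN subselect, which PostgreSQL
of this vintage executes poorly, by doing a single grouped scan:

```sql
-- One row per name; keep a name only if none of its
-- office_application rows has active = 't'.
SELECT o.name
FROM office o, office_application a
WHERE o.code = a.office_code
GROUP BY o.name
HAVING sum(CASE WHEN a.active = 't' THEN 1 ELSE 0 END) = 0;
```

This is a sketch, not tested against the actual schema; if active can be
NULL, the CASE still counts it as 0, which matches "no TRUE active field".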