Re: [HACKERS] Mirrors not tracking main ftp site?

2001-06-05 Thread Vince Vielhaber

On Mon, 4 Jun 2001, Tom Lane wrote:

 On hub, in /home/projects/pgsql/ftp/pub/dev I see

 *.tar.gz.md5                        postgresql-opt-snapshot.tar.gz
 doc postgresql-opt-snapshot.tar.gz.md5
 postgresql-base-snapshot.tar.gz postgresql-snapshot.tar.gz
 postgresql-base-snapshot.tar.gz.md5 postgresql-snapshot.tar.gz.md5
 postgresql-docs-snapshot.tar.gz postgresql-test-snapshot.tar.gz
 postgresql-docs-snapshot.tar.gz.md5 postgresql-test-snapshot.tar.gz.md5

 which agrees with the view at http://www.ca.postgresql.org/ftpsite/dev/.

 However, it seems that the mirrors have a lot more stuff:
 ftp://postgresql.readysetnet.com/pub/postgresql/dev/ shows dozens
 of files back to 7.1beta6, and so do the other several I checked in
 a random sample.  Is the update mechanism failing to cause old files
 to be removed from the mirrors?

Here's the syntax we tell them to use:

   rsync -avz --delete hub.org::[remote]/ [destination directory]

If that's not what they're using I can't go into their cronjobs and
fix it.


 Also, some of the mirrors claimed to be up-to-date by
 http://www.postgresql.org/index.html aren't. For instance,
 download.sourceforge.net doesn't have 7.1.1 nor 7.1.2.
 I thought that the up-to-date check was automated?

It is and here's the directory from sourceforge:

227 Entering Passive Mode (64,28,67,101,18,128).
150 Opening ASCII mode data connection for file list
-rw-r--r--   1 root root  8117016 May 24 16:37 postgresql-7.1.2.tar.gz
-rw-r--r--   1 root root   65 May 24 16:38 postgresql-7.1.2.tar.gz.md5
-rw-r--r--   1 root root  3240364 May 24 16:38 postgresql-base-7.1.2.tar.gz
-rw-r--r--   1 root root   70 May 24 16:38 postgresql-base-7.1.2.tar.gz.md5
-rw-r--r--   1 root root  2072096 May 24 16:38 postgresql-docs-7.1.2.tar.gz
-rw-r--r--   1 root root   70 May 24 16:38 postgresql-docs-7.1.2.tar.gz.md5
-rw-r--r--   1 root root  1803742 May 24 16:38 postgresql-opt-7.1.2.tar.gz
-rw-r--r--   1 root root   69 May 24 16:38 postgresql-opt-7.1.2.tar.gz.md5
-rw-r--r--   1 root root  1002166 May 24 16:38 postgresql-test-7.1.2.tar.gz
-rw-r--r--   1 root root   70 May 24 16:38 postgresql-test-7.1.2.tar.gz.md5
226-Transfer complete.
226 Quotas off

What is it you find missing about 7.1.2?  What were you actually looking
at?

Vince.
-- 
==
Vince Vielhaber -- KA8CSH    email: [EMAIL PROTECTED]    http://www.pop4.net
 56K Nationwide Dialup from $16.00/mo at Pop4 Networking
Online Campground Directory    http://www.camping-usa.com
   Online Giftshop Superstore    http://www.cloudninegifts.com
==




---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] Question about inheritance

2001-06-05 Thread Dmitry G. Mastrukov

  Christopher Kings-Lynne [EMAIL PROTECTED] writes:
   Am I misunderstanding how the mechanism works, or is this a big, not
 easily
   solved, problem?
 
  The latter.  Check the list archives for previous debates about this.
  It's not real clear whether an inherited primary key should be expected
  to be unique across the whole inheritance tree, or only unique per-table
  (IIRC, plausible examples have been advanced for each case).  If we want
  uniqueness across multiple tables, it'll take considerable work to
  create an index mechanism that'd enforce it.
 
 IMHO the current behaviour of PostgreSQL with inherited PK, FK and UNIQUE
 constraints is simply a bug, not only from an object-oriented but even from an
 object-relational point of view.  Right now I can violate the parent's PK by
 inserting a duplicate key into a child!
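
 For illustration, a minimal sketch of the problem (the table names are made up):

     CREATE TABLE base  (id integer PRIMARY KEY);
     CREATE TABLE child () INHERITS (base);

     INSERT INTO base  VALUES (1);
     INSERT INTO child VALUES (1);  -- accepted: the inherited "primary key"
                                    -- is not enforced across the hierarchy
     SELECT id FROM base;           -- returns 1 twice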

 Inherited tables should honour all constraints from the parent.  If I change
 some constraint (it seems only FK, not PK or UNIQUE), I should be able to do
 so only in a more restrictive manner.  For example, two base tables are
 connected via an FK.  I can change such an FK in the children from
 base1->base2 to child1->child2 (or child3), but not to
 child1->not_inherited_from_base2.  CHECK, DEFAULT and NOT NULL are freer to
 change, aren't they?

 IMHO the last message in doc/TODO.details/inheritance, from Oliver Elphick,
 points in a good direction for an implementation, with the exception of the
 more restrictive child FK constraint (point 3 of the message).

 As for me, I was forced to roll back to a schema with no inheritance at all
 in my project for now.  So I'm very interested in a correct implementation of
 inheritance, and I wanted to ask a similar question on one of the lists in
 the near future.

 Regards,
 Dmitry




---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



[HACKERS] full write log

2001-06-05 Thread Horst Herb

Greetings,

I need to implement a full write audit trail (every write access needs to be 
logged as a complete SQL statement with timestamp, user and host) in our 
database.

Which is the most efficient way to do this on the server side in Postgres? I 
tried to find something relevant in the documentation, but I could not find 
anything.
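
For illustration, one server-side option is a set of rules that copy each change
into an audit table -- a minimal sketch with made-up table names.  Note that it
records row-level changes with timestamp and user; capturing the literal SQL
text and the client host would need something else (triggers written in
PL/pgSQL or C are the other common route):

CREATE TABLE accounts (id integer, balance numeric);
CREATE TABLE accounts_audit (
    logged_at timestamp DEFAULT now(),
    logged_by name      DEFAULT current_user,
    action    text,
    id        integer,
    balance   numeric
);

CREATE RULE accounts_audit_ins AS ON INSERT TO accounts
    DO INSERT INTO accounts_audit (action, id, balance)
       VALUES ('INSERT', new.id, new.balance);
CREATE RULE accounts_audit_upd AS ON UPDATE TO accounts
    DO INSERT INTO accounts_audit (action, id, balance)
       VALUES ('UPDATE', new.id, new.balance);
CREATE RULE accounts_audit_del AS ON DELETE TO accounts
    DO INSERT INTO accounts_audit (action, id, balance)
       VALUES ('DELETE', old.id, old.balance);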

Horst

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] Multiprocessor performance

2001-06-05 Thread Valentin Puente

Hi all,

I'm not a postgres hacker, but I think you are the most appropriate people to
give me a pointer about this question, so sorry in advance for any possible
mistake.

I'm trying to use postgresql plus pgbench as a first test to stress the
interconnection system of a parallel machine. I know that TPC-B is just a toy
(not very realistic), but before doing something more complex like TPC-C I
want to see how postgres behaves.

I'm running this benchmark on different SMP machines (SGI, with 4 to 8
processors) and the results are odd. The best performance is achieved with
just one backend (1 client); when I try to run more clients the tps falls
quickly.

In all cases I see that when I increase the number of clients the total CPU
usage falls. With one client I see 100% usage (after a warm-up to get all the
data from disk; I'm running without fsync and with a large shared buffer
pool, and my systems have plenty of memory, so this is expected). But with
more clients, per-CPU usage falls from about 40% with 2 clients down to 10%
with 8 clients. I assume that access to shared memory through critical
regions (lock/unlock) must be one reason, but this seems like too much. I've
heard that locks in postgres are at page level instead of tuple level. Am I
wrong?

Any suggestions about this?

Thanks in advance for your support.

--vpuente


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Re: FYI: status of native language support

2001-06-05 Thread Diego Naya

I can help translating it to Spanish, just tell me :-)

Diego Naya
OSEDA
Sistemas
[EMAIL PROTECTED] 
- Original Message - 
From: Alessio Bragadini [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, June 05, 2001 9:38 AM
Subject: [HACKERS] Re: FYI: status of native language support


 Peter Eisentraut wrote:
 
  language supported in the next release, this would be a good time to
  gather up and volunteer for translation.
 
 I can help with Italian translation if no one else is volunteering (or
 coordinating a team)
 
 -- 
 Alessio F. Bragadini [EMAIL PROTECTED]
 APL Financial Services http://village.albourne.com
 Nicosia, Cyprus phone: +357-2-755750
 
 It is more complicated than you think
 -- The Eighth Networking Truth from RFC 1925
 
 ---(end of broadcast)---
 TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] Mirrors not tracking main ftp site?

2001-06-05 Thread Tom Lane

Vince Vielhaber [EMAIL PROTECTED] writes:
 On Mon, 4 Jun 2001, Tom Lane wrote:
 Also, some of the mirrors claimed to be up-to-date by
 http://www.postgresql.org/index.html aren't. For instance,
 download.sourceforge.net doesn't have 7.1.1 nor 7.1.2.

 What is it you find missing about 7.1.2?  What were you actually looking
 at?

I went to ftp://download.sourceforge.net/pub/mirrors/postgresql/
(the link given by our homepage) and didn't see the v7.1.2 symlink,
nor did the source subdirectory have a v7.1.2 subdirectory.

As of this morning, though, both are there.  I suppose they synced up
overnight.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Mirrors not tracking main ftp site?

2001-06-05 Thread Vince Vielhaber

On Tue, 5 Jun 2001, Tom Lane wrote:

 Vince Vielhaber [EMAIL PROTECTED] writes:
  On Mon, 4 Jun 2001, Tom Lane wrote:
  Also, some of the mirrors claimed to be up-to-date by
  http://www.postgresql.org/index.html aren't. For instance,
  download.sourceforge.net doesn't have 7.1.1 nor 7.1.2.

  What is it you find missing about 7.1.2?  What were you actually looking
  at?

 I went to ftp://download.sourceforge.net/pub/mirrors/postgresql/
 (the link given by our homepage) and didn't see the v7.1.2 symlink,
 nor did the source subdirectory have a v7.1.2 subdirectory.

 As of this morning, though, both are there.  I suppose they synced up
 overnight.

I don't know what could have happened to it; I'm fairly certain I downloaded
it from them less than a week ago, when I did some upgrading here.

Vince.
-- 
==
Vince Vielhaber -- KA8CSH    email: [EMAIL PROTECTED]    http://www.pop4.net
 56K Nationwide Dialup from $16.00/mo at Pop4 Networking
Online Campground Directory    http://www.camping-usa.com
   Online Giftshop Superstore    http://www.cloudninegifts.com
==




---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Multiprocessor performance

2001-06-05 Thread Tom Lane

Valentin Puente [EMAIL PROTECTED] writes:
 Ok...well I'm running this benchmarks in different SMP machines (SGI with 4
 to 8 processors and the results are odd). The best performance is achieved
 with just one backend (1 client). When I try to run more clients the tps
 falls quickly.

What scale factor (-s parameter for pgbench init) are you using for the
benchmark?

At scale factor 1, there's only one branch row, so all the
transactions have to update the same row and naturally will spend most
of their time waiting to do so.

You want scale factor >= # of concurrent clients to avoid interlock
effects.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



[HACKERS] importing from sybase

2001-06-05 Thread Vince Vielhaber


I'm trying to import data from a sybase bcp (tab separated dump) and am
encountering a really odd datetime type:

   Mar 27 1994 12:00:00:000AM

I've been looking in the books but haven't found anything yet and see
nothing in any of the PostgreSQL docs.  Anyone have any idea how I can
bring this data in without having to write something to read from the
sybase table and write to the postgres table?  I'd like to use copy to
keep things simple.
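
One possible approach, sketched with made-up table and column names and
assuming the fixed-width format shown above: load the bcp dump into a staging
table that keeps the datetime as text, then strip the ":000" milliseconds
(which the datetime parser won't take) and let PostgreSQL parse the rest
("Mar 27 1994 12:00:00 AM"):

CREATE TABLE sybase_staging (id integer, created_raw text);
COPY sybase_staging FROM '/tmp/dump.bcp';   -- tab is COPY's default delimiter

INSERT INTO real_table (id, created)
SELECT id,
       (substr(created_raw, 1, 20) || ' ' || substr(created_raw, 25))::timestamp
  FROM sybase_staging;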

Vince.
-- 
==
Vince Vielhaber -- KA8CSH    email: [EMAIL PROTECTED]    http://www.pop4.net
 56K Nationwide Dialup from $16.00/mo at Pop4 Networking
Online Campground Directory    http://www.camping-usa.com
   Online Giftshop Superstore    http://www.cloudninegifts.com
==




---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] Re: Full text searching, anyone interested?

2001-06-05 Thread Teodor

   I would love to find a way to get a bitmap like index native to Postgres. I
[skip]

 We could implement bitmap handling functions based on one dimentional arrays of
 integers. That's how my stuff deals with them, and postgres already manages
 them.
 

Look at contrib/intarray. gist__intbig_ops is a variant of a signature
tree (from each array we get a bitmap signature).
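
A usage sketch, with a made-up table (the opclass name is the one from
contrib/intarray; check the module's README for the exact operators it ships):

CREATE TABLE docs (id serial, sections int4[]);
CREATE INDEX docs_sections_idx ON docs USING gist (sections gist__intbig_ops);
SELECT id FROM docs WHERE sections && '{1207,59587}'::int4[];  -- overlap query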

Regards,
Teodor

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] large objects dump

2001-06-05 Thread Maks N. Polunin

Hello, All!

I had PostgreSQL 7.0.3 (7.1 now), and one nice day I noticed that a large
number of my BLOBs were broken! Although they seemed to have good content
in the file system (xinv[0-9]+ files), I was not able to get them out via
lo_export... After spending some time trying to fix it, I decided to write
my own xinv-to-plain-file converter. I hope that if someone has the same
trouble, this converter will help.
Just compile it, put it in the directory with your xinv[0-9]+ files and run it.
It will create new files named after the BLOB id taken from the appropriate xinv file.
xinv2plainfile.c---
#include <sys/types.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>
 
#define BLCKSIZE 8192
#define HPT_LEN 40      // bytes of heap tuple header to skip in each tuple
 
#define DEBUG
//#undef DEBUG
 
typedef unsigned short uint16;
typedef unsigned int uint32;
 
typedef struct ItemIdData
{
  unsigned lp_off:15,
    lp_flags:2,
    lp_len:15;
} ItemIdData;

typedef struct PageHeaderData
{
  uint16 pd_lower;
  uint16 pd_upper;
  uint16 pd_special;
  uint16 pd_opaque; //page size
  ItemIdData pd_linp[1];
} PageHeaderData;
 
int
extract(const char * filename)
{
  FILE * infile;
  FILE * outfile;
  ItemIdData linp;
  PageHeaderData* pg_head;
  char buff[BLCKSIZE];
  char data[BLCKSIZE];
  int tuple_no;

  //open the output file; if it already exists, overwrite it!
  if ((outfile = fopen(filename + 4, "w")) == NULL)
    return -1;
 
  //open the input file
  if ((infile = fopen(filename, "r")) == NULL)
    {
      fclose(outfile);
      return -1;
    }
 
  //read the relation one page (BLCKSIZE) at a time
  while (fread(buff, BLCKSIZE, 1, infile))
    {
      pg_head = (PageHeaderData*)buff;
#ifdef DEBUG
      printf("Page data: pd_lower=%d, pd_upper=%d, pd_special=%d, pd_opaque=%d\n",
             pg_head->pd_lower, pg_head->pd_upper, pg_head->pd_special,
             pg_head->pd_opaque);
#endif

      //walk the line pointers and copy out each tuple's data,
      //skipping HPT_LEN bytes of tuple header
      for(tuple_no = 0; pg_head->pd_linp[tuple_no].lp_len; ++tuple_no)
        {
          linp = pg_head->pd_linp[tuple_no];
          memcpy(data, buff + linp.lp_off + HPT_LEN, linp.lp_len - HPT_LEN);
          data[linp.lp_len - HPT_LEN] = 0;
#ifdef DEBUG
          printf("Tuple %d: off=%d,\tflags=%d,\tlen=%d\n",
                 tuple_no, linp.lp_off, linp.lp_flags, linp.lp_len);
          printf("Data:\n%s\n--\n", data);
#endif
          //note: %s stops at the first NUL byte, so this assumes text content
          fprintf(outfile, "%s", data);
        }
    }
  fclose(infile);
  fclose(outfile);
  return 0;
}

int
main(void)
{
  DIR * curdir;
  struct dirent * curdirentry;
 
  //open current directory
  curdir = opendir(".");
  if (curdir == NULL)
    {
      printf("Cannot open curdir!!!\n");
      return -1;
    }

  //search through curdir for files named 'xinv[0-9]+'
  while ((curdirentry = readdir(curdir)) != NULL)
  {
    if (strstr(curdirentry->d_name, "xinv") != curdirentry->d_name)
      continue;
    //found an entry whose name begins with xinv.
    //let's hope this is what we are looking for :)
    printf("Trying to extract file '%s'... ", curdirentry->d_name);
    if (extract(curdirentry->d_name))
      printf("failed\n");
    else
      printf("succeeded\n");
  }

  closedir(curdir);
  return 0;
}

---
With Best Regards,
  Maks N. Polunin.
Brainbench: http://www.brainbench.com/transcript.jsp?pid=111472
ICQ#:18265775


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



[HACKERS] SQL( if ...exists...),how to do it in the PostgreSQL?

2001-06-05 Thread Eric

   I can do this in Sybase, but how can I do it in PostgreSQL?
  
/SQL***/
if not exists(select id from test) insert into test(id) values (280);
/*/ 
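
One way to express the same thing in PostgreSQL (a sketch using the table
from the statement above):

INSERT INTO test (id)
SELECT 280
 WHERE NOT EXISTS (SELECT id FROM test);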

_
ÊýÂë²úÆ·ÐÂÉÏÊУ¬¿á http://shopping.263.net/category21.htm
¾«Æ·Ð¡¼ÒµçÓ­ÏÄÈÈÂô http://shopping.263.net/category23.htm

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] Feature request : Remove identifier length constraints

2001-06-05 Thread Thomas Swan

I just got bitten by the "identifier name is too long and will be truncated" 
limitation in PostgreSQL.

As far as I am aware there is a limit of 64 characters for identifiers (names of 
tables, sequences, indexes, etc...)

I had just started to get into the habit of using serial data types until 
I made two tables with long names and the automatically generated sequence 
names conflicted, *ouch* ...

Is there any possibility of name-conflict resolution during the table 
creation phase, along the lines of "the name I want to assign is already taken, 
so I'll pick a different name", for the serial data type?
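
As a workaround sketch (made-up names): create the sequence yourself with a
name you control, so the part that differs between tables stays within the
identifier length limit, instead of relying on the name SERIAL generates:

CREATE SEQUENCE long_named_table_one_id_seq;
CREATE TABLE long_named_table_one (
    id integer DEFAULT nextval('long_named_table_one_id_seq') NOT NULL,
    PRIMARY KEY (id)
);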




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[HACKERS] Question about scalability in postgresql 7.1.2

2001-06-05 Thread Valentin Puente

Hi all,

I'm not a postgres hacker, but I think you are the most appropriate people to
give me a pointer about this question. Thus... sorry for any possible mistake.

I'm looking into the possibility of using postgresql plus pgbench as a first
test to stress the interconnection system of a parallel machine. I know that
TPC-B is just a toy (not very realistic), but before doing something more
complex like TPC-C I want to see how postgres behaves.

I'm running this benchmark on different SMP machines (SGI, with 4 to 8
processors) and the results are odd. The best performance is achieved with
just one backend (1 client); when I try to run more clients the tps falls
quickly.

In all cases I see that when I increase the number of clients the total CPU
usage falls. With one client I see 100% usage (after a warm-up to get all the
data from disk; I'm running without fsync and with a large shared buffer
pool, and my systems have plenty of memory, so this is expected). But with
more clients, per-CPU usage falls from about 40% with 2 clients down to 10%
with 8 clients. I assume that access to shared memory through critical
regions (lock/unlock) must be one reason, but this seems like too much. I've
heard that locks in postgres are at table level instead of tuple level. Am I
wrong?

Any suggestions about this?

Thanks in advance for your support.

--vpuente


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



[HACKERS] remote database queries

2001-06-05 Thread Joe Conway

 Until we fix that (maybe for 7.2, maybe not) your existing hack is
 probably pretty reasonable.  You could save some cycles by avoiding
 conversion to text, though --- instead return an opaque datum that is
 pointer-to-tuple-slot and let the dblink_tok function extract fields
 from the tuple.  Look at SQL function support and the FieldSelect
 expression node type for inspiration.


I changed the dblink() function to return a pointer instead of concatenated
text, and dblink_tok() to use the pointer. FWIW, a query on a small (85
tuples) remote (a second PC on a 100baseT subnet) table takes about 34
milliseconds (based on show_query_stats) versus about 4 milliseconds when
run locally. It actually takes a bit longer (~65 milliseconds) when run
against a second database on the same PC. The original text parsing version
was about 25% slower.

Although shifting from text parsing to pointer passing is more efficient, I
have one more question regarding this -- for now ;) -- is there any way to
check the pointer passed to dblink_tok() to be sure it came from dblink()?

Thanks,

-- Joe


 dblink.patch


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] Acucobol interface

2001-06-05 Thread Roberto Fichera

Hi All,

I'm developing (currently in pre-alpha stage) an Acucobol interface for 
PostgreSQL.
The Acucobol runtime has a generic FS API that handles the work with 
record-oriented files; by defining the open, close, read, write and other 
low-level functions I can extend the runtime to talk to any file or database.

My current work translates each Acucobol FS command into a corresponding 
PostgreSQL query, and the returned tuples are translated into a 
record-oriented view.
After some performance tests I've noticed that this path has a lot of 
overhead, and because of this I was thinking of redesigning the interface.

My first thought was to bypass the SQL translation and use the PostgreSQL 
low-level routines.
I need to see the tables as record-oriented archives, so I can scan each 
record sequentially (forward and backward), lock/unlock it, insert and delete 
it, and start reading the records that match a specific key.

Does anyone know where I can start to search/read/learn/study some 
documentation or code covering the PostgreSQL low-level routines?

If you need more detail, please ask ;-)!

Thanks in advance.


Roberto Fichera.


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [HACKERS] Question about inheritance

2001-06-05 Thread chris . bitmead




It's relatively straightforward to allow check constraints to be inherited -
but is it really possible to ever do the same with primary, unique or even
foreign constraints?

You would either have to check each index in the hierarchy or else have
a single index across the whole hierarchy and check that. Obviously the
latter would be generally more useful.

As with all things inheritance, it is usually the right thing, and a good
default that things be inherited. So ideally, indexes should work across
whole hierarchies as well as primary, unique and foreign constraints.
It could be argued that not inheriting is of very limited usefulness.




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[HACKERS] REPLACE INTO table a la mySQL

2001-06-05 Thread Dale Johnson

I know we're not in the business of copying mySQL,
but the REPLACE INTO table (...) values (...) could be
a useful semantic.  This is a combination INSERT or
UPDATE statement.  For one thing, it is atomic, and
easier to work with at the application level.  Also
if the application doesn't care about previous values,
then execution has fewer locking issues and race
conditions.
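
The usual emulation in PostgreSQL today, as a sketch with made-up names (note
that it is not atomic unless the table is locked first, which is part of the
point):

BEGIN;
UPDATE settings SET value = 'on' WHERE name = 'audit';
INSERT INTO settings (name, value)
SELECT 'audit', 'on'
 WHERE NOT EXISTS (SELECT 1 FROM settings WHERE name = 'audit');
COMMIT;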

comments?

Dale Johnson



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] database synchronization

2001-06-05 Thread Ruke Wang



Hi there,

I see that pgsql replication is on the TODO list.
I wonder whether there are any related sites about this issue or some
already-developed resources.

Thanks.


Ruke Wang
Software Engineer
Servgate Technologies, Inc.
(408)324-5717


[HACKERS] place for newbie postgresql hackers to work

2001-06-05 Thread James Buchanan

Could I ask a huge favour of the experienced PostgreSQL hackers to make a
simple page on the postgreSQL.org website listing TODO items that newbie
hackers can get stuck into? I was thinking of doing elog() myself, but then
again, I'm not experienced enough in PostgreSQL to do something that the
developers would actually like.

I know this is a big ask, but once it's done it's done, and we can all get on
with the job. I'm not trying to criticize anyone, and this is not a flame! Just
asking for a place for newbies to start hacking.

:-)

Appreciated very much.

--
James



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[HACKERS] BLOBs

2001-06-05 Thread Thomas Swan

I know that BLOBs are on the TODO list, but I had an idea.

I think the storage of a BLOB outside of the table is an elegant 
solution and keeps table sizes down without the bloat of the stored 
object.   Granted, if you are searching with a regular expression or 
using like or ilike clauses, you're likely to be a little slower but it 
shouldn't be by much.  More than likely, you won't be searching for 
patterns in the BLOB but rather the fields in the table associated with 
the BLOB.

Wouldn't it be wonderful if you used the methods you have already 
implemented and instead created behavior similar to the following:

on an insert
take the data that was to be the blob...
create your externally to be referenced file
save the data to the file
store the reference to that file

on an update
take the data that was to be the blob...
create your externally to be referenced file
save the data to the file
store the reference to that file
delete the old referenced file

on a delete
delete the reference to your file
delete the external file

I was thinking that the BLOB column type might be a trigger for a macro 
that could handle the lo_import, lo_export juggling...
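
A minimal sketch of the juggling being described, with made-up table and file
names (lo_import/lo_unlink here are the server-side large-object functions):

CREATE TABLE documents (id integer, body oid);

INSERT INTO documents VALUES (1, lo_import('/tmp/report.txt'));            -- insert
UPDATE documents SET body = lo_import('/tmp/report_v2.txt') WHERE id = 1;  -- update
-- (the old object would still need an lo_unlink, as in the steps above)
SELECT lo_unlink(body) FROM documents WHERE id = 1;                        -- delete...
DELETE FROM documents WHERE id = 1;                                        -- ...both parts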

I know it seems overly simplified, but having fought with MySQL and then 
tried to wrestle with postgresql and importing/exporting BLOBs, it 
seemed there might be a little more room for discussion, although I 
doubt this adds much to it...

I'd love to see something done with BLOB support during 7.2.x *hint* :)

Besides, if someone could give me some pointers as to where I might be 
able to start, I might try to contribute something myself.

Thomas



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Acucobol interface

2001-06-05 Thread Tom Lane

Roberto Fichera [EMAIL PROTECTED] writes:
 My first thought was to bypass the SQL translation and use the PostgreSQL
 low-level routines.
 I need to see the tables as record-oriented archives, so I can scan each
 record sequentially (forward and backward), lock/unlock it, insert and delete
 it, and start reading the records that match a specific key.

I don't think you want an SQL database at all.  Possibly something like
Sleepycat's Berkeley DB package is closer to what you are looking for...

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] database synchronization

2001-06-05 Thread Mauricio Breternitz

Ruke:
check out http://www.greatbridge.org/genpage?replication_top
for a project on Postgres replication and related info

 Mauricio


From: Ruke Wang [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: [HACKERS] database synchronization
Date: Tue, 29 May 2001 11:36:13 -0700

Hi there,

I see that pgsql replication is on the TODO list.  I wonder whether there are 
any related sites about this issue or some already-developed resources.

Thanks.

Ruke Wang
Software Engineer
Servgate Technologies, Inc.
(408)324-5717

_
Get your FREE download of MSN Explorer at http://explorer.msn.com


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] URGENT PROBLEM

2001-06-05 Thread Bruce Irvine



Hi All,

This is my first post, so I hope I'm in the right 
area and doing it correctly.

We are having MAJOR & URGENT problems with 
PostgreSQL occasionally corrupting tables on insert. I had a quick look 
through your archive and couldn't find anything. It seems to happen mostly 
on large inserts (lots of data into one text field). This results in 
corrupting the table and hanging the psql console whenever I try to 
INSERT, UPDATE, DELETE, SELECT, etc. Doing an "EXPLAIN SELECT * FROM table" 
shows that I have around 100 - 1000 extra rows. The problem is often fixed by 
running VACUUM against the table, however VACUUM often hangs, leaving the table 
locked until I delete the lock file.

It's only a basic INSERT statement into a basic 
table.

Thanks in advance.

Bruce Irvine.
SportingPulse.


Re: [HACKERS] BLOBs

2001-06-05 Thread Tom Lane

Thomas Swan [EMAIL PROTECTED] writes:
 I know that BLOBs are on the TODO list, but I had an idea.

I think you just rediscovered TOAST.

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] URGENT PROBLEM

2001-06-05 Thread Tom Lane

Bruce Irvine [EMAIL PROTECTED] writes:
 We are having MAJOR & URGENT problems with PostgreSQL occasionally corrupting
 tables on insert.

Can't help you with that much information.

What Postgres version is this?  (If your answer is not 7.0.3 or
7.1.2, I'm going to tell you to upgrade before anything else.)

What is the table schema?  What kind of corruption do you see exactly?

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [HACKERS] URGENT PROBLEM

2001-06-05 Thread David Ford

How 'bout posting what version of pgsql you're running, and we'll start 
back at square one :)

-d

Bruce Irvine wrote:

 Hi All,

  

 This is my first post, so I hope I'm in the right area and doing it 
 correctly.

  

 We are having MAJOR & URGENT problems with PostgreSQL occasionally 
 corrupting tables on insert.  I had a quick look through your archive 
 and couldn't find anything. It seems to happen mostly on large inserts 
 (lots of data into one text field). This results in corrupting the 
 table and hanging the psql console whenever I try to INSERT, UPDATE, 
 DELETE, SELECT, etc. Doing an "EXPLAIN SELECT * FROM table" shows that 
 I have around 100 - 1000 extra rows. The problem is often fixed by 
 running VACUUM against the table, however VACUUM often hangs, leaving 
 the table locked until I delete the lock file. 

  

 It's only a basic INSERT statement into a basic table.

  

 Thanks in advance.

  

 Bruce Irvine.

 SportingPulse.




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] Idea: quicker abort after loss of client connection

2001-06-05 Thread Tom Lane

Currently, if the client application dies (== closes the connection),
the backend will observe this and exit when it next returns to the
outer loop and tries to read a new command.  However, we might detect
the loss of connection much sooner; for example, if we are doing a
SELECT that outputs large amounts of data, we will see failures from
send().

We have deliberately avoided trying to abort as soon as the connection
drops, for fear that that might cause unexpected problems.  However,
it's moderately annoying to see the postmaster log fill with
"pq_flush: send() failed" messages when something like this happens.

It occurs to me that a fairly safe way to abort after loss of connection
would be for pq_flush or pq_recvbuf to set QueryCancel when they detect
a communications problem.  This would not immediately abort the query in
progress, but would ensure a cancel at the next safe time in the
per-tuple loop.  You wouldn't get very much more output before that
happened, typically.

Thoughts?  Is there anything about this that might be unsafe?  Should
QueryCancel be set after *any* failure of recv() or send(), or only
if certain errno codes are detected (and if so, which ones)?

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [HACKERS] large objects dump

2001-06-05 Thread Denis Perchine

Hi,

 I had PostgreSQL 7.0.3 (7.1 now), and one nice day I noticed that a large
 number of my BLOBs were broken! Although they seemed to have good content
 in the file system (xinv[0-9]+ files), I was not able to get them out via
 lo_export... After spending some time trying to fix it, I decided to write
 my own xinv-to-plain-file converter. I hope that if someone has the same
 trouble, this converter will help.
 Just compile it, put it in the directory with your xinv[0-9]+ files and run it.
 It will create new files named after the BLOB id taken from the appropriate xinv file.

Either use 7.1.x, or apply my patch to 7.0.3. And you will have no such 
problems at all. :-))

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] Re: FYI: status of native language support

2001-06-05 Thread Alessio Bragadini

Peter Eisentraut wrote:

 language supported in the next release, this would be a good time to
 gather up and volunteer for translation.

I can help with Italian translation if no one else is volunteering (or
coordinating a team)

-- 
Alessio F. Bragadini    [EMAIL PROTECTED]
APL Financial Services  http://village.albourne.com
Nicosia, Cyprus phone: +357-2-755750

It is more complicated than you think
-- The Eighth Networking Truth from RFC 1925

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Imperfect solutions

2001-06-05 Thread Karel Zak

On Tue, Jun 05, 2001 at 04:16:06PM +0800, Christopher Kings-Lynne wrote:
 Hi Bruce,
 
 I was just looking at the TODO list and noticed my name in it - cool!  (You
 spelled it wrong - but hey :) )
 
 Just thought you might like to add
 
 * ALTER TABLE ADD PRIMARY KEY
 * ALTER TABLE ADD UNIQUE

 And what about

 ALTER TABLE DROP PRIMARY KEY
 ALTER TABLE DROP UNIQUE

 BTW, it's a bit of a cosmetic feature if we already have CREATE/DROP INDEX :-)

Karel

-- 
 Karel Zak  [EMAIL PROTECTED]
 http://home.zf.jcu.cz/~zakkr/
 
 C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [HACKERS] Re: [PATCHES] Re: AW: Re: Support for %TYPE in CREATE FUNCTION

2001-06-05 Thread Karel Zak

On Tue, Jun 05, 2001 at 11:07:03AM +0200, Pascal Scheffers wrote:
 On Mon, 4 Jun 2001, Bruce Momjian wrote:
 
  Because several people want this patch, Tom has withdrawn his
  objection.  Jan also stated that the elog(NOTICE) was good enough for
  him.
 
  Patch applied.
 
 Wonderful! Thank you all! Do you have any kind of ETA for when this
 feature will be publicly available? Is this going to be included in 7.1.3
 or is it 7.2 stuff (just curious)?


 I mean we're in the 7.2 cycle -- only bugfixes go into 7.1.x.

Karel
-- 
 Karel Zak  [EMAIL PROTECTED]
 http://home.zf.jcu.cz/~zakkr/
 
 C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [CORE] Re: [HACKERS] Mirrors not tracking main ftp site?

2001-06-05 Thread The Hermit Hacker


Okay, I just removed the .hidden directory from the ftp server, which should
correct that ... I had set up that .hidden directory to be excluded though,
so I'm not sure why it was bothering things :(

On Mon, 4 Jun 2001, bpalmer wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

  However, it seems that the mirrors have a lot more stuff:
  ftp://postgresql.readysetnet.com/pub/postgresql/dev/ shows dozens
  of files back to 7.1beta6, and so do the other several I checked in
  a random sample.  Is the update mechanism failing to cause old files
  to be removed from the mirrors?

 Found the problem.  Since rsync gets a perms denied from .hidden,  it
 refuses to delete files.

 root@seraph:/root# ./rsync-postgres-ftp
 receiving file list ... opendir(.hidden): Permission denied
 done
 IO error encountered - skipping file deletion
 wrote 105 bytes  read 20762 bytes  2782.27 bytes/sec
 total size is 521221478  speedup is 24978.27

 When I changed the script to:

 #/usr/local/bin/rsync -avz --delete hub.org::postgresql-ftp
 /mnt/ftpd/pub/postgresql
 /usr/local/bin/rsync -avz --ignore-errors --delete
 hub.org::postgresql-ftp /mnt/ftpd/pub/postgresql

 It worked.  People need to either use the --ignore-errors or have the
 .hidden folder on the server removed.

 - - Brandon


 - 
  b. palmer,  [EMAIL PROTECTED]    pgp: crimelabs.net/bpalmer.pgp5

 -BEGIN PGP SIGNATURE-
 Version: PGPfreeware 5.0i for non-commercial use
 Charset: noconv

 iQA/AwUBOxxDFPYgmKoG+YbuEQLeiACeIhRJQ0HTZQCJc+aqHzqSfTods7IAnjEO
 m9vtW2WRh3PMPXdlWeEBzTzY
 =u6ep
 -END PGP SIGNATURE-



 ---(end of broadcast)---
 TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[HACKERS] Re: Strange query plan

2001-06-05 Thread Tom Lane

Oleg Bartunov [EMAIL PROTECTED] writes:
 The best plan I've got eliminating  IN predicate:
 select msg_prt.tid as mid from msg_prt
  where exists (select idx.tid from idx where msg_prt.tid=idx.tid
and idx.did=1 and idx.lid = 1207 and idx.lid=59587 )

Surely that returns zero rows?

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[HACKERS] Re: Strange query plan

2001-06-05 Thread Tom Lane

Oleg Bartunov [EMAIL PROTECTED] writes:
 should be
 select msg_prt.tid as mid from msg_prt
  where exists (select idx.tid from idx where msg_prt.tid=idx.tid
and idx.did=1 and ( idx.lid = 1207 or idx.lid=59587 ));
 but this is not a big win.

Shouldn't be any win at all: the IN expression-list notation will get
translated to exactly that form.

 Anyway, what's about original query ?

IN/EXISTS subqueries suck.  This has been true for a long time and is
going to be true for a while longer, unless someone else fixes it before
I have a chance to look at it.  See if you can't rewrite your query as
a plain join.
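
For instance, something along these lines (a sketch using the tables from the
quoted query; DISTINCT guards against duplicate idx matches):

SELECT DISTINCT msg_prt.tid AS mid
  FROM msg_prt, idx
 WHERE msg_prt.tid = idx.tid
   AND idx.did = 1
   AND idx.lid IN (1207, 59587);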

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



RE: [HACKERS] Imperfect solutions

2001-06-05 Thread Stephan Szabo

On Tue, 5 Jun 2001, Christopher Kings-Lynne wrote:

   Just thought you might like to add
  
   * ALTER TABLE ADD PRIMARY KEY
   * ALTER TABLE ADD UNIQUE
 
   And what
 
   ALTER TABLE DROP PRIMARY KEY
   ALTER TABLE DROP UNIQUE
 
   BTW, it's a little cosmetic feature if we have CREATE/DROP INDEX :-)
 
 Those two points are already mentioned - I have another 90% patch ready to
 go that will add that functionality as well...

As a question, are you doing anything to handle dropping referenced unique
constraints or are we just waiting on that until a referencing system
is built?



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Strange query plan

2001-06-05 Thread Tom Lane

Oleg Bartunov [EMAIL PROTECTED] writes:
 select msg_prt.tid as mid from msg_prt
  where exists (select idx.tid from idx where msg_prt.tid=idx.tid
 and idx.did=1 and idx.lid in (1207,59587) )
 NOTICE:  QUERY PLAN:

 Seq Scan on msg_prt  (cost=0.00..119090807.13 rows=69505 width=4)
   SubPlan
 -  Index Scan using idxidx, idxidx on idx  (cost=0.00..1713.40 rows=1 width=4)

Actually, this example does reveal an unnecessary inefficiency: the
planner is only using the idx.lid in (1207,59587) clause for the
indexscan, ignoring the fact that the did and tid clauses match the
additional columns of your three-column index.  The attached patch
should improve matters.

regards, tom lane


*** src/backend/optimizer/path/indxpath.c.orig  Sun May 20 16:28:18 2001
--- src/backend/optimizer/path/indxpath.c   Tue Jun  5 12:38:21 2001
***
*** 397,403 

clause, false);
  }
  
! /*
   * Given an OR subclause that has previously been determined to match
   * the specified index, extract a list of specific opclauses that can be
   * used as indexquals.
--- 397,403 

clause, false);
  }
  
! /*--
   * Given an OR subclause that has previously been determined to match
   * the specified index, extract a list of specific opclauses that can be
   * used as indexquals.
***
*** 406,415 
   * given opclause.However, if the OR subclause is an AND, we have to
   * scan it to find the opclause(s) that match the index.  (There should
   * be at least one, if match_or_subclause_to_indexkey succeeded, but there
!  * could be more.)Also, we apply expand_indexqual_conditions() to convert
!  * any special matching opclauses to indexable operators.
   *
   * The passed-in clause is not changed.
   */
  List *
  extract_or_indexqual_conditions(RelOptInfo *rel,
--- 406,430 
   * given opclause.However, if the OR subclause is an AND, we have to
   * scan it to find the opclause(s) that match the index.  (There should
   * be at least one, if match_or_subclause_to_indexkey succeeded, but there
!  * could be more.)
!  *
!  * Also, we can look at other restriction clauses of the rel to discover
!  * additional candidate indexquals: for example, consider
!  *... where (a = 11 or a = 12) and b = 42;
!  * If we are dealing with an index on (a,b) then we can include the clause
!  * b = 42 in the indexqual list generated for each of the OR subclauses.
!  * Essentially, we are making an index-specific transformation from CNF to
!  * DNF.  (NOTE: when we do this, we end up with a slightly inefficient plan
!  * because create_indexscan_plan is not very bright about figuring out which
!  * restriction clauses are implied by the generated indexqual condition.
!  * Currently we'll end up rechecking both the OR clause and the transferred
!  * restriction clause as qpquals.  FIXME someday.)
!  *
!  * Also, we apply expand_indexqual_conditions() to convert any special
!  * matching opclauses to indexable operators.
   *
   * The passed-in clause is not changed.
+  *--
   */
  List *
  extract_or_indexqual_conditions(RelOptInfo *rel,
***
*** 417,470 
Expr *orsubclause)
  {
List   *quals = NIL;
  
!   if (and_clause((Node *) orsubclause))
{
  
!   /*
!* Extract relevant sub-subclauses in indexkey order.  This is
!* just like group_clauses_by_indexkey() except that the input and
!* output are lists of bare clauses, not of RestrictInfo nodes.
!*/
!   int   *indexkeys = index->indexkeys;
!   Oid   *classes = index->classlist;
  
!   do
{
!   int curIndxKey = indexkeys[0];
!   Oid curClass = classes[0];
!   List   *clausegroup = NIL;
!   List   *item;
  
!   foreach(item, orsubclause->args)
{
if (match_clause_to_indexkey(rel, index,
   
  curIndxKey, curClass,
!  
  lfirst(item), false))
!   clausegroup = lappend(clausegroup, 
lfirst(item));
}
  
!   /*
!* If no clauses match this key, we're done; we don't want to
!* look at keys to its right.
!*/
!   if (clausegroup == NIL)
!

Re: [HACKERS] Can the backend return more than one error message per PQexec?

2001-06-05 Thread Tom Lane

Peter Eisentraut [EMAIL PROTECTED] writes:
 In PQexec() and also in parseInput() (both fe-exec.c) there is a provision
 for, if more than one result set is returned, to concatenate the error
 messages (while only returning the last result set).  My question is how a
 backend can return more than one error message per query string?

That concatenation hack was added to deal with an actual case where
information was getting dropped, but I am not sure that it was something
that would arise in the normal protocol.  IIRC it was something like

1. backend sends error in response to bogus user query;

2. backend encounters fatal problem during error cleanup (or gets
   shutdown signal from postmaster), and sends another error message
   to indicate this before it closes up shop.

I think there may also be cases where we need to stuff both
backend-generated messages and libpq-generated messages into the
error result.  That doesn't directly affect the protocol however.

Since there will always be asynchronous conditions to deal with, it'd
be pretty foolish to design a protocol that assumes that exactly one
'E' message will arrive during a PQexec cycle.

 I am currently looking into extending the protocol so that more fields can
 be in an ErrorResponse (e.g., error codes).  If this were to happen then
 we'd need a smarter way of handling more than one error message per cycle.

Only if you want to overload ErrorResponse so that successive 'E'
messages mean different things.  I do not think that would be a good
design.  It'd be better to allow ErrorResponse to carry multiple fields.
This'd imply a protocol version bump, but so what?  Changing the
semantics of ErrorResponse probably ought to require that anyway.

(I have some other ideas that would require a protocol version bump too,
like fixing the broken COPY and FastPath parts of the protocol...)

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



RE: [HACKERS] Imperfect solutions

2001-06-05 Thread Stephan Szabo


On Wed, 6 Jun 2001, Christopher Kings-Lynne wrote:

   Those two points are already mentioned - I have another 90%
  patch ready to
   go that will add that functionality as well...
 
  As a question, are you doing anything to handle dropping referenced unique
  constraints or are we just waiting on that until a referencing system
  is built?
 
 By that do you mean: what happens when you drop a primary key that is
 referenced by a foreign key?
 
 My answer: Forgot about that ;)  I'll see what I can do but anytime
 investigation of foreign keys is required it's a real pain.  Foreign keys
 are kinda next on my list for work, so I might look at it then if it's too
 difficult right now.  (I've got a query that can find all foreign keys on a
 relation, and what they relate to, that I'm going to add to psql).

I wouldn't worry all that much about it since you could still break it
with drop index, but I wanted to know if you'd done anything with it
and if so how general it was.

How'd you do the splitting of the arguments to get the columns referenced?
That was the biggest problem I was having, trying to get the bytea split
up.  (Well, without writing a function to do it for me)
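
For reference, a sketch of the kind of catalog query being discussed (it uses
the 7.1-era pg_trigger columns and a made-up table name; the referenced column
names themselves are buried inside the bytea tgargs field, which is exactly the
hard part):

SELECT t.tgconstrname AS constraint_name,
       c.relname      AS referencing_table,
       c2.relname     AS referenced_table
  FROM pg_trigger t, pg_class c, pg_class c2
 WHERE t.tgisconstraint
   AND t.tgrelid = c.oid
   AND t.tgconstrrelid = c2.oid
   AND c.relname = 'mytable';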

 My other questions then are:
 
 Does anything else (other than fk's) ever reference a primary key?
 What can reference a unique key?

Foreign keys are the only one I know of, but they can reference either.


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster