Re: [HACKERS] Log rotation

2004-05-14 Thread Fernando Nasser
Lamar Owen wrote:

Anyway, Syslog is not an option for us.  We need flat files.

Ok, riddle me this:
If I have PostgreSQL set to log to syslog facility LOCAL0, and a local0.none 
on /var/log/messages and local0.* to /var/log/pgsql (assuming only one 
postmaster, unfortunately) then you get a flat file.

The problem is that syslogging has more overhead than a plain append to a 
file.  There are some very strict response time AppServer applications 
where we want to keep these things out of the picture.

The other problem is that we have some nice graphical tools for 
configuring logging, but /etc/syslog.conf is a very hard one to 
automate due to the pattern matching.  We can add some lines there, like 
one to get local0 to a specific file, but it will probably be guesswork 
and require human inspection anyway.


I can see that in a multipostmaster setting how you might want some 
differentiation between postmasters, but ISTM that the tool reading these 
logs should be trained in how to separate loglines out.  I use a product 
called SmoothWall as a firewall/VPN solution, and its log reporting modules 
split out loglines in this way, so that I can have the ipsec logs in one 
browser page and the l2tp logs elsewhere, and the ppp logs elsewhere, and the 
kernel logs elsewhere.

It may be desirable to logrotate them at different times as well, so 
they would have to be in different files.


Or you'll have to include some other log rotator.
That is what Tom is proposing.
--
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] COPY formatting

2004-03-19 Thread Fernando Nasser
Andrew Dunstan wrote:


Karel Zak wrote:

The problem with CSV is that it will work correctly with the new protocol
only, because old versions of clients are newline sensitive, and CSV
attributes delimited by quotation marks can contain newlines:
John, Smith, The White House
1600 Pennsylvania Avenue NW
Washington, DC 20500, male, open source software office
It is one record.


(Long Live President Smith!)

I have never seen such a beast.
Export from a spreadsheet where people have formatted the cell with the 
address in it.

Regards,
Fernando


Re: [HACKERS] COPY formatting

2004-03-17 Thread Fernando Nasser
Hans-Jürgen Schönig wrote:
Karel Zak wrote:

 Hi,

 in TODO is the item: * Allow dump/load of CSV format. I don't think
 it's a clean idea. Why CSV and why not something else? :-)
 And why not allow users full control of the format via their own
 function?  It means something like:
 
 COPY tablename [ ( column [, ...] ) ]
 TO { 'filename' | STDOUT }
 [ [ WITH ]   [ BINARY ]
  [ OIDS ]
  [ DELIMITER [ AS ] 'delimiter' ]
  [ NULL [ AS ] 'null string' ]
  [ FORMAT funcname ] ]
   
 The formatting function API can be pretty simple:

 text *my_copy_format(text *attrdata, int direction,  int 
nattrs, int attr, oid attrtype, oid relation)

 -- it's pseudocode of course; it should use the standard fmgr
 interface.
 
 It's probably interesting for non-binary COPY version.
 
 Comments?

Karel



Karel,

This seems to be an excellent idea.
People have already asked for many different formats.
Usually I recommend that they use: psql -c COPY ... dbname | awk 
Since Windows will be supported soon, and there it is hard to pipe data to a 
useful program (awk, sed, ...), maybe this feature would help a lot in 
that case.
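The pipeline mentioned above might look like this (table and database names are hypothetical; COPY's text format is tab-separated, so the awk stage just swaps the separator, shown here on a canned line of COPY-style output so the transformation is visible):

```shell
# Typical invocation (hypothetical names):
#   psql -c "COPY people TO STDOUT" mydb | awk -F'\t' -v OFS=',' '{ $1 = $1; print }'
# The awk stage alone, on a canned line of COPY-style output:
printf 'John\tSmith\tWashington DC\n' | awk -F'\t' -v OFS=',' '{ $1 = $1; print }'
# -> John,Smith,Washington DC
```

The `$1 = $1` assignment forces awk to rebuild the record with the output field separator, which is what turns tabs into commas.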

Could a CSV-generating function be provided with the distribution then?

Regards,
Fernando


Re: [HACKERS] COPY formatting

2004-03-17 Thread Fernando Nasser
Tom Lane wrote:
Karel Zak [EMAIL PROTECTED] writes:

The formatting function API can be pretty simple:
text *my_copy_format(text *attrdata, int direction, 
int nattrs, int attr, oid attrtype, oid relation)


This seems like it could only reasonably be implemented as a C function.
I can't really imagine the average user of COPY wanting to write C in
preference to, say, an external perl script.  What's the real use-case
for the feature?
That is why I suggested providing a pre-written/pre-compiled/installed 
function for CSV (call it CSV?).  Advanced users could still write their 
own, as people can write many other things if they know their way around.



Re: [HACKERS] Log rotation

2004-03-14 Thread Fernando Nasser
Lamar Owen wrote:
On Saturday 13 March 2004 10:36 am, Fernando Nasser wrote:

The problem is that syslogging has more overhead than a plain append to a
file.  There are some very strict response time AppServer applications
where we want to keep these things out of the picture.


Well, I have a couple of ideas on that.  First, a better syslog.  SDSC secure 
syslog has been available for some time and is BSD licensed, and is 
specifically designed for high-performance RFC3195 compliant secure syslog.

I hope someone in the OS group is watching it.  I will ask.


Second, if the syslog server is separate, then you might get better 
performance as the size of the logs grows, since appending to a very large file 
is slower than generating an IP datagram.

It may be.  It takes some effort to convince customers to change their 
practices though.  I will try to set up something like this next time we 
run benchmarks and see if there is any noticeable change in the results.


Third, it seems that you don't have enough profiling data to support a 'syslog 
is bad' position. 
That is true.  It is hearsay, from people who run production 
environments.  It may be a belief based on old experiences, though.  If 
someone had real, up-to-date, reliable numbers maybe we could use them as an 
argument to the users.


Java is bad too, from a performance standpoint (at this 
time, at least, you always get a performance hit of some kind using any 
portable code). 
Actually, this is increasingly less noticeable.  With just-in-time 
compilation or pre-compiled code (like with GNU gcj) it is a one-time 
hit, or not even that.


But if this AppServer is based on the ArsDigita Java 
codebase, you have other performance issues already (the Tcl codebase, OTOH, 
is very fast, partly because AOLserver is a faster appserver than most Java 
appservers).

No, it is based on ObjectWeb JOnAS (Java Open Source Application 
Server).  The Servlet Container is Tomcat and the database is 
PostgreSQL (it also works with Oracle, DB2, MySQL, etc.).



The other problem is that we have some nice graphical tools for
configuring logging, but /etc/syslog.conf is a very hard one to
automate due to the pattern matching.  We can add some lines there, like
one to get local0 to a specific file, but it will probably be guesswork
and require human inspection anyway.


Control Center looks promising.  But I much prefer having a central syslog 
server than having each server in a cluster having separate logs.

There are certain advantages to doing it that way and I guess some 
customers will indeed do it.  I believe you can very well set up logging 
to a centralized syslog with Control Center (if not, please open a bug 
report).



It may be desirable to logrotate them at different times as well, so
they would have to be in different files.


Different facilities can then go to different files.  The problem, of course, 
is syslog's limited number of facilities.  

If not used for other things, maybe 8 would be a good enough number.


Or you'll have to include some other log rotator.


That is what Tom is proposing.


I am not opposed to including a small logrotator for stderr logging.  I just 
think it is redundant when a good highly configurable logging facility 
already exists.  But, if Red Hat wants to pay Tom to do it... :-)
Maybe it is a question of preference, or what is more convenient may depend 
on the specific circumstances of the installation.  I would prefer to 
give the option to the users, if possible.

Cheers,
Fernando




Re: [HACKERS] Log rotation

2004-03-14 Thread Fernando Nasser
Lamar Owen wrote:
On Saturday 13 March 2004 01:00 pm, Fernando Nasser wrote:

There are some applications which run in servers with very strict
response times, where any syscall or network packet that can be saved counts.


Ok, what about pipe overhead?  If we're going to split hairs, let's split all 
of them.  The design of the pipeline in this logrotator filter will have to 
be such that minimal overhead occurs, because, at the real-time level, every 
concurrent process also counts.

Splitting hairs was not my intention :-)  But there is always something 
to gain or learn from a good debate...

What you say is true.  Nobody seems to consider that there is some 
overhead in piping as well.  Perhaps it is another (unproven?) belief 
that it will not be as significant as using a central syslog service.


The number of generated messages.


Maybe that is an area that can be worked on, i.e. reducing log
verbosity.  Is 7.4.x much better than 7.3.x in that respect?


There are several levels documented in postgresql.conf.  Try the terse level 
and see what happens.
It used to be that if you lowered it too much you would get fewer 
messages but not enough information.  There is a tendency for this to 
improve over time, I would guess.

There are cases where a problem is suspected and then users have to 
raise the log level in the hope that it gives them some clue when the 
problem manifests itself (which sometimes, at least in the first instance, 
only happens in the production environment, as it is triggered by some weird 
usage pattern).

Best regards,
Fernando


Re: [HACKERS] Log rotation

2004-03-13 Thread Fernando Nasser
Bruno Wolff III wrote:
On Fri, Mar 12, 2004 at 15:19:29 -0500,
  Fernando Nasser [EMAIL PROTECTED] wrote:
Bruno Wolff III wrote:

I can see their problem with making a dependency to all of apache or 
including
multilog in their distribution. But they probably could include something
that is only a logger either using some project that is only a logger or
splitting out the logger that is bundled with apache. Then it wouldn't
be unreasonable to make a dependency for postgres requiring that logging
rpm. Other services could also make use of this logging package as well.

Yes, that would be nice.  I have no idea how difficult it would be to 
extricate the logrotate program from Apache.  I also don't know if there 
would be any license restrictions: would we be able to redistribute it as 
an independent package?  I don't know the answer.


I was suggesting this as something a distro maintainer (such as Red Hat)
could do. I think that the postgres developers shouldn't be spending
time doing this. They should just be suggesting some possibilities
in the admin part of the documentation.
The distro maintainers should only package proven solutions created by the 
community.  When they add manpower to a project they must do it through the 
community process, not in house.

A decent logging mechanism (that allows log rotation) is important to 
postgres and should not require one to depend on a distro.  They should 
be able to get it from pgsql land.

It would be nice if there was a community recommended and community 
tested logging solution, even if the instructions to set it up required 
one to download and install some apache package.

But adding production-quality logging to PostgreSQL is still a postgresql 
community TODO and should not be delegated to the distros and thus 
restricted to a single distro.

Regards,
Fernando




Re: [HACKERS] Log rotation

2004-03-13 Thread Fernando Nasser
Lamar Owen wrote:

Anyway, Syslog is not an option for us.  We need flat files.


Ok, riddle me this:

If I have PostgreSQL set to log to syslog facility LOCAL0, and a local0.none 
on /var/log/messages and local0.* to /var/log/pgsql (assuming only one 
postmaster, unfortunately) then you get a flat file.
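(In /etc/syslog.conf, the routing just described would look roughly like the sketch below; the LOCAL0 facility and the file paths are the ones from the quoted text.)

```
# /etc/syslog.conf sketch: keep PostgreSQL out of the general log,
# give it its own flat file
*.info;local0.none        /var/log/messages
local0.*                  /var/log/pgsql
```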

The problem is that syslogging has more overhead than a plain append to a 
file.  There are some very strict response time AppServer applications 
where we want to keep these things out of the picture.

The other problem is that we have some nice graphical tools for 
configuring logging, but /etc/syslog.conf is a very hard one to 
automate due to the pattern matching.  We can add some lines there, like 
one to get local0 to a specific file, but it will probably be guesswork 
and require human inspection anyway.


I can see that in a multipostmaster setting how you might want some 
differentiation between postmasters, but ISTM that the tool reading these 
logs should be trained in how to separate loglines out.  I use a product 
called SmoothWall as a firewall/VPN solution, and its log reporting modules 
split out loglines in this way, so that I can have the ipsec logs in one 
browser page and the l2tp logs elsewhere, and the ppp logs elsewhere, and the 
kernel logs elsewhere.

It may be desirable to logrotate them at different times as well, so 
they would have to be in different files.


Or you'll have to include some other log rotator.

That is what Tom is proposing.

--
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9


Re: [HACKERS] Log rotation

2004-03-13 Thread Fernando Nasser
Robert Treat wrote:
Different postmasters = different conf files. Just set your syslog_facility 
and/or your syslog_ident differently and it should be pretty easy to separate 
the logs.  Actually, now that I have started using syslog fairly regularly, I 
find it hard to believe it would be worth the effort to try to recreate a 
logging facility as feature-complete as syslog when we have syslog available.  
It ranks right up there with recreating cron, another feature many people think an 
enterprise database has to have.  Personally I think in-place upgrades are 
far, far more important than either of those two, to both end users and to 
the development community.
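(A sketch of the per-postmaster settings Robert refers to, using the 7.4-era parameter names; the ident string here is a made-up example:)

```
# postgresql.conf -- one copy per data directory / postmaster
syslog = 2                    # 0 = stderr, 1 = both, 2 = syslog only
syslog_facility = 'LOCAL0'    # pick a different LOCAL0..LOCAL7 per postmaster
syslog_ident = 'pg_main'      # hypothetical tag to tell this postmaster's lines apart
```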

The concerns with syslog were related to performance (additional load on 
the server).  Perhaps these concerns were unfounded.

I am not saying that upgrades are not important.  I have been asking for 
it and have even tried to come up with a process to keep a continuously 
functional pg_upgrade program in sync with the CVS head development, 
even adding bits to the catalog for one release to allow the conversion 
to be done. Ask Bruce and Tom; I mentioned this to them 2 or 3 years ago.

But log rotation is a relatively small task in comparison, and it is 
at least as visible to production users (at least in 
the enterprise).  I am talking about the benefit/effort ratio.

Regards,
Fernando


Re: [HACKERS] Log rotation

2004-03-13 Thread Fernando Nasser
Rod Taylor wrote:
I can see that in a multipostmaster setting how you might want some
differentiation between postmasters, but ISTM that the tool reading these
logs should be trained in how to separate loglines out. [snip]
Different postmasters = different conf files. Just set your syslog_facility 
and/or your syslog_ident differently and it should be pretty easy to separate 
the logs.  Actually, now that I have started using syslog fairly regularly, I 


Not that I'm volunteering, but I think the biggest issue is many users
simply don't know how to approach the problem. Some docs on using
syslog, cron, etc. with PostgreSQL to accomplish maintenance jobs would
probably be enough.
It is very easy to set up logging with the Control Center tool, except 
that it does not handle the /etc/syslog.conf bit (you have to add the 
entry to get a specific LOCAL facility to a specific file by hand).

Maybe someone could make some measurements to demystify the performance 
impact of syslog.  There is this generalized belief in IS departments 
that one should minimize it.  Perhaps it is old FUD that is no longer true.



--
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9


Re: [HACKERS] Log rotation

2004-03-13 Thread Fernando Nasser
Patrick Welche wrote:
On Sat, Mar 13, 2004 at 10:36:23AM -0500, Fernando Nasser wrote:

Lamar Owen wrote:

Ok, riddle me this:

If I have PostgreSQL set to log to syslog facility LOCAL0, and a 
local0.none on /var/log/messages and local0.* to /var/log/pgsql (assuming 
only one postmaster, unfortunately) then you get a flat file.
The problem is that syslogging has more overhead than a plain append to a 
file.  There are some very strict response time AppServer applications 
where we want to keep these things out of the picture.


I thought it was an advantage to say log to that box running syslog over
there and leave my disk alone - what do you have in mind with AppServer
applications?
And add more packets to the network, buffering, etc.?  I don't think so.

There are some applications which run in servers with very strict 
response times, where any syscall or network packet that can be saved counts.


It may be desirable to logrotate them at different times as well, so 
they would have to be in different files.


syslogd with newsyslog, just like any other log file? I must be missing
something... I don't see why postgresql is different.
The number of generated messages.

Maybe that is an area that can be worked on, i.e. reducing log 
verbosity.  Is 7.4.x much better than 7.3.x in that respect?



--
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9


Re: [HACKERS] Log rotation

2004-03-12 Thread Fernando Nasser
Tom Lane wrote:
Fernando Nasser [EMAIL PROTECTED] writes:

Please remind me again why the postmaster cannot close and open the log 
file when it receives a SIGHUP (to re-read configuration)?


(a) Because it never opened it in the first place --- the log file is
whatever was passed as stderr.
(b) Because it would not be sufficient to make the postmaster itself
close and reopen the file; every child process would have to do so also.
Doing this in any sort of synchronized fashion seems impossible.
Now I remember.  Thanks for reminding me of that.




It's much cleaner to have stderr be a pipe to some separate collector
program that can handle log rotation (ie, the Apache solution).
We could also create a pipe and start a new process (logger) and give 
it the other end of the pipe and the name of the log file.  We could 
send it a SIGHUP after we reread the configuration file.

But just doing a pipe on the OS level is way simpler.
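A minimal sketch of such a stderr collector, under stated assumptions: POSIX shell, size-based rotation, and made-up names and a tiny byte limit so the demo actually rotates. A real collector, such as Apache's rotatelogs, would typically rotate by time and handle signals as well.

```shell
# Minimal stderr-collector sketch.  Intended use would be something like:
#   postmaster -D "$PGDATA" 2>&1 | collect
logdir=$(mktemp -d)
maxbytes=64          # tiny so the demo below rotates; real use would be MBs
collect() {
  n=0
  while IFS= read -r line; do
    printf '%s\n' "$line" >> "$logdir/pg.log.$n"
    # once the current segment is big enough, start a new one; old
    # segments can then be compressed or expired independently
    if [ "$(wc -c < "$logdir/pg.log.$n" | tr -d ' ')" -ge "$maxbytes" ]; then
      n=$((n + 1))
    fi
  done
}
# demo: feed 20 short lines through the collector
for i in $(seq 1 20); do echo "line-$i"; done | collect
ls "$logdir"   # several pg.log.N segments
```

Because the collector is a separate process on the other end of a pipe, no backend ever has to close and reopen its log file, which is exactly the synchronization problem described above.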

I don't really care how it's done, but IMO an enterprise-class 
database must be able to do log rotation.  Logging to Syslog is not an 
option (especially with our verbosity) -- users must be able to use flat 
files for logging.

I have never seen so many customer complaints and bug reports about a 
single item as the ones we have received here about logging.  I think 
this should be the number 1 item in the TODO list.



Thanks again for the clarifications.

Regards,
Fernando


Re: [HACKERS] Log rotation

2004-03-12 Thread Fernando Nasser
Bruno Wolff III wrote:
I can see their problem with making a dependency to all of apache or including
multilog in their distribution. But they probably could include something
that is only a logger either using some project that is only a logger or
splitting out the logger that is bundled with apache. Then it wouldn't
be unreasonable to make a dependency for postgres requiring that logging
rpm. Other services could also make use of this logging package as well.
Yes, that would be nice.  I have no idea how difficult it would be to extricate 
the logrotate program from Apache.  I also don't know if there would be any 
license restrictions: would we be able to redistribute it as an independent 
package?  I don't know the answer.

Regards,
Fernando


Re: [HACKERS] Log rotation

2004-03-12 Thread Fernando Nasser
Hi Lamar,

Lamar Owen wrote:
On Friday 12 March 2004 09:24 am, Fernando Nasser wrote:

I don't really care how it's done, but IMO an enterprise-class
database must be able to do log rotation.  Logging to Syslog is not an
option (especially with our verbosity) -- users must be able to use flat
files for logging.


Uh, we have many many many different ways to use syslog.  See my other message 
on the topic.

Which other message?

Anyway, Syslog is not an option for us.  We need flat files.



I have never seen so many customer complaints and bug reports about a
single item as the ones we have received here about logging.  I think
this should be the number 1 item in the TODO list.


Uh, upgrading?  I'm sure we have more reports about upgrading than logging.

Yeah, but that only comes with a full version upgrade :-)

The logging one keeps popping up as people try to use the server in production.


But see my reply to bug 103767 for more.
See my reply to your reply ;-)

Best regards,
Fernando




[HACKERS] Log rotation

2004-03-11 Thread Fernando Nasser
Hi,

Please remind me again why the postmaster cannot close and open the log 
file when it receives a SIGHUP (to re-read configuration)?  This was 
discussed before but I cannot remember if and why this was not possible 
or if the arguments are still valid after -l was added.

If this were possible it could even be done after re-reading the 
configuration, and even use the value of a GUC variable for the log file 
name, which would allow us to change the value of -l without the need to 
restart (actually, with a GUC variable set in postgresql.conf one could 
set the log without the -l, but would perhaps lose a few initial messages).

regards to all,
Fernando


[HACKERS] Multiple database services and multiple versions on Red Hat Linux systems

2003-10-27 Thread Fernando Nasser

The way it works is that we require a specific service script for each 
database service (that is, one listening on each port).  Each of these 
services has an init script in /etc/init.d and a corresponding 
configuration file in /etc/sysconfig.  We use the 'chkconfig' utility to 
decide if each of those services will be activated on boot or not (it 
manipulates links under /etc/init.d for each SysV run level).

We currently support multiple versions running.  I have myself half a 
dozen database services on my system with versions that range from 7.1 
to 7.4.  As each configuration file for each service points to the 
location of the proper binaries we have no problems dealing with this.

For example:

# cat /etc/sysconfig/rhdb-production
PGDATA=/usr/local/pgsql73/data
PGDATA2=/var/lib/pgsql2
PGDATA3=/var/lib/pgsql3
PGDATA4=/var/lib/pgsql4
PGENGINE=/home/fnasser/INST/pgsql73/bin
PGPORT=5433
PGLOG=/var/log/rhdb/rhdb-production
PGINITOPTIONS=--lc-messages=pt_BR
As you can see, PGENGINE points to a binary that I built myself.  It 
is unfortunate that I can only have one RPM installed at a time.

Oliver Elphick has suggested different package names for each version 
that has a different catalog number (i.e., we need a pg_dump + 
pg_restore, and we can't use one version's postmaster to access another 
version's data areas).

If we configure each of these packages with a different base path which 
includes the version and install, of course, to these versioned 
directories, we will end up with a setup similar to what I have on my 
system with the backends I've built myself.  It can even be a Java-like 
solution:

/usr/pgsql/postgresql71
/usr/pgsql/postgresql72
/usr/pgsql/postgresql73
/usr/pgsql/postgresql74
or have them scattered if the LSB so requires (I believe it does not 
address this case, though).

As the binaries have been configured with the versioned paths, all RPMs 
are normal (not relocatable) and the binaries will refer to the 
libraries and other files of the proper version.  So by setting one's 
path, the user can use the version she or he sees fit.

For Red Hat's users (and Debian's, I believe), the 'alternatives' 
utility can be used to direct links from /usr/bin and such to the chosen 
version files, so a default could be established and for such there 
would be no need to change the PATH variable.

Also, the multiple versioning can be kept only on the server side.  On 
the client side the latest version will suffice if it guarantees a 
(minimum) 2 version backwards compatibility (as we do with the JDBC driver).

Besides the client-side backward compatibility, what the core 
postgresql team could also do to support this would be to add version 
checks and issue warnings on mismatches (or errors if used against a 
version too old).  Also, make sure the path of the binary implies 
the location of the other files (i.e., the path from configure is always 
used, and not some hardcoded value).

As you see, these goals can be achieved without any changes in the 
postgresql community sources.

Regards to all,
Fernando
--
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9


Re: [HACKERS] postgres --help-config

2003-10-15 Thread Fernando Nasser
Tom Lane wrote:
Actually, I think the point Peter's been making is that it's not clear
we need a user-readable output format at all.  The variant you are
calling --help-config-raw is the only one that needs to be supported in
7.4, and anything else should (arguably) be left off so that it doesn't
constrain a future redesign.
I agree.  We can revive the pg_guc utility to obtain the raw output using 
'postgres --help-config', format it the way you want, and give it all 
the switches that people may want.

I just wonder if having a help-like option without human-readable output
is OK.  We can always document that it is for machine consumption, of 
course.

--
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9


Re: [HACKERS] postgres --help-config

2003-10-15 Thread Fernando Nasser
Bruce Momjian wrote:

(...)
I guess if someone needs raw with headers in the future, we
could add --help-config-raw-headers.
I don't mind if you make it always with the headers.  We can easily 
strip the first line when reading the file and people can easily strip 
it piping the output through a filter before sorting it etc.

I am thinking that the headers would allow an output formatting utility 
to do it in a way that is independent of the version, and avoid having 
to run pg_config --version as well.  As the values are all strings it 
can blindly create a column for every header.  The only fields that the 
program needs to know about (like name and group) are not likely to change.

No big deal though.

--
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9


Re: [HACKERS] SQL flagger

2003-01-15 Thread Fernando Nasser
The SQL Flagger is only required for Intermediate SQL.  SQL'92 23.4 says Entry 
SQL implementations may provide one, but are not required to.

This said, it is a nice to have feature for the reasons that Peter pointed out.

But as I understand it, this is a sort of warning feature, and depending on the 
extent-of-checking option may be just something that the parser itself detects 
(Syntax Only) or something we detect in the analyzer code (Catalog Lookup). 
The second one has security issues (the standard suggests using a specific 
Information Schema) so we may want to avoid it for now.

Basically, we would issue a FLAGGER message, if the level of flagging is set to 
Entry SQL Flagging, every time the parser finds a clause that is not Entry SQL. 
 Similarly for non-Intermediate SQL constructs if the level is Intermediate SQL 
Flagging. We would, of course, issue a FLAGGER message for all our PostgreSQL-specific 
extensions at any level (if Flagging is enabled).

If I understood it correctly, we only need a new elog level and to add a few 
elog calls in some of gram.y's clauses...

Regards,
Fernando

Tom Lane wrote: Peter Eisentraut [EMAIL PROTECTED] writes:


The SQL standard requires conforming implementations to provide an
SQL flagger facility ...




I think we could implement this with relatively little intrusion if we
create an interface routine, say SQLFlagger(), which takes the entire
parsetree as its argument and can then analyze the syntax in as much
detail as it likes.  (Of course that function would only be called if
a certain Boolean flag is set.)  But a few syntax elements would need
to be checked right within gram.y, such as the omission of the drop
behavior or the use of TEMP vs. TEMPORARY, which is resolved right in
the parser and cannot be detected later.




Should we implement this?



I think we would be better off to implement this as a standalone program
rather than as a backend mode option.

In general, gram.y's behavior should never depend on any runtime
variables.  If it does, you get inconsistent results from
	SET var = val ; ... other stuff ...
(one query string) compared to
	SET var = val
	... other stuff ...
(two query strings), because the whole query string is fed through
gram.y before any of it is executed.

Plan B, if you really want to do this in the backend, would be to alter
gram.y's output trees so that all the non-spec constructs are still
recognizable in the raw parse tree, and any conversions needed are done
in analyze.c's processing (which would also be the place to issue the
flagger warnings).  This is not necessarily a bad idea; I've always
thought that we do too much work in gram.y anyway.  But you will be
fighting a permanent rear-guard action to keep people from
re-introducing variant syntaxes by quick gram.y hacks.

In general I like the idea of a standalone program better, however.
It would be able to have its own grammar tuned to its needs.  I don't
think there would be much maintenance problem introduced thereby,
since presumably the flagger's grammar is driven by the spec and won't
need to change when we change what Postgres accepts.

			regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly





--
Fernando Nasser
Red Hat - Toronto   E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] 7.4 - TODO : alter table drop foreign key

2002-12-05 Thread Fernando Nasser
Dan Langille wrote: On 5 Dec 2002 at 11:47, Dan Langille wrote:



Primary key: watch_list_staging_pkey
Check constraints: watch_list_stag_from_watch_list 
((from_watch_list = 't'::bool) OR (from_watch_list = 'f'::bool))
  watch_list_stagin_from_pkg_info ((from_pkg_info 
= 't'::bool) OR (from_pkg_info = 'f'::bool))
Triggers: RI_ConstraintTrigger_4278482,
 RI_ConstraintTrigger_4278488

No mention of FK constraints.


Found the solution:

drop trigger RI_ConstraintTrigger_4278488 on watch_list_staging;



You should now go to the table this RI constraint was referring to and delete 
the two triggers in there as well.  They will still be checking for deletions 
and updates.  Look for something like
RI_ConstraintTrigger_4278490
RI_ConstraintTrigger_4278492
and with the associated procedure RI_FKey_noaction_del and RI_FKey_noaction_upd
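A hedged sketch of the full cleanup (the trigger numbers are the ones quoted above; yours will differ, and `referenced_table` is a placeholder for the table the constraint pointed at):

```sql
-- On the referencing table (already done above):
DROP TRIGGER "RI_ConstraintTrigger_4278488" ON watch_list_staging;

-- On the referenced table, drop the matching ON DELETE / ON UPDATE triggers:
DROP TRIGGER "RI_ConstraintTrigger_4278490" ON referenced_table;
DROP TRIGGER "RI_ConstraintTrigger_4278492" ON referenced_table;
```

Note the double quotes: the system-generated trigger names are mixed-case, so they must be quoted.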


BTW, the rhdb-admin program can drop the constraints for you, even the unnamed
ones, on backends 7.2 and up.  You can download it from:

http://sources.redhat.com/rhdb

Of course, now that you've broken the set of triggers for this FK constraint you'll
still need to drop the other ones by hand.  But the tool will at least show you
the column and table involved, so it will be easier to identify the two you have
to get rid of.


Regards,
Fernando




--
Fernando Nasser
Red Hat - Toronto   E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org


Re: [HACKERS] PQnotifies() in 7.3 broken?

2002-12-05 Thread Fernando Nasser
Bruce Momjian wrote:


I will update for 7.4 now.  Too late for 7.3 clearly.



Bruce, why is it too late?

Most (all) will upgrade to 7.3.1 anyway, so it is a chance to get things right.

--
Fernando Nasser
Red Hat - Toronto   E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] ALTER TABLE schema SCHEMA TO new_schema?

2002-12-03 Thread Fernando Nasser
Rod Taylor wrote:

Why just restrict them to moving tables?  What if someone wants to move a
function or an aggregate to another schema?

What if they want to copy it?



Copying might be tricky, but I'd be happy to help with moving everything
else around.  I don't think sequences can move (until we can properly
track their dependencies), but everything else should be able to.

Copy is another story altogether.  But I'd like a

CREATE SCHEMA ... AS COPY schemaname;



Wouldn't it be better to use pg_dump/pg_restore for that?

If we could ask for just one/some of the non-system schemas to be dumped 
it would be easy to restore it as another schema or even move it to another 
database.  And one could dump only the schema or schema+data, as needed.
Of course, dependencies would have to be handled as objects can refer to 
objects in other schemas.

--
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] ALTER TABLE schema SCHEMA TO new_schema?

2002-12-02 Thread Fernando Nasser
Joe Conway wrote:

Christopher Kings-Lynne wrote:


possible. We should probably just go with your suggestion. Anything else
beyond the relnamespace and pg_depend entries that need to be dealt 
with?


What about sequences for serial columns?  What about views or types that
depend on the table?



Yeah, good point. I think properly dealing with the pg_depends issues 
will catch anything of that nature, but what to do with them?

Probably should move dependent type, constraint, index entries to the 
same new namespace. We might want to move related sequences, but I'm not 
sure we'd want to do that silently, since the sequence could be in use 
for other tables as well. And we should probably restrict the change if 
there are dependent functions or views. Does this capture the issues?


Why not just leave the sequence and types in the original schema and 
make sure the table refers to them _there_?  We just need to make sure 
we have schema qualified references to the sequences and types.

Indexes, triggers (and constraints), toast tables etc. are related to 
just one table so they can migrate together, I think.

--
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html


Re: [HACKERS] ALTER TABLE schema SCHEMA TO new_schema?

2002-12-02 Thread Fernando Nasser
I wonder if the sequences created by SERIAL should not be going into a 
pg_sequence schema and protected like the toast tables are.

One could still share sequences by explicitly creating them and using a 
DEFAULT clause with nextval().

We could even stop printing that annoying NOTICE ;-)
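A minimal sketch of the explicit-sequence alternative mentioned above (all names invented):

```sql
CREATE SEQUENCE common_id_seq;

CREATE TABLE orders (
    id integer DEFAULT nextval('common_id_seq')
);

CREATE TABLE invoices (
    id integer DEFAULT nextval('common_id_seq')
);
```

Both tables draw from the shared sequence, and the sharing is visible in the DDL rather than created behind the scenes by SERIAL.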

Regards,
Fernando

Joe Conway wrote:
Fernando Nasser wrote:


Why not just leave the sequence and types in the original schema and 
make sure the table refers to them _there_?  We just need to make sure 
we have schema qualified references to the sequences and types.


Well, the type entry for the relation *is* related to just one table, so 
I'd be inclined to move it also. But leaving the sequence alone might be 
the best thing to do. Although, I think sequences created via SERIAL are 
dropped with their referencing table now, aren't they?

test=# create table myserial(id serial);
NOTICE:  CREATE TABLE will create implicit sequence 'myserial_id_seq' 
for SERIAL column 'myserial.id'
CREATE TABLE
test=# \ds myserial_id_seq
   List of relations
 Schema |  Name   |   Type   |  Owner
+-+--+--
 public | myserial_id_seq | sequence | postgres
(1 row)

test=# drop table myserial;
DROP TABLE
test=# \ds myserial_id_seq
No matching relations found.

Maybe that's an argument that they ought to also move to the new schema 
when the dependency exists.

Indexes, triggers (and constraints), toast tables etc. are related to 
just one table so they can migrate together, I think.


I agree.

Joe





--
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] error codes

2002-11-26 Thread Fernando Nasser
Insisting on Andreas' suggestion, why can't we just prefix all error message 
strings with the SQLState code?  So all error messages would have the format

CCSSS - message text

Where CCSSS is the standard SQLState code and the message text is a more 
specific description.

Note that the standard allows for implementation-defined codes, so we can have 
our own CC classes and all the SSS subclasses that we need.
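For illustration, a message in the proposed format might look like this (the code shown is the standard SQLState for a syntax error; the wording of the text part is invented):

```
42601 - syntax error at or near "SLECT"
```

Here the class CC is 42 (syntax error or access rule violation) and the subclass SSS is 601.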


--
Fernando Nasser
Red Hat - Toronto   E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org


Re: [HACKERS] commands subdirectory continued -code cleanup

2002-04-19 Thread Fernando Nasser

John Gray wrote:
 
 and two macros:
 
 RECURSE_OVER_CHILDREN(relid);
 AlterTableDoSomething(childrel,...);
 RECURSE_OVER_CHILDREN_END;
 
 (this seems more straightforward than passing the text of the function
 call as a macro parameter).
 

Suggestion:

RECURSE_OVER_CHILDREN(inh, relid)
{
AlterTableDoSomething(childrel,...);
}


-- 
Fernando Nasser
Red Hat - Toronto   E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Operators and schemas

2002-04-16 Thread Fernando Nasser

Tom Lane wrote:
 
 Peter Eisentraut [EMAIL PROTECTED] writes:
  I had imagined that pg_dump would emit commands such as this:
 
  CREATE SCHEMA foo
CREATE TABLE bar ( ... )
CREATE otherthings
  ;
 
  which is how I read the SQL standard.  Are there plans to implement the
  CREATE SCHEMA command that way?  I think I recall someone from Toronto
  mentioning something along these lines.
 
 We have portions of that now, but I don't think there is any serious
 intent to support *all* Postgres CREATE statements inside CREATE SCHEMA.
 Because there are no semicolons in there, allowing random statements in
 CREATE SCHEMA tends to force promotion of keywords to full-reserved
 status (so you can tell where each sub-statement starts).  My
 inclination is to allow the minimum necessary for SQL spec compliance.
 
 (Fernando, your thoughts here?)
 

I agree.   And for Entry level SQL'92 we are done -- only tables, views 
and grants are required.  The multiple schemas per user is already
an intermediate SQL feature -- for intermediate SQL'92 we would still 
need domains and a character set specification.

For SQL'99, we would have to add types, functions and triggers
(only triggers are not part of Core SQL'99, but I would not leave them out).
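A hedged sketch of what Entry SQL'92 requires CREATE SCHEMA to accept (tables, views and grants only; all names invented):

```sql
CREATE SCHEMA sales AUTHORIZATION fernando
    CREATE TABLE orders (id integer, total numeric)
    CREATE VIEW big_orders AS SELECT * FROM orders WHERE total > 1000
    GRANT SELECT ON big_orders TO PUBLIC;
```

Note there are no semicolons between the sub-statements, which is the parsing difficulty Tom mentions above.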

Regards,
Fernando



-- 
Fernando Nasser
Red Hat - Toronto   E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Operators and schemas

2002-04-15 Thread Fernando Nasser

Tom Lane wrote:
 
 1. Keep operators as database-wide objects, instead of putting them into
 namespaces.  This seems a bit silly though: if the types and functions
 that underlie an operator are private to a namespace, shouldn't the
 operator be as well?
 

Not necessarily.  One can still create a type and functions to operate 
on it.  Operators are a convenience, not a necessity (except for 
index extensions).

If some types are really important and operators are desired, it can be
coordinated with the DBA, as operators would be a database-wide resource.
(This would be the case if index extensions were involved anyway.)

I would keep operators database-wide.  

-- 
Fernando Nasser
Red Hat - Toronto   E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] What's the CURRENT schema ?

2002-04-10 Thread Fernando Nasser

Hiroshi Inoue wrote:
 
 Fernando Nasser wrote:
 
  As most things in the SQL standard, you have to collect information
  from several places and add it together.
 
  Look at 4.20, 11.1 and specially at the rules for
  schema qualified name.
 
  Then think a little bit about scenarios, trying to apply the rules.
 
  It is a pain, but there is no other way.
 
 I couldn't find the description CURRENT_SCHEMA == CURRENT_USER.
 If I recognize SQL99 correctly, the CURRENT schema is the schema
 defined in a SQL-client module not restricted to the CURRENT
 user.
 

Yes, but we don't have a module language.  You have to look for
session.


-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] What's the CURRENT schema ?

2002-04-08 Thread Fernando Nasser

Hiroshi Inoue wrote:
 
  You misunderstood what I've said.  You may have as many schemas
  as you please.  But you will have to refer to their objects specifying
  the schema name explicitly.  The only cases where you can omit the
  schema name are (according to the SQL'99 standard):
 
 Please tell me where's the description in SQL99 ?
 I wasn't able to find it unfortunately.
 

As most things in the SQL standard, you have to collect information
from several places and add it together.

Look at 4.20, 11.1 and specially at the rules for
schema qualified name.

Then think a little bit about scenarios, trying to apply the rules.

It is a pain, but there is no other way.


-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] What's the CURRENT schema ?

2002-04-08 Thread Fernando Nasser

Tom Lane wrote:
 
 Hiroshi Inoue [EMAIL PROTECTED] writes:
  However I can see the following at 5.4 Names and Identifiers
  11)   If a schema qualified name does not contain a schema name, then
 Case:
 a) If the schema qualified name is contained in a schema
  definition,
 then the schema name that is specified or implicit in the schema
  definition
is implicit.
 
 Yes.  Fernando, our existing CREATE SCHEMA command does not get this
 right for references from views to tables, does it?  It seems to me that
 to get compliant behavior, we'll need to temporarily push the new schema
 onto the front of the namespace search path while parsing view
 definitions inside CREATE SCHEMA.
 

Correct.  It only takes care of properly setting/checking the schema name
for the views (as is done for tables) that are being created.  Doing as
you suggest would be nice (similar to what we do with the authid).

BTW, I think we have to properly fill/check the schema when the grant
objects are tables/views (I am not sure how functions will be handled).
I will send a patch in later today or tomorrow, unless you want to do
it differently.  I prefer to do it in the parser because I can issue
an error if a grant is for something that is not an object in the
schema being created.


Regards,
Fernando

-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] What's the CURRENT schema ?

2002-04-05 Thread Fernando Nasser

Christopher Kings-Lynne wrote:
 
  I really don't buy this argument; it seems exactly comparable to
  arguing that the notion of current directory in Unix is evil, and
  that users should be forced to specify absolute paths to every
  file that they reference.
 
 You know, I'm kinda surprised that the spec doesn't define a CURRENT_SCHEMA
 variable you can query???
 

Maybe because it would be the same as CURRENT_USER.

For the standard, the schema name used (implied) to qualify objects
outside a CREATE SCHEMA statement is a schema name equal to the
SQL-session user id.  Except for functions and UDTs, where each schema
has an SQL-path for searching those (the implied schema must always be
in it though).  There must be an implementation-defined default for
this SQL-path (but the implied schema must also be in it).



-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] What's the CURRENT schema ?

2002-04-05 Thread Fernando Nasser

Hiroshi Inoue wrote:
 
 Tom Lane wrote:
 
  Hiroshi Inoue [EMAIL PROTECTED] writes:
   I don't object to use a search path to resolve unqualified
   function, type etc names. But it is very siginificant for
   users to be able to be sure what tables they are handling.
 
  I really don't buy this argument; it seems exactly comparable to
  arguing that the notion of current directory in Unix is evil, and
  that users should be forced to specify absolute paths to every
  file that they reference.
 
  There is nothing to stop you from writing qualified names (schema.table)
  if you are concerned about being sure that you get the table you intend.
 
 Probably I can do it in many cases but I couldn't force others
 to do it. I don't object if PostgreSQL doesn't allow unqualified
 table name other than in public/temporary/catalog schema.
 There's no ambiguity and there's no need for the CURRENT schema.
 

We can't do that.  According to the SQL standard, if you are user HIROSHI
and write SELECT * FROM a; the table is actually HIROSHI.a.

This must work for people who are using SQL-schemas in their databases
or we would have a non-conforming implementation of SCHEMAS (would make
the whole exercise pointless IMO).

The path proposed by Tom (discussed in the list some time ago) actually
does magic:

1) It allows SQL_schema compliant code and database to work as the 
standard expects;

2) It allows backward compatibility as someone will be able to use the
same schema-unaware code and create their databases without schemas as
before.

3) If the DBA is careful enough, she/he can convert his/her database to
use schemas incrementally.



-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] What's the CURRENT schema ?

2002-04-05 Thread Fernando Nasser

Hiroshi Inoue wrote:
 
  We can't do that.  According to the SQL standard, if you are user HIROSHI
  and write SELECT * FROM a; the table is actually HIROSHI.a.
 
  This must work for people who are using SQL-schemas in their databases
  or we would have a non-conforming implementation of SCHEMAS (would make
  the whole exercise pointless IMO).
 
 Schema name isn't necessarily a user id since SQL-92
 though SQL-86 and SQL-89 had and probably Oracle still
 has the limitation. As far as I see PostgreSQL's schema
 support wouldn't have the limitation. Probably I wouldn't
 create the schema HIROSHI using PostgreSQL. When
 I used Oracle I really disliked the limitation.
 

You misunderstood what I've said.  You may have as many schemas
as you please.  But you will have to refer to their objects specifying
the schema name explicitly.  The only cases where you can omit the
schema name are (according to the SQL'99 standard):

1) The statement is part of a CREATE SCHEMA statement that is
creating the object, so the schema being created is assumed
(and that is what you want).

2) Your schema has the same name as your user id, your statement
is not inside a CREATE SCHEMA, and it runs on a session with that 
authorization id.  A schema name equal to the session user id is
assumed (which is what you want in this specific case).  

Otherwise you have to specify the schema explicitly.

So, if you name your schema APPLE, and not HIROSHI, then except 
inside the elements of the CREATE SCHEMA APPLE statement, you will
have to keep referring to tables with the APPLE. prefix.


PostgreSQL will be smarter and try to relax 2) for you, looking
for the table in a public schema as well (if one exists), so old-style
(non-schema) databases can still be used and people who have
schemas with names that are not their user id can save some typing. ;-)

-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] What's the CURRENT schema ?

2002-04-04 Thread Fernando Nasser

Tom Lane wrote:
 
 I've been vacillating about whether to choose another name for the
 public namespace to avoid the need for quotes here.  I can't think
 of another good name :-(
 

For the special schemas, we have pg_catalog, (pg_temp, pg_toast ?),
so pg_public could do the trick.

-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] What's the CURRENT schema ?

2002-04-04 Thread Fernando Nasser

Hiroshi Inoue wrote:
 
 I see. However my main problem is that the schema of unqualified
 vs1 is affected by the existence of yamada.vs1. I don't think
 it's a useful behavior.
 

The unqualified one is there mainly for compatibility, so you can
still use your old database setups without schema names.

Once you redo your database to use schemas, or even while you are 
converting it, there should not be tables with the same name in both 
places.

Anyway, as Tom said, you can change the search order if you prefer.

-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] What's the CURRENT schema ?

2002-04-04 Thread Fernando Nasser

Tom Lane wrote:
 
 I suppose it's a judgment call which is uglier.  Thoughts?
 

Well, PUBLIC is an SQL reserved keyword (pre-92).  We are
already very liberal with keywords. I would leave PUBLIC alone.

It does not _have_ to be public, so we can just avoid the issue
by adding a pg_ prefix to public, common, or something else. 
It is a PostgreSQL concept anyway.   

-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] What's the CURRENT schema ?

2002-04-04 Thread Fernando Nasser

Tom Lane wrote:
 
 Fernando Nasser [EMAIL PROTECTED] writes:
  Tom Lane wrote:
  I've been vacillating about whether to choose another name for the
  public namespace to avoid the need for quotes here.  I can't think
  of another good name :-(
 
  For the special schemas, we have pg_catalog, (pg_temp, pg_toast ?),
  so pg_public could do the trick.
 
 Actually that was my initial choice of name, but I changed my mind
 later.  The reason is that the dbadmin should be able to restrict or
 even delete the public namespace if his usage plans for the database
 don't allow any shared objects. 

Can't we prevent creation in there by (un)setting permissions?

 If we call it pg_public then the system
 will think it is a reserved namespace, and we'd have to put in a special
 case to allow it to be deleted (not to mention recreated again, should
 the DBA change his mind later).

If we can disallow creation with permissions, then we could always keep it.
There should be a more practical way of making it empty than having to
drop each object individually (DROP will drop the contents but refuse
to delete the schema itself as it is a pg_ one?).


-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] What's the CURRENT schema ?

2002-04-04 Thread Fernando Nasser

Tom Lane wrote:
 
 Fernando Nasser [EMAIL PROTECTED] writes:
  Tom Lane wrote:
  Actually that was my initial choice of name, but I changed my mind
  later.  The reason is that the dbadmin should be able to restrict or
  even delete the public namespace if his usage plans for the database
  don't allow any shared objects.
 
  Can't we prevent creation in there by (un)setting permissions?
 
 That was what I was referring to by restrict ... but ISTM we should
 allow dropping the namespace too.  Why waste cycles searching it if
 you don't want to use it?
 

I don't know how the search will be implemented, but it should cost
very few instructions (one instruction checks that a list head is zero and
another gets the next pointer for the next namespace).  And, as we now 
transform things and keep them as Oids, it will be even cheaper.


  There should be a more practical way of making it empty than having to
  drop
  each object individually (DROP will drop the contents but refuse to
  delete
  the schema itself as it is a pg_ one?).
 
 I'd expect DROP on a reserved namespace to error out, and thus do
 nothing at all.
 

But we could have:

DROP SCHEMA pg_public CONTENTS;

or something of the sort (an extension, but a public schema is an
extension anyway).  And this syntax could come in handy for DBAs in general.

-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] SET NULL / SET NOT NULL

2002-03-22 Thread Fernando Nasser

Zeugswetter Andreas SB SD wrote:
 
 Imho it would be nice if the command would look exactly like a create
 table. It is simply convenient to use cut and paste :-) And I haven't
 seen a keyword yet, that would make it more descriptive, certainly not SET.
 
 ALTER TABLE blah ALTER [COLUMN] col [int4] [NOT NULL] [DEFAULT 32];
 ALTER TABLE blah ALTER [COLUMN] col [int8] [NULL] [DEFAULT 32];
 maybe even [DEFAULT NULL] to drop the default :-)
 

I like this one.  I would not make COLUMN optional, though.
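Under the quoted proposal (hypothetical syntax, not something the backend currently accepts), the two examples with all optional parts spelled out would read:

```sql
ALTER TABLE blah ALTER COLUMN col int4 NOT NULL DEFAULT 32;
ALTER TABLE blah ALTER COLUMN col int8 NULL;
```

i.e. exactly the column clause of a CREATE TABLE, pasted after ALTER COLUMN.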

-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Domains and type coercion

2002-03-21 Thread Fernando Nasser

Tom Lane wrote:
 
 If we take the hard SQL99 line that domains *are* the base type plus
 constraints, then we could reduce domains to base types before we start
 the entire matching process, and this issue would go away.  This would
 prevent declaring any specialized operators or functions for a domain.
 (In fact, I'd be inclined to set things up so that it's impossible to
 store domain type OIDs in pg_proc or pg_operator, thus saving the time
 of doing getBaseType on one side of the match.)  Thoughts?
 

IMHO this is the right thing to do.

-- 
Fernando Nasser
Red Hat - Toronto   E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Domains and type coercion

2002-03-20 Thread Fernando Nasser

Tom Lane wrote:
 
 (...) or put a special case into the operator selection rules to reduce
 domains to their basetypes before making the exact match test.

By definition, "A domain is a set of permissible values",

which I believe should be read as

"A domain is a set of permissible values (of a data type)."

What I am trying to say is that the domain is still the same data type
w.r.t. operators and functions, so reducing it to the base type for
such searches is the right thing to do.


 Neither of these seem real appealing, but if we don't do something
 I think that domains are going to be a big pain in the neck to use.
 

Agreed.



-- 
Fernando Nasser
Red Hat - Toronto   E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Domains and type coercion

2002-03-20 Thread Fernando Nasser

Tom Lane wrote:
 
 Any thoughts?
 

As we are talking about CAST:

if one CASTs to a domain, SQL99 says we have to check the constraints
and issue an integrity constraint violation if appropriate (6.22, GR 21).
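A small sketch of that rule (the domain name is invented):

```sql
CREATE DOMAIN positive_int AS integer CHECK (VALUE > 0);

SELECT CAST(5 AS positive_int);   -- passes the domain constraint
SELECT CAST(-5 AS positive_int);  -- must raise an integrity constraint violation
```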


-- 
Fernando Nasser
Red Hat - Toronto   E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Broken code in gram.y

2002-03-19 Thread Fernando Nasser

Well,

Someone just dropped the DROP DATABASE statement rules right in the
middle of the CREATE DATABASE production rules!!!

Fernando



Fernando Nasser wrote:
 
 The OWNER production rules added to DROP DATABASE:
 
 DropdbStmt: DROP DATABASE database_name
 {
 DropdbStmt *n = makeNode(DropdbStmt);
 n->dbname = $3;
 $$ = (Node *)n;
 }
 | OWNER opt_equal name
 {
 $$ = lconsi(4, makeList1($3));
 }
 | OWNER opt_equal DEFAULT
 {
 $$ = lconsi(4, makeList1(NULL));
 }
 ;
 
 Cause compiler warnings and are clearly returning the wrong type
 (a List, instead of a Node).
 
 (...)/pgsql/src/backend/parser/gram.y: In function
 `yyparse':/home/fnasser/DEVO/pgsql/pgsql/src/backend/parser/gram.y:3205:
 warning: assignment from incompatible pointer type
 (...)/pgsql/src/backend/parser/gram.y:3209: warning: assignment from
 incompatible pointer type
 
 --
 Fernando Nasser
 Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
 2323 Yonge Street, Suite #300
 Toronto, Ontario   M4P 2C9
 
 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?
 
 http://www.postgresql.org/users-lounge/docs/faq.html

-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] insert statements

2002-03-18 Thread Fernando Nasser

Vince Vielhaber wrote:
 
 Looks like Sybase ignores the bar:
 
 1 create table foo(a int)
 2 go
 1 insert into foo(bar.a) values(1)
 2 go
 (1 row affected)
 1 select * from foo
 2 go
  a
  ---
1
 
 (1 row affected)
 1
 

This looks like a parser error to me.  It probably only takes the
last bit of the name and ignores all the qualifiers...


-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] insert statements

2002-03-18 Thread Fernando Nasser

Tom Lane wrote:
 
 I'd want it to error out on INSERT foo (bar.col), though ;-)
 

And on INSERT foo (bar.foo.col) as well.

This means we will have to take this check down to the analyze
phase (where the schema where foo is located is finally known,
if it was not specified explicitly).

We could easily take INSERT bar.foo (bar.foo.col) but the
above one is trouble.

-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] insert statements

2002-03-18 Thread Fernando Nasser

Vince Vielhaber wrote:
 
 On Thu, 14 Mar 2002, Rod Taylor wrote:
 
  Out of curiosity, does SyBase allow you to qualify it with
  schema.table.column?
 
 Just tried it...  Yes.
 

What if you give it a bogus schema name?  Does it error out or just
ignore it?

-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[HACKERS] Broken code in gram.y

2002-03-18 Thread Fernando Nasser

The OWNER production rules added to DROP DATABASE:

DropdbStmt: DROP DATABASE database_name
        {
                DropdbStmt *n = makeNode(DropdbStmt);
                n->dbname = $3;
                $$ = (Node *)n;
        }
        | OWNER opt_equal name
        {
                $$ = lconsi(4, makeList1($3));
        }
        | OWNER opt_equal DEFAULT
        {
                $$ = lconsi(4, makeList1(NULL));
        }
        ;


Cause compiler warnings and are clearly returning the wrong type
(a List, instead of a Node).

(...)/pgsql/src/backend/parser/gram.y: In function `yyparse':
/home/fnasser/DEVO/pgsql/pgsql/src/backend/parser/gram.y:3205: warning: assignment from incompatible pointer type
(...)/pgsql/src/backend/parser/gram.y:3209: warning: assignment from incompatible pointer type


-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9




Re: [HACKERS] OID wraparound: summary and proposal

2001-08-07 Thread Fernando Nasser

Tom,

If we have WITH NOOID, why not have WITH OID32 and WITH OID64 (or
something of the sort) as well, with OID32 being the default and OID
an alias for it?  The latter would not be available on some systems
(though who would use a system that does not support long long as a
database server anyway?).

The wire protocol will always handle the long (tableoid) form, and
references will always store the long form...  OID32 would exist only
to let people save space in tables that need OIDs but not the 64-bit
version.
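A sketch of what the proposed clauses might look like (hypothetical syntax only; none of these exist):

```sql
-- Hypothetical syntax, sketching the proposal above.
CREATE TABLE small_tbl (id int) WITH OID32;  -- the default; OID is an alias
CREATE TABLE wide_tbl  (id int) WITH OID64;  -- needs long long support
CREATE TABLE plain_tbl (id int) WITH NOOID;  -- no OIDs at all
```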



-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9




Re: [HACKERS] COPY commands could use an enhancement.

2001-04-30 Thread Fernando Nasser

Karen saw me importing data into a database using pgaccess.

Again, this could be useful to someone who is not a superuser.
But only superusers can use pgaccess.  What a shame :-(

Fernando

P.S.: pgaccess has a much more limited import facility - only text files,
and you can only change the delimiter.  But it could be expanded.


Tom Lane wrote:
 
 Alfred Perlstein [EMAIL PROTECTED] writes:
  It would be very helpful if the COPY command could be expanded
  in order to provide positional parameters.
 
 I think it's a bad idea to try to expand COPY into a full-tilt data
 import/conversion utility, which is the direction that this sort of
 suggestion is headed in.  COPY is designed as a simple, fast, reliable,
 low-overhead data transfer mechanism for backup and restore.  The more
 warts we add to it, the less well it will serve that purpose.
 
 Example: if we allow selective column import, what do we do with missing
 columns?  Must COPY now be able to handle insertion of default-value
 expressions?
 
 I think it'd be better to put effort into an external data translation
 utility that can deal with column selection, data reformatting, CR/LF
 conversion, and all those other silly little issues that come up when
 you need to move data from one DBMS to another.  Sure, we could make
 the backend do some of this stuff, but it'd be more maintainable as a
 separate program ... IMHO anyway.  I think that pgaccess and pgadmin
 already have some capability in this line, BTW.
 
 regards, tom lane
 

-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9




[HACKERS] ERROR: parser: parse error at or near JOIN

2001-04-26 Thread Fernando Nasser

Is anyone else seeing this?

I have the current CVS sources and make check ends up with one
failure.  My regression.diffs shows:


*** ./expected/join.out Thu Dec 14 17:30:45 2000
--- ./results/join.out  Mon Apr 23 20:23:15 2001
***
*** 1845,1851 
  -- UNION JOIN isn't implemented yet
  SELECT '' AS xxx, *
FROM J1_TBL UNION JOIN J2_TBL;
! ERROR:  UNION JOIN is not implemented yet
  --
  -- Clean up
  --
--- 1845,1851 
  -- UNION JOIN isn't implemented yet
  SELECT '' AS xxx, *
FROM J1_TBL UNION JOIN J2_TBL;
! ERROR:  parser: parse error at or near JOIN
  --
  -- Clean up
  --

==


-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9




Re: [HACKERS] ERROR: parser: parse error at or near JOIN

2001-04-26 Thread Fernando Nasser

Please disregard this.  This message was held by Majordomo for a couple
of days and I have already resent it.

Tom Lane has already solved my problem (I had a miscompiled version of
bison in my machine).

Regards to all,
Fernando


Fernando Nasser wrote:
 
 Is anyone else seeing this?
 
 I have the current CVS sources and make check ends up with one
 failure.  My regression.diffs shows:
 
 *** ./expected/join.out Thu Dec 14 17:30:45 2000
 --- ./results/join.out  Mon Apr 23 20:23:15 2001
 ***
 *** 1845,1851 
   -- UNION JOIN isn't implemented yet
   SELECT '' AS xxx, *
 FROM J1_TBL UNION JOIN J2_TBL;
 ! ERROR:  UNION JOIN is not implemented yet
   --
   -- Clean up
   --
 --- 1845,1851 
   -- UNION JOIN isn't implemented yet
   SELECT '' AS xxx, *
 FROM J1_TBL UNION JOIN J2_TBL;
 ! ERROR:  parser: parse error at or near JOIN
   --
   -- Clean up
   --
 

-- 
Fernando Nasser
Red Hat Inc. E-Mail:  [EMAIL PROTECTED]




Re: [HACKERS] ERROR: parser: parse error at or near JOIN

2001-04-25 Thread Fernando Nasser

Tom Lane wrote:
 
 Fernando Nasser [EMAIL PROTECTED] writes:
  Is anyone else seeing this?
 
 No.
 
  I have the current CVS sources and make check ends up with one
  failure.  My regression.diffs shows:
 
 I think you must have built gram.c with a broken bison or yacc.  What
 exactly is configure picking, and what version is it?
 

Yes you are right.

With:

[12:03:04]  flex -V
flex version 2.5.4
  
[12:03:08]  bison -V
GNU Bison version 1.28

it fails, but with older versions of flex and bison the regression
failure goes away:

[12:05:30]  flex -V
flex Cygnus version 2.5-gnupro-99r1

[12:05:34]  bison -V
GNU Bison version 1.25


Thank you very much.




-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9




[HACKERS] ERROR: parser: parse error at or near JOIN

2001-04-23 Thread Fernando Nasser

Is anyone else seeing this?

I have the current CVS sources and make check ends up with one
failure.  My regression.diffs shows:


*** ./expected/join.out Thu Dec 14 17:30:45 2000
--- ./results/join.out  Mon Apr 23 20:23:15 2001
***
*** 1845,1851 
  -- UNION JOIN isn't implemented yet
  SELECT '' AS xxx, *
FROM J1_TBL UNION JOIN J2_TBL;
! ERROR:  UNION JOIN is not implemented yet
  --
  -- Clean up
  --
--- 1845,1851 
  -- UNION JOIN isn't implemented yet
  SELECT '' AS xxx, *
FROM J1_TBL UNION JOIN J2_TBL;
! ERROR:  parser: parse error at or near JOIN
  --
  -- Clean up
  --

==


-- 
Fernando Nasser
Red Hat Canada Ltd. E-Mail:  [EMAIL PROTECTED]
2323 Yonge Street, Suite #300
Toronto, Ontario   M4P 2C9
