Re: [Retrieved]RE: [ADMIN] backup and recovery

2004-03-26 Thread Murthy Kambhampaty
Oops, sorry for the typo in the psql command invocation.  The output of the awk command in Step 3 is piped to
/usr/local/pgsql-7.4/bin/psql -d db_quux -f - -Atnq

(in the logging alternative it goes to
/usr/local/pgsql-7.4/bin/psql -d db_quux -f - -Atnq >> "$LogFile" 2>&1)

Cheers,
   Murthy


-Original Message-
From: [EMAIL PROTECTED] on behalf of Murthy Kambhampaty
Sent: Fri 3/26/2004 3:30 PM
To: Naomi Walker; Tom Lane
Cc: Bruce Momjian; Tsirkin Evgeny; Mark M. Huber; [EMAIL PROTECTED]
Subject: Re: [Retrieved]RE: [ADMIN] backup and recovery

I think you can get both benefits of "multi-statement transactions for INSERT dumps" by doing "subset copies"  ... without any changes in PostgreSQL!  The method I use was developed for handling single-table "loads", but it is still relatively painless even for database dumps; however, it is limited to text dumps.

Let's say you want to unload table "tbl_foo" in schema "sch_bar" from database "db_baz" and reload "sch_bar.tbl_foo" into database "db_quux".  Try the following:

1.) Dump-restore the table schema so you create an empty table in the destination database. e.g.:
    /usr/local/pgsql-7.4/bin/pg_dump -s -t tbl_foo --schema sch_bar db_baz | \
        /usr/local/pgsql-7.4/bin/psql -d db_quux
This can be adjusted for different hosts, etc.
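For instance, a cross-host variant might look like this (the host and user names are placeholders for illustration, not part of the original recipe):

    /usr/local/pgsql-7.4/bin/pg_dump -h source_host -U some_user \
        -s -t tbl_foo --schema sch_bar db_baz | \
        /usr/local/pgsql-7.4/bin/psql -h dest_host -U some_user -d db_quux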

2.) COPY the records to a file:
    /usr/local/pgsql-7.4/bin/psql -d db_baz \
 -c "copy sch_bar.tbl_foo to stdout" > sch_bar.tbl_foo.dat
OR
    /usr/local/pgsql-7.4/bin/psql -d db_baz -Aqt \
 -c "select * from sch_bar.tbl_foo where ..." > sch_bar.tbl_foo.dat
The latter is slower, but selective.  You can also use psql's -F and -R options to set the column and row separators to whatever you like (as with copy options). If your source data came from a dump file, rather than a COPY, you can strip the SQL commands to leave data only, or modify the commands below.
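To sketch the dump-file case (assuming a plain-text dump in the default COPY format, and that tbl_foo's data block appears only once in the file): the data rows sit between the "COPY tbl_foo ... FROM stdin;" line and the terminating "\.", so something like

    # db_baz.dump is a placeholder for your plain-text dump file
    sed -n '/^COPY tbl_foo /,/^\\\.$/p' db_baz.dump | \
        sed -e '1d' -e '$d' > sch_bar.tbl_foo.dat

strips the surrounding SQL and keeps only the data lines.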

3. Pipe the data from sch_bar.tbl_foo.dat to psql, with copy commands spliced in at chosen intervals (in numbers of lines) depending on your preferences for speed versus "recoverability".  In the example below, the subset size is 2000 lines:
awk \
  -v SubSize=2000 \
  -v COPYSTMT="copy sch_bar.tbl_foo from stdin;" \
 'BEGIN{ print COPYSTMT } \
  { print $0 } \
  FNR % SubSize == 0 { \
  print "\\.\n\n" ; \
  print "\n"; \
  print COPYSTMT }' "sch_bar.tbl_foo.dat" | \
    /usr/local/pgsql-7.4/bin/psql -U gouser -d airfrance -f -

The awk command specifies the chosen subset size ("2000") and a copy statement for loading stdin into the selected table.  At the "BEGIN"ning, a copy statement is issued and lines are streamed in from the text file containing the table rows; after every SubSize lines the copy stream is ended (as in text dumps, with a "\."), and a new copy statement is inserted.

For a 220,000 row table, times for the simple copy versus the subset copy were:

    Simple copy:
    real    0m21.704s
    user    0m3.790s
    sys 0m0.880s

    Subset copy:
    real    0m24.233s
    user    0m5.710s
    sys 0m1.090s

Over 10% more wall clock time, but the savings from not having to rerun the entire "load" if errors are found could be tremendous.
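For what it's worth, the same idea can be sketched with split(1) instead of awk; the chunk prefix, the ON_ERROR_STOP setting, and the db_quux target below are illustrative assumptions, not part of the recipe above:

    # Load 2000-line chunks one at a time, so a failure points directly
    # at the offending chunk file; each COPY is atomic, so a failed chunk
    # can simply be fixed and re-run.
    split -l 2000 sch_bar.tbl_foo.dat tbl_foo_chunk_
    for f in tbl_foo_chunk_*; do
        { echo "copy sch_bar.tbl_foo from stdin;" ; cat "$f" ; printf '%s\n' '\.' ; } | \
            /usr/local/pgsql-7.4/bin/psql --set ON_ERROR_STOP=1 -d db_quux -f - \
            || echo "chunk $f failed"
    done

The chunk files do take extra disk space, which the awk pipeline above avoids.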


3a.  Alternatively, you can generate a log so that you can easily tell which subset failed (if any):
# LogFile="/home/postgres/load.log"; \
 awk \
  -v SubSize=2000 \
  -v COPYSTMT="copy S2.air from stdin;" \
  -v LogF="$LogFile" \
 'BEGIN{ print "Block Size: " SubSize > LogF; \
   print "Copy Statment: " COPYSTMT > LogF; \
   print "\n\n" > LogF; \
   close(LogF) ; \
   print COPYSTMT } \
  { print $0 } \
  FNR % SubSize == 0 { \
  print "\\.\n\n" ; \
  printf("select \047Processed %d records from line no. %d to line no. %d\047;\n", SubSize, FNR -SubSize +1, FNR) ; \
  print "\n"; \
  print COPYSTMT }
  END{ \
  print "\\.\n\n" ; \
  printf("select \047Processed a grand total of %d lines from %s\047;\n", NR, FILENAME ) }' \
  "sch_bar.tbl_foo.dat" | \
    /usr/local/pgsql-7.4/bin/psql -U gouser -d airfrance -Atnq -f - >> "$LogFile" 2>&1

Errors can be located with:

[EMAIL PROTECTED] postgres]$ cat load.log | grep -B 3 -A 3 "ERROR:"
Processed 2000 records from line no. 192001 to line no. 194000
Processed 2000 records from line no. 194001 to line no. 196000
Processed 2000 records from line no. 196001 to line no. 198000
ERROR:  invalid input syntax for integer: "My0"
CONTEXT:  COPY tbl_foo, line 2000, column oct 02: "My0"
Processed 2000 records from line no. 198001 to line no. 200000
Processed 2000 records from line no. 200001 to line no. 202000
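Since a single COPY either loads its whole subset or nothing, and the later subsets still get applied, only the failed range needs replaying once the bad row is fixed.  A sketch, assuming (per the log excerpt above) that the block covering lines 198001 to 200000 was the one that failed:

    sed -n '198001,200000p' sch_bar.tbl_foo.dat | \
      { echo "copy sch_bar.tbl_foo from stdin;" ; cat ; printf '%s\n' '\.' ; } | \
        /usr/local/pgsql-7.4/bin/psql -d db_quux -f -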


HTH
    Murthy




> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Naomi Walker
> Sent: Wednesday, March 24, 2004 5:58 PM
> To: Tom Lane
> Cc: Bruce Momjian; Tsirkin Evgeny; Naomi Walker; Mark M. Huber;
> [EMAIL PROTECTED]
> Subject: Re: [Retrieved]RE: [ADMIN] backup and recovery
>
>
> At 03:54 PM 3/24/2004, Tom Lane wro

Re: [ADMIN] postgres copy command very slow.

2004-03-26 Thread Sam Barnett-Cormack
On Fri, 26 Mar 2004, Hemapriya wrote:

> Hi,
>
> We have PostgreSQL 7.4.1 running on Mac OS. We are doing a conversion from
> MySQL to Postgres. I'm importing the table dumps using the COPY command in
> Postgres. COPY takes 2 min to import 427938 rows in one table, yet it takes
> more than 2 hrs for 2415768 rows. The number of columns is similar in both
> tables. I'm not able to figure out why it is so much slower.

Are the field types the same? Indexes on the two tables?

For the best chance of a good diagnosis, please post full details of
both tables - constraints, indexes, and field types at least.
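For example, those details can be captured with psql's \d or a schema-only dump (the database and table names below are placeholders):

    # column types, indexes and constraints for both tables
    psql -d yourdb -c '\d fast_table'
    psql -d yourdb -c '\d slow_table'

    # or dump just the DDL so the two definitions can be diffed
    pg_dump -s -t fast_table yourdb > fast_table.sql
    pg_dump -s -t slow_table yourdb > slow_table.sql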

-- 

Sam Barnett-Cormack
Software Developer   |  Student of Physics & Maths
UK Mirror Service (http://www.mirror.ac.uk)  |  Lancaster University



Re: [ADMIN] License for PostgreSQL for commercial purpose

2004-03-26 Thread listas
Hi,

> > | The Postgres license is a free software license that is GPL
> > | compatible.
> > 
> > Where GPL compatible means (possibly among other things) that I can get
> > a BSD-licensed Postgresql and turn it into a GPL-licensed MyPostgresql ?
> > 
> No, it means you can distribute the two together like on a redhat CD 
> without worrying about conflicting licenses.

If I understand the licenses correctly, the point about being GPL-compatible is
not putting everything on a CD. It's linking GPL code with non-GPL code. Say I
create a command-based dump utility for PostgreSQL (and so link the pgsql
client library) but use the GNU Readline library for command history (which is
GPL'ed). If the PostgreSQL license weren't compatible, I would not be able to
link the Readline library into my executable.


[]s, Fernando Lozano




Re: [ADMIN] License for PostgreSQL for commercial purpose

2004-03-26 Thread Doug Quale
Radu-Adrian Popescu <[EMAIL PROTECTED]> writes:

> Doug Quale wrote:
>
> | The Postgres license is a free software license that is GPL
> | compatible.
> 
> Where GPL compatible means (possibly among other things) that I can get
> a BSD-licensed Postgresql and turn it into a GPL-licensed MyPostgresql ?

Absolutely.  This has been done.  For example, GNU bison was derived
many years ago from Berkeley yacc.

You can go further and make a non-free MyPostgresql.  Use Postgres
code in a completely proprietary project if you like.  The X11 license
places far fewer restrictions on code than does the GPL.  Works
derived from X11-licensed code do not have to be free.



Re: [ADMIN] backup and recovery

2004-03-26 Thread Bruno Wolff III
On Mon, Mar 22, 2004 at 09:13:06 -0800,
  "Mark M. Huber" <[EMAIL PROTECTED]> wrote:
> That sounds like a brilliant idea, who do we say it to make it so?

It might be better to make this part of pg_restore, rather than pg_dump.

> 
> Mark H
> 
> -Original Message-
> From: Naomi Walker [mailto:[EMAIL PROTECTED]
> Sent: Monday, March 22, 2004 8:19 AM
> To: Mark M. Huber
> Cc: Naomi Walker; [EMAIL PROTECTED]
> Subject: Re: [ADMIN] backup and recovery
> 
> 
> That brings up a good point.  It would be extremely helpful to add two 
> parameters to pg_dump.  One, to add how many rows to insert before a 
> commit, and two, to live through X number of errors before dying (and 
> putting the "bad" rows in a file).



Re: [Retrieved]RE: [ADMIN] backup and recovery

2004-03-26 Thread Murthy Kambhampaty
I think you can get both benefits of "multi-statement transactions for INSERT dumps" 
by doing "subset copies"  ... without any changes in PostgreSQL!  The method I use was 
developed for handling single-table "loads", but it is still relatively painless even for 
database dumps; however, it is limited to text dumps.

Let's say you want to unload table "tbl_foo" in schema "sch_bar" from database 
"db_baz" and reload "sch_bar.tbl_foo" into database "db_quux".  Try the following:

1.) Dump-restore the table schema so you create an empty table in the destination 
database. e.g.:
/usr/local/pgsql-7.4/bin/pg_dump -s -t tbl_foo --schema sch_bar db_baz | \
/usr/local/pgsql-7.4/bin/psql -d db_quux 
This can be adjusted for different hosts, etc.

2.) COPY the records to a file:
/usr/local/pgsql-7.4/bin/psql -d db_baz \
 -c "copy sch_bar.tbl_foo to stdout" > sch_bar.tbl_foo.dat
OR
/usr/local/pgsql-7.4/bin/psql -d db_baz -Aqt \
 -c "select * from sch_bar.tbl_foo where ..." > sch_bar.tbl_foo.dat
The latter is slower, but selective.  You can also use psql's -F and -R options to set 
the column and row separators to whatever you like (as with copy options). If your source 
data came from a dump file, rather than a COPY, you can strip the SQL commands to leave 
data only, or modify the commands below.

3. Pipe the data from sch_bar.tbl_foo.dat to psql, with copy commands spliced in at 
chosen intervals (in numbers of lines) depending on your preferences for speed versus 
"recoverability".  In the example below, the subset size is 2000 lines:
awk \
  -v SubSize=2000 \
  -v COPYSTMT="copy sch_bar.tbl_foo from stdin;" \
 'BEGIN{ print COPYSTMT } \
  { print $0 } \
  FNR % SubSize == 0 { \
  print "\\.\n\n" ; \
  print "\n"; \
  print COPYSTMT }' "sch_bar.tbl_foo.dat" | \
/usr/local/pgsql-7.4/bin/psql -U gouser -d airfrance -f -

The awk command specifies the chosen subset size ("2000") and a copy statement for 
loading stdin into the selected table.  At the "BEGIN"ning, a copy statement is issued 
and lines are streamed in from the text file containing the table rows; after every 
SubSize lines the copy stream is ended (as in text dumps, with a "\."), and a new 
copy statement is inserted.

For a 220,000 row table, times for the simple copy versus the subset copy were:

Simple copy:
real    0m21.704s
user    0m3.790s
sys     0m0.880s

Subset copy:
real    0m24.233s
user    0m5.710s
sys     0m1.090s

Over 10% more wall clock time, but the savings from not having to rerun the entire 
"load" if errors are found could be tremendous.


3a.  Alternatively, you can generate a log so that you can easily tell which subset 
failed (if any):
# LogFile="/home/postgres/load.log"; \
 awk \
  -v SubSize=2000 \
  -v COPYSTMT="copy S2.air from stdin;" \
  -v LogF="$LogFile" \
 'BEGIN{ print "Block Size: " SubSize > LogF; \
   print "Copy Statment: " COPYSTMT > LogF; \
   print "\n\n" > LogF; \
   close(LogF) ; \
   print COPYSTMT } \
  { print $0 } \
  FNR % SubSize == 0 { \
  print "\\.\n\n" ; \
  printf("select \047Processed %d records from line no. %d to line no. %d\047;\n", 
SubSize, FNR -SubSize +1, FNR) ; \
  print "\n"; \
  print COPYSTMT }
  END{ \
  print "\\.\n\n" ; \
  printf("select \047Processed a grand total of %d lines from %s\047;\n", NR, FILENAME 
) }' \
  "sch_bar.tbl_foo.dat" | \
/usr/local/pgsql-7.4/bin/psql -U gouser -d airfrance -Atnq -f - >> "$LogFile" 2>&1

Errors can be located with:

[EMAIL PROTECTED] postgres]$ cat load.log | grep -B 3 -A 3 "ERROR:"
Processed 2000 records from line no. 192001 to line no. 194000
Processed 2000 records from line no. 194001 to line no. 196000
Processed 2000 records from line no. 196001 to line no. 198000
ERROR:  invalid input syntax for integer: "My0"
CONTEXT:  COPY tbl_foo, line 2000, column oct 02: "My0"
Processed 2000 records from line no. 198001 to line no. 200000
Processed 2000 records from line no. 200001 to line no. 202000


HTH
Murthy




> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Naomi Walker
> Sent: Wednesday, March 24, 2004 5:58 PM
> To: Tom Lane
> Cc: Bruce Momjian; Tsirkin Evgeny; Naomi Walker; Mark M. Huber;
> [EMAIL PROTECTED]
> Subject: Re: [Retrieved]RE: [ADMIN] backup and recovery
> 
> 
> At 03:54 PM 3/24/2004, Tom Lane wrote:
> >Bruce Momjian <[EMAIL PROTECTED]> writes:
> > > Added to TODO:
> > >   * Have pg_dump use multi-statement transactions for 
> INSERT dumps
> >
> > > For simple performance reasons, it would be good.  I am 
> not sure about
> > > allowing errors to continue loading.   Anyone else?
> >
> >Of course, anyone who actually cares about reload speed shouldn't be
> >using INSERT-style dumps anyway ... I'm not sure why we should expend
> >effort on that rather than just telling people to use the COPY mode.
> 
> Understood.  I would still love this feature for when in the 
> COPY mode.
> 
> 

Re: [ADMIN] automatic pg_dumpall

2004-03-26 Thread Bruno Wolff III
On Thu, Mar 25, 2004 at 04:21:16 +,
  Victor Sudakov <[EMAIL PROTECTED]> wrote:
> Colleagues,
> 
> If I have to organize an automatic nightly pg_dumpall, how do I handle
> authentication? I do not want to create a passwordless superuser (or
> trust method in pg_hba.conf), and there is no one to enter the password
> manually.  Is there another recipe?

Who do you trust? If you trust the system account that pg_dump is running
under (and if you don't you have problems) and pg_dump is being run on
the same machine as the postgres server, use a domain socket connection
and ident authentication. This doesn't work on all OSes, but it works on
a number of common ones.
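For example (the paths and schedule are illustrative; in 7.4 the ident method takes a map name, with "sameuser" meaning the OS account must match the database user):

    # pg_hba.conf: local Unix-socket connections for the postgres user,
    # authenticated by ident instead of a password
    # TYPE   DATABASE   USER       METHOD
    local    all        postgres   ident sameuser

    # crontab entry for the postgres system account: nightly dump at 03:00
    # (% must be escaped inside crontab)
    0 3 * * * /usr/local/pgsql/bin/pg_dumpall > /var/backups/pg_dumpall_$(date +\%Y\%m\%d).sql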



[ADMIN] postgres copy command very slow.

2004-03-26 Thread Hemapriya
Hi,

We have PostgreSQL 7.4.1 running on Mac OS. We are
doing a conversion from MySQL to Postgres. I'm
importing the table dumps using the COPY command in
Postgres. COPY takes 2 min to import 427938 rows in
one table, yet it takes more than 2 hrs for 2415768
rows. The number of columns is similar in both
tables. I'm not able to figure out why it is so much
slower.

Does anyone know why there is such a big difference in
performance?

Any hint is highly appreciated.

regards
Priya



[ADMIN] Raw devices

2004-03-26 Thread Jaime Casanova
Hi, all.
 
I was thinking about using PostgreSQL and want to know if I can use raw devices with it. If it is possible, how can I configure it? And what are your opinions about PostgreSQL performance on raw devices?
 
 
 
thanx in advance, el_vigia


[ADMIN] Can I make PostgreSql namespace case-insensitive?

2004-03-26 Thread Ben Kim

Dear List,

Is there a way to completely turn off case sensitivity of the names of
table, field, sequence, etc.?

In our case we used mixed-case names, and the names are unique. They
wouldn't collide even if they get turned into lowercase names.

Because of interfacing problems with other software, I wish to make all
names case insensitive. However, we already have some scripts using case-
sensitive names, so we don't want to just convert the database to lowercase
only. This is why I am looking to turn off case sensitivity of the
database as a whole. (So that "MyTable", mytable, MyTable will all work.)

I would appreciate any advice. 


Regards,
Ben




Re: [ADMIN] License for PostgreSQL for commercial purpose

2004-03-26 Thread scott.marlowe
On Fri, 26 Mar 2004, Radu-Adrian Popescu wrote:

> 
> Doug Quale wrote:
> | Chris Browne <[EMAIL PROTECTED]> writes:
> |
> |
> |>The FSF characterizes the PostgreSQL license as being "an X11 style
> |>license."  They felt a need to distinguish between different
> |>variations of licenses that are called 'BSD licenses.'
> |>
> |>The FSF web site then compares various variations on "BSD licenses,"
> |>considering that there are some that they deem to be "free" (in their
> |>terms), and that there are others that they deem to _NOT_ be "free"
> |>(again in their terms).
> |
> |
> | No, that's not what the FSF says.  All the BSD licenses are considered
> | free by the FSF. (Look at the web page yourself.)  Most BSD licenses
> | are compatible with the GPL, but the original BSD license contains a
> | problematic advertising clause that makes it incompatible with the
> | GPL.
> |
> | The Postgres license is a free software license that is GPL
> | compatible.
> 
> Where GPL compatible means (possibly among other things) that I can get
> a BSD-licensed Postgresql and turn it into a GPL-licensed MyPostgresql ?
> 
> Not that I would, just curious. And even if I did, it would be a severely
> castrated postgresql, as the history of the "My" particle suggests :))
> ~ - sorry I couldn't resist.

No, it means you can distribute the two together like on a redhat CD 
without worrying about conflicting licenses.




Re: [ADMIN] License for PostgreSQL for commercial purpose

2004-03-26 Thread Radu-Adrian Popescu
Doug Quale wrote:
| Chris Browne <[EMAIL PROTECTED]> writes:
|
|
|>The FSF characterizes the PostgreSQL license as being "an X11 style
|>license."  They felt a need to distinguish between different
|>variations of licenses that are called 'BSD licenses.'
|>
|>The FSF web site then compares various variations on "BSD licenses,"
|>considering that there are some that they deem to be "free" (in their
|>terms), and that there are others that they deem to _NOT_ be "free"
|>(again in their terms).
|
|
| No, that's not what the FSF says.  All the BSD licenses are considered
| free by the FSF. (Look at the web page yourself.)  Most BSD licenses
| are compatible with the GPL, but the original BSD license contains a
| problematic advertising clause that makes it incompatible with the
| GPL.
|
| The Postgres license is a free software license that is GPL
| compatible.
Where GPL compatible means (possibly among other things) that I can get
a BSD-licensed Postgresql and turn it into a GPL-licensed MyPostgresql ?
Not that I would, just curious. And even if I did, it would be a severely
castrated postgresql, as the history of the "My" particle suggests :))
~ - sorry I couldn't resist.
Cheers,
- --
Radu-Adrian Popescu
CSA, DBA, Developer
Aldratech Ltd.
+40213212243


Re: [ADMIN] License for PostgreSQL for commercial purpose

2004-03-26 Thread Andrew Sullivan
On Thu, Mar 25, 2004 at 06:04:10PM -0500, Chris Browne wrote:
> 
> The FSF web site then compares various variations on "BSD licenses,"
> considering that there are some that they deem to be "free" (in their
> terms), and that there are others that they deem to _NOT_ be "free"
> (again in their terms).

To be fair to the FSF, they have never claimed that the original BSD
license was not a free license.  It just wasn't compatible with the
GPL, because of the advertising clause.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]



Re: [ADMIN] License for PostgreSQL for commercial purpose

2004-03-26 Thread Doug Quale
Chris Browne <[EMAIL PROTECTED]> writes:

> The FSF characterizes the PostgreSQL license as being "an X11 style
> license."  They felt a need to distinguish between different
> variations of licenses that are called 'BSD licenses.'
> 
> The FSF web site then compares various variations on "BSD licenses,"
> considering that there are some that they deem to be "free" (in their
> terms), and that there are others that they deem to _NOT_ be "free"
> (again in their terms).

No, that's not what the FSF says.  All the BSD licenses are considered
free by the FSF. (Look at the web page yourself.)  Most BSD licenses
are compatible with the GPL, but the original BSD license contains a
problematic advertising clause that makes it incompatible with the
GPL.

The Postgres license is a free software license that is GPL
compatible.



Re: [ADMIN] automatic pg_dumpall

2004-03-26 Thread Victor Sudakov
Kemin Zhou wrote:
>>
>>If I have to organize an automatic nightly pg_dumpall, how do I handle
>>authentication? I do not want to create a passwordless superuser (or
>>trust method in pg_hba.conf), and there is no one to enter the password
>>manually.  Is there another recipe?
>>
>>Thanks in advance for any input.
>>
>>  
>>
> use crontab
> run this daily
> su - postgres -c "/usr/local/pgsql/bin/pg_dump -b -Ft -f db.tar -h 
> machine_name_of_db your_db"


I am afraid you did not understand my question. In your scenario,
pg_dump will ask for the authentication of the user "postgres". I
have already stated above that I do not want to create a passwordless
superuser (or trust method in pg_hba.conf).

> 
> Kemin

-- 
Victor Sudakov,  VAS4-RIPE, VAS47-RIPN
2:5005/[EMAIL PROTECTED] http://vas.tomsk.ru/



Re: [ADMIN] License for PostgreSQL for commercial purpose

2004-03-26 Thread Chris Browne
[EMAIL PROTECTED] ("Eric Yum") writes:
> I am a developer of one commercial organization. We are going to
> develop some applications with PostgreSQL 7.3.3. I learn from some
> websites that it cost no charge for developing software with
> PostgreSQL in commercial environment. However, I saw the PostgreSQL
> is under two types of licenses, namely, an X11-style license and a BSD
> license, on the following websites:
>
> http://www.postgresql.org/licence.html
>
> http://www.gnu.org/directory/database/servers/postgresql.html
>
> Please kindly provide professional comment on this issue and suggest
> whether or not the use of PostgreSQL for commercial purposes involves any
> license charge.

What you are observing is that the Free Software Foundation (who hold
the gnu.org domain) felt a need to write up their own interpretation
of what they feel various licenses mean.

The FSF characterizes the PostgreSQL license as being "an X11 style
license."  They felt a need to distinguish between different
variations of licenses that are called 'BSD licenses.'

The FSF web site then compares various variations on "BSD licenses,"
considering that there are some that they deem to be "free" (in their
terms), and that there are others that they deem to _NOT_ be "free"
(again in their terms).

None of that establishes that there is actually more than one license
under which you can obtain PostgreSQL; it merely indicates that the
FSF felt the need to use a different name for the license.

There aren't two licenses; there's just one.  And it allows you to use
PostgreSQL for whatever purpose you like without imposing any
licensing fees.

I also find it quite curious that you intend to deploy applications
with a version of PostgreSQL that is known to have bugs fixed by later
releases.  The fact that version 7.3.6 has been released should be
clearly interpreted as indicating that there were problems with all
preceding versions, and that users are to be encouraged to upgrade to
that version.  

Furthermore, 7.4.2 has been released, offering substantial performance
and other functionality advantages over the 7.3 series.  It would seem
odd to start developing applications with a version that has already
been (arguably) superseded by a new major release.
-- 
let name="cbbrowne" and tld="acm.org" in String.concat "@" [name;tld];;
http://cbbrowne.com/info/linux.html
Rules of the Evil Overlord #130.  "All members of my Legions of Terror
will  have professionally  tailored  uniforms. If  the  hero knocks  a
soldier unconscious and steals the uniform, the poor fit will give him
away." 
