Re: [GENERAL] OVERLAPS

2000-12-01 Thread Mike Castle

On Thu, Nov 30, 2000 at 04:22:20PM +0300, Alex Bolenok wrote:
> peroon=# SELECT OVERLAPS('yesterday', 'today', 'today', 'tomorrow');
> ERROR:  parser: parse error at or near "overlaps"

Try using it as an operator instead of a function:  

test=# select ('yesterday'::timestamp, 'today'::timestamp) overlaps 
('today'::timestamp, 'tomorrow'::timestamp);
 overlaps 
----------
 f
(1 row)

> What am I doing wrong? Is there any way to make this function work?

Is the parser getting confused when it shouldn't (i.e., if the function were
called something besides "OVERLAPS", would it then work)?

mrc
-- 
   Mike Castle   Life is like a clock:  You can work constantly
  [EMAIL PROTECTED]  and be right all the time, or not work at all
www.netcom.com/~dalgoda/ and be right at least twice a day.  -- mrc
We are all of us living in the shadow of Manhattan.  -- Watchmen



Re: [GENERAL] strange error (32 512)

2000-12-01 Thread Martin Jacobs

Hans-Jürgen Schönig wrote:
> ...
> I am writing a perl program that should insert data into a database. It
> works perfectly well when starting the script manually. When the
> starting the script as cron (same user), it collapses ($fehler = 32 512
> which I don't know and can't find).
> 
> $fehler=system("psql $new $pguser < $upload/working 2> /dev/null ");
> 
> Is there any smarter solution using Perl and what in hell means 32
> 512???
> ...

Yes, you might use the Perl DBI module. This supports Pg. Then
you can code like this:

use DBI;

my $dbname = 'foo';      # Your database's name.
my $tablename = 'foobar';
my $conn = DBI->connect("dbi:Pg:dbname=$dbname", $pguser, '', {});
# your SQL stuff, like:
$conn->do("INSERT INTO $tablename (column1) VALUES ('$data')");
$conn->disconnect;

Just to give you an idea. :-)

What does error 32512 mean?

Let me cite some text from the perlfunc man page:

system LIST
Does exactly the same thing as "exec LIST" except
that a fork is done first, and the parent process
waits for the child process to complete.  Note
that argument processing varies depending on the
number of arguments.  The return value is the exit
status of the program as returned by the wait()
call.  To get the actual exit value divide by 256.
...

Ok, dividing 32512 by 256 gives me 127. 

127 is the exec's (and bash's too) return code if the command
could not be found.

Cron jobs start with a reduced environment. Maybe something
important is missing, at the least the search path, so psql isn't
found. Try using an absolute path. Try running the command as a
batch job, don't throw its output away, and look at the resulting
e-mail message.
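To make the arithmetic concrete, here is a small shell sketch (the command name is made up for illustration) showing where the 127, and hence the 32512, comes from:

```shell
# Run a command that does not exist; the shell itself exits with 127.
sh -c 'no_such_command_xyz' 2>/dev/null
echo "exit status: $?"                 # exit status: 127

# Perl's system() returns the raw wait() status: the exit code sits in the
# high byte, so 127 becomes 127 * 256 = 32512. Divide (or shift) to decode.
echo "wait status: $(( 127 * 256 ))"   # wait status: 32512
```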

Good Luck!

Martin

-- 
Dipl-Ing. Martin Jacobs 
Registered Linux User #87175, http://counter.li.org/



Re: [GENERAL] Unanswered questions about Postgre

2000-12-01 Thread Mike Castle

On Thu, Nov 30, 2000 at 12:16:39PM -0800, Mikheev, Vadim wrote:
> Oracle & Interbase have savepoints. Hopefully PG will also have them in 7.2

A feature that I liked from using Faircom's Ctree (granted not an SQL based
DB or one with built in relations) was auto-save points.  So, if something
failed, it could be rolled back to the previous auto-save point.

Just food for thought on how Ctree works.

A transaction, by default, will fail on the commit if there were any errors
within the transaction (though it would happily process all of your
commands after an error without additional failures, so it was less verbose
than the original psql demonstration at the beginning of this thread).
Also, by default, no auto-save points.

One could turn on auto-save points.  (If one wanted "normal" save-point
activities, you would get the save point counter number and then rollback
to that particular save point at some time).  This was convenient if you
wanted to just roll back the last operation that caused the error (this may
have had the side effect of unmarking the fact that an error occurred, but I
don't think so.  There was another command to clear the transaction error,
with lots of disclaimers saying if you did that, you took your own
responsibility).

I guess, in short, what I'm saying is: if save points are added, might as well
add auto-save points while at it, and give the ability to selectively clear
the error and allow a transaction to commit anyway (keeping current method
as default, of course).

mrc
-- 
   Mike Castle   Life is like a clock:  You can work constantly
  [EMAIL PROTECTED]  and be right all the time, or not work at all
www.netcom.com/~dalgoda/ and be right at least twice a day.  -- mrc
We are all of us living in the shadow of Manhattan.  -- Watchmen



Re: [GENERAL] Database cluster?

2000-12-01 Thread Alain Toussaint

> Please forgive my ignorance (I haven't used Postgres for that long), but
> what are shm and sem?

shared memory and semaphores. An interesting tidbit on Linux (2.4.0-test
series at least) is that shared memory can be a nodev filesystem (like proc or
devfs):

/dev/ide/host0/bus0/target0/lun0/part2 on / type ext2 
(rw,errors=remount-ro,errors=remount-ro)
proc on /proc type proc (rw)
/shm on /shm type shm (rw)

and there are at least a few other nodev filesystems in the latest kernel
series:

nodev   shm
nodev   sockfs
nodev   pipefs
nodev   proc
ext2
nodev   devfs

I'm doing some heavy research into these filesystems, but so far there's a
BIG lack of docs.

Alain




[GENERAL] postgres blobs and PHP

2000-12-01 Thread Martin A. Marques

I know I've bothered you with this before, but I just don't get it.
Let's say I have a test table like this in a certain database:

CREATE TABLE test (
id_test SERIAL,
blob_text   oid
);

Now I want to insert a good chunk of text in the blob_text field with PHP.
PHP has some Postgres functions like pg_locreate, pg_lowrite, pg_loread, 
etc., but they never specify the table, and they don't specify the field in 
the table.
So how does PHP know where to write the chunk of text (or the image, or 
whatever)?
Can someone give me a hand?

TIA.

-- 
"And I'm happy, because you make me feel good, about me." - Melvin Udall
-
Martín Marqués  email:  [EMAIL PROTECTED]
Santa Fe - Argentinahttp://math.unl.edu.ar/~martin/
Administrador de sistemas en math.unl.edu.ar
-



[GENERAL] PostgreSQL Shutdowns

2000-12-01 Thread Victor

localhost[/usr/local/pgsql/data]# psql test dbuser
Password:
psql: The Data Base System is shutting down
localhost[/usr/local/pgsql/data]#

-
I am getting weird DB shutdowns. Anybody experience this before ? Any ideas
welcome. What could this be?

I am running Solaris 7
/etc/system has the following shared memory setting:
set shmsys:shminfo_shmmax = 0x

test is a test database created for logins, etc.
dbuser is a user in the db. Its password is correctly entered. Sometimes
there are no problems but other times this failure happens.

Could this be something with persistent connections? What settings set the
length of time that a persistent connection will stay open?

Here are the startup settings (data/postmaster.opts):

-p 5432
-D /usr/local/pgsql/data
-B 64
-b /usr/local/pgsql/bin/postgres
-d 3
-i
-N 32





Re: [GENERAL] RFC: User reviews of PostgreSQL RI functionality

2000-12-01 Thread Joel Burton

> I was wondering if you could tell me where to get the CVS version of
> the ODBC driver -- I have almost implemented foreign key support
> (SQLForeignKeys), but I did it on the ODBC source available on the
> postgres server.  I browsed the postgres CVS repository via the web,
> but couldn't find anything relevant 

It's *possible* that if you're programming directly to the ODBC API 
that you've always seen the RI stuff. I do my work in VBA apps (like 
Access), and I know that they don't see the RI errors.

You can find the ODBC source in the pgsql source, at
src/interfaces/odbc (or something *very* close, no source in front 
of me right now, sorry).

If you don't want to grab the whole source tree just for the ODBC 
stuff, you can use CVS to get just the contents of the 
src/interfaces/odbc directory. The directions say you need to use 
Visual C++ to compile this.

Via ftp, that's in the /dev directory.


--
Joel Burton, Director of Information Systems -*- [EMAIL PROTECTED]
Support Center of Washington (www.scw.org)



Re: [GENERAL] server permissions for sql copy

2000-12-01 Thread Joel Burton

> I'm running PostgreSQL 7.0.2 on Caldera eDesktop 2.4. 
> when I try to use the SQL COPY command in psql, I get
> an error message informing me that the backend could
> not open the file for reading.  I think the error
> number was 13 - Permission denied.  I changed the
> permissions of the directory and file to
> world-readable; but I still get the same error
> message.

Keep in mind it's the *server* process (user postgres), trying to get 
to the file, not the psql user (probably you). Is the file in *your* 
home directory?

You'll need to chmod a+rx this directory, and all directories above 
this directory.

On our server, we've created a directory for pgsql copying and 
dumping that is rwx for postgres and our DBAs.
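A minimal sketch of that setup (the directory name here is made up for illustration):

```shell
# Create a staging directory for COPY files. The backend runs as user
# postgres, so it must be able to traverse (x) and read (r) the whole path.
mkdir -p /tmp/pgcopy
chmod a+rx /tmp/pgcopy           # every parent directory needs a+rx as well

# The data file itself only needs to be readable by the server.
printf '1\tone\n2\ttwo\n' > /tmp/pgcopy/data.txt
chmod a+r /tmp/pgcopy/data.txt

# Then, inside psql:  COPY mytable FROM '/tmp/pgcopy/data.txt';
ls -ld /tmp/pgcopy
```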

--
Joel Burton, Director of Information Systems -*- [EMAIL PROTECTED]
Support Center of Washington (www.scw.org)



Re: [GENERAL] RFC: User reviews of PostgreSQL RI functionality

2000-12-01 Thread Joel Burton

On 1 Dec 2000, at 12:00, Ed Loehr wrote:

> How well is the recently-added PostgreSQL functionality supporting
> referential integrity (RI) working?  Any serious bugs?  Any major
> hindrances?  Other impressions?
> 
> I'd be particularly interested in hearing from people who've
> implemented larger schemas using RI (say, more than 30 tables and 50
> foreign keys).

We've been using a database w/85 tables, with about 60 pkey/fkey 
constraints in place. The db has 8 users, plus it supports a dynamic web 
site (having ~10 users at a time on the site). None of the tables are 
very large (avg ~400 rows) except 3, which have ~65000 rows.

I haven't come across any real problems particular to RI in pgsql. 
The constraints always seem to work; dumping and restoring works 
fine (for RI), etc.

You can do some things that defeat RI--most importantly, if you 
TRUNCATE a table, RI checks are never performed. However, this is 
(IMHO) a good thing, as TRUNCATE is intended solely for DBA use, 
and for DBAs, this means I can truncate a table, while ignoring any 
related records, and reimport (via COPY or INSERT) the data, all w/o 
disturbing any child relationships. This allows me to reconfigure a 
table, delete columns, add other constraints, etc., in a database 
w/o a full dump and restore. When pgsql has all the ALTER TABLE 
DROP COLUMN, ALTER TABLE ALTER COLUMN commands finished, 
this may be less important.

The current "stable" ODBC driver for Windows doesn't work perfectly 
w/RI -- it doesn't report RI errors as an error. The RI rule is still 
obeyed, however, to the ODBC client program, no error is reported, 
so your user may never know that something went wrong. (This is 
fixed in the CVS versions of the ODBC driver, and you can download 
a binary compile from my site at www.scw.org/pgaccess.)

Be aware, though, that because of the way RI is handled internally by 
pgsql, when you dump a database and examine the dump, the RI 
statements are now triggers and not nice clean REFERENCES tblFoo 
ON fieldFOO DDL statements. It reimports perfectly, but is less 
self-documenting.

In the tiny-wishlist department, I would like it if there were an easy 
way to change the referential integrity behavior (delete, block, 
etc.) for an existing relationship.


--
Joel Burton, Director of Information Systems -*- [EMAIL PROTECTED]
Support Center of Washington (www.scw.org)



Re: [GENERAL] More SCO funnies

2000-12-01 Thread Dave Smith

Larry Rosenman wrote:

> * Dave Smith <[EMAIL PROTECTED]> [001201 12:11]:
> 
>> Ok so thanks to Larry I managed to get postgresql compiled and running 
>> on SCO. Now I'm trying to compile a small 'C' program. When I link I get
>> 
>> cc -L/usr/local/pgsql/lib -lpq -lm -o t_postgresql t_postgresql.o 
>> post_util.o
>> undefined   first referenced
>>   symbol in file
>> PQexec  t_postgresql.o
>> PQclear t_postgresql.o
>> PQfinisht_postgresql.o
>> PQresultStatus  t_postgresql.o
>> PQcmdTuples t_postgresql.o
>> PQntuples   t_postgresql.o
>> PQgetvalue  t_postgresql.o
>> PQftype t_postgresql.o
>> PQfmod  t_postgresql.o
>> PQnfields   t_postgresql.o
>> PQfname t_postgresql.o
>> PQsetdbLogint_postgresql.o
>> PQstatust_postgresql.o
>> floor   post_util.o
>> i386ld fatal: Symbol referencing errors. No output written to t_postgresql
>> 
> 
> put the -lpq -lm AFTER the .o file
> 
> 
> 
>> Larry, can you work your magic again?

Well, I guess SCO and Linux are a bit different. I also had to add -lsocket 
to the end and presto!
Thanks for your speedy reply.





Re: [GENERAL] More SCO funnies

2000-12-01 Thread Larry Rosenman

* Dave Smith <[EMAIL PROTECTED]> [001201 13:05]:
> Larry Rosenman wrote:
> 
> > * Dave Smith <[EMAIL PROTECTED]> [001201 12:11]:
> > 
> >> Ok so thanks to Larry I managed to get postgresql compiled and running 
> >> on SCO. Now I'm trying to compile a small 'C' program. When I link I get
> >> 
> >> cc -L/usr/local/pgsql/lib -lpq -lm -o t_postgresql t_postgresql.o 
> >> post_util.o
> >> undefined   first referenced
> >>   symbol in file
> >> PQexec  t_postgresql.o
> >> PQclear t_postgresql.o
> >> PQfinisht_postgresql.o
> >> PQresultStatus  t_postgresql.o
> >> PQcmdTuples t_postgresql.o
> >> PQntuples   t_postgresql.o
> >> PQgetvalue  t_postgresql.o
> >> PQftype t_postgresql.o
> >> PQfmod  t_postgresql.o
> >> PQnfields   t_postgresql.o
> >> PQfname t_postgresql.o
> >> PQsetdbLogint_postgresql.o
> >> PQstatust_postgresql.o
> >> floor   post_util.o
> >> i386ld fatal: Symbol referencing errors. No output written to t_postgresql
> >> 
> > 
> > put the -lpq -lm AFTER the .o file
> > 
> > 
> > 
> >> Larry, can you work your magic again?
> 
> Well I guess sco and linux are a bit different. Also had to add -lsocket 
> to the end and presto!
> Thanks for your speedy reply
NP.  

It's generally a good idea to specify libraries after the object
files.  Most "REAL" Unix ld's go in order. 

LER

> 

-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



Re: [GENERAL] More SCO funnies

2000-12-01 Thread Larry Rosenman

* Dave Smith <[EMAIL PROTECTED]> [001201 12:11]:
> Ok so thanks to Larry I managed to get postgresql compiled and running 
> on SCO. Now I'm trying to compile a small 'C' program. When I link I get
> 
> cc -L/usr/local/pgsql/lib -lpq -lm -o t_postgresql t_postgresql.o 
> post_util.o
> undefined   first referenced
>   symbol in file
> PQexec  t_postgresql.o
> PQclear t_postgresql.o
> PQfinisht_postgresql.o
> PQresultStatus  t_postgresql.o
> PQcmdTuples t_postgresql.o
> PQntuples   t_postgresql.o
> PQgetvalue  t_postgresql.o
> PQftype t_postgresql.o
> PQfmod  t_postgresql.o
> PQnfields   t_postgresql.o
> PQfname t_postgresql.o
> PQsetdbLogint_postgresql.o
> PQstatust_postgresql.o
> floor   post_util.o
> i386ld fatal: Symbol referencing errors. No output written to t_postgresql
> 
put the -lpq -lm AFTER the .o file


> Larry, can you work your magic again?

-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



[GENERAL] database-level user privileges

2000-12-01 Thread Zachary Beane

I'd like to set up a database so that two users can create, alter,
drop, and grant on any table, regardless of which of the two users
initially created the table. Is this possible?

(The documentation has a blank spot under "Database Privileges".)

Zach
-- 
[EMAIL PROTECTED] Zachary Beane http://www.xach.com/



Re: [GENERAL] More SCO funnies

2000-12-01 Thread Alfred Perlstein

* Dave Smith <[EMAIL PROTECTED]> [001201 10:18] wrote:
> Ok so thanks to Larry I managed to get postgresql compiled and running 
> on SCO. Now I'm trying to compile a small 'C' program. When I link I get
> 
> cc -L/usr/local/pgsql/lib -lpq -lm -o t_postgresql t_postgresql.o 
> post_util.o
> undefined   first referenced
>   symbol in file
> PQexec  t_postgresql.o
> PQclear t_postgresql.o
> PQfinisht_postgresql.o
> PQresultStatus  t_postgresql.o
> PQcmdTuples t_postgresql.o
> PQntuples   t_postgresql.o
> PQgetvalue  t_postgresql.o
> PQftype t_postgresql.o
> PQfmod  t_postgresql.o
> PQnfields   t_postgresql.o
> PQfname t_postgresql.o
> PQsetdbLogint_postgresql.o
> PQstatust_postgresql.o
> floor   post_util.o
> i386ld fatal: Symbol referencing errors. No output written to t_postgresql

This looks like a problem on your end, post_util.o references 'floor'
and even with -lm it's not being found.  Can you compile something 
with post_util.o without t_postgresql.o to see if it's you or us? :)

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



[GENERAL] More SCO funnies

2000-12-01 Thread Dave Smith

Ok so thanks to Larry I managed to get postgresql compiled and running 
on SCO. Now I'm trying to compile a small 'C' program. When I link I get

cc -L/usr/local/pgsql/lib -lpq -lm -o t_postgresql t_postgresql.o 
post_util.o
undefined   first referenced
  symbol in file
PQexec  t_postgresql.o
PQclear t_postgresql.o
PQfinisht_postgresql.o
PQresultStatus  t_postgresql.o
PQcmdTuples t_postgresql.o
PQntuples   t_postgresql.o
PQgetvalue  t_postgresql.o
PQftype t_postgresql.o
PQfmod  t_postgresql.o
PQnfields   t_postgresql.o
PQfname t_postgresql.o
PQsetdbLogint_postgresql.o
PQstatust_postgresql.o
floor   post_util.o
i386ld fatal: Symbol referencing errors. No output written to t_postgresql

Larry, can you work your magic again?




Re: [GENERAL] Unanswered questions about Postgre

2000-12-01 Thread Joel Burton

> > What's nice about PostgreSQL is that, while it hasn't always had
> > every SQL92 feature (like outer joins, etc.), it seems to have less
> > legacy, nonstandard stuff wired in. :-)
> 
> Oh man, you have no idea.  PostgreSQL is legacy headquarters.  ;-)

Yes, yes, I know about *some* of them [the 8k limit springs to mind!] 
(C hackers no doubt know *lots* more.) But, in terms of what comes 
out in our SQL syntax, compared to Oracle, we're free and clear.

--
Joel Burton, Director of Information Systems -*- [EMAIL PROTECTED]
Support Center of Washington (www.scw.org)



[GENERAL] RFC: User reviews of PostgreSQL RI functionality

2000-12-01 Thread Ed Loehr

How well is the recently-added PostgreSQL functionality supporting
referential integrity (RI) working?  Any serious bugs?  Any major
hindrances?  Other impressions?

I'd be particularly interested in hearing from people who've implemented
larger schemas using RI (say, more than 30 tables and 50 foreign keys).

Regards,
Ed Loehr



Re: [GENERAL] Database cluster?

2000-12-01 Thread Gordan Bobic

> > Then you can connect to any of the postgres on your cluster, for
> > example: round robin.
> >
> >Hmm... But is this really what we want to do? This is less than ideal
> >for several reasons (if I understand what you're saying correctly).
> >Replication is off-line for a start, and it only works well for a
> >system that has few inserts and lots of selects, probably from a lot
> >of different users. Probably a good thing for applications like web
> >search engines, but not necessarily for much else.
>
> *** it isn't replication. It's that your cluster behaves like a
> single-computer. You modify the 'OS' (GFS + DIPC), not postgresql.

OK, that makes sense. Kind of like Mosix, then. But like mosix, this would
require lots of network bandwidth - or not, depending on how good GFS is at
figuring our what goes where.

> > > Another issue are datafiles; GFS seems promising. But postgresql
> > > uses fcntl, and GFS (globalfilesystem.org) doesn't support it yet.
> > > A distributed filesystem with locking etc. is required. Ideas?
> >
> >Hmm... I am not sure that a distributed file system is what we want
> >here. I think it might be better to have separate postgres databases
> >on separate local file systems, and handle putting the data together
> >on a higher level. I think this would be better for both performance
> >and scalability. Having
>
> ***yes... but WHEN can we have these features? No one has done it till
> now; I've requested and searched but got almost no reply.

Well, if you come up with a detailed design, I'm quite happy to help with
coding individual functions...

> >one big file system is likely to incur heavy network traffic penalties,
> >and that is not necessary, as it can be avoided by just having the
> >distribution done on a database level, rather than file system level.
> >
> >But then again, the distributed file system can be seen as a "neater"
> >solution, and it might work rather well, if they get the caching right
> >with the correct "near-line" distribution of data across the network
> >file system to make sure that the data is where it is most useful. In
> >other words, make sure that the files (or even better, inodes) that
> >are frequently accessed by a computer are on that computer.
> >
> >Still there is the issue of replication and redundancy.
>
> ***GFS does it transparently.

But wouldn't this all be incredibly network intensive? Could we implement
something that would make a process go to the data, instead of the other
way around? In database, data is typically bigger than the process
accessing it...

> >Indeed. As such, it should probably be the first thing to do toward
> >"clustering" a database. Still, it would be good to have a clear
> >development path, even though on that path we kludge things slightly at
> >various steps in order to have a usable system now, as opposed to a
> >"perfect" system later.
> >
>
> *** yes, I want clustering now... and I'm alone.

No, you're not. I NEED clustering now. Eventually the number of records and
tables comes and bites you, no matter how much you optimize your
application. And for most of us mere mortals, buying a Cray for running a
database is just not a viable option...

> In my opinion, if GFS will do fcntl (and we can ask the GFS people, I
> think), the stuff in this email can be done rapidly.

Well, I think you've just volunteered to contact them. :-)

> >A shared-all approach is not necessarily that bad. It is (as far as I
> >can tell) not better or worse than a "share nothing" approach. They
> >both have pros and cons. Ideally, we should work toward coming up with
> >an idea for a hybrid system that would pick the best of both worlds.
> >
> > > This system can give a sort of single-system-image, useful to
> > > distribute other software beyond postgresql.
> >
> >Indeed. This is always a good thing for scalability for most
> >applications, but databases have their specific requirements which may
> >not be best catered for by standard means of distributed processing.
> >Still, what you are suggesting would be a major improvement, from
> >where I'm looking at it, but I am probably biased by looking at it
> >from the point of view of my particular application.
> >
> > > Also Mariposa (http://s2k-ftp.cs.berkeley.edu:8000/mariposa/) seems
> > > interesting, but it's not maintained and it's for an old postgresql
> > > version.
> >
> >Hmm... Most interesting. There could be something recyclable in there.
> >Must look at the specs and some source later...
> >
>
> *** I've compiled it, but with no results.
> An idea is to get a diff against the corresponding pure postgresql
> version (6.4/5?), then study the patch, and grab the secrets to fuse
> into the current version. The research papers seem very good. Perhaps
> someone who has worked on Mariposa can help...

See above comment...

> My goal is to have a clustered open source database with the least effort
> possible, now.
>
> The project to do good stuff (ie code) in this field is very long...

Indeed. Th

RE: [GENERAL] Modify Column

2000-12-01 Thread Tamsin

This worked for me: 

update pg_attribute set atttypmod = 104 where attname = 'column_name' and
attrelid = (select oid from pg_class where relname = 'tablename');

to set a varchar column 'column_name' in 'tablename' to a size of 100
(atttypmod stores the declared length plus 4 bytes of varlena header,
hence 104).

Tamsin

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of
> [EMAIL PROTECTED]
> Sent: 01 December 2000 16:32
> To: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] Modify Column
> 
> 
> 
> 
> I don't know if you can use ALTER TABLE to do this,
> but you could rename the old column, add a new column with
> the right name and size, and use an UPDATE statement to
> copy the data in.
> 
> Unfortunately I had problems dropping the old column, since
> ALTER TABLE xxx DROP COLUMN is not implemented in
> the version of PostgreSQL I'm using - I wonder if it's in a later release?
> If not, you might have to create a whole new table and copy the data
> in with an INSERT INTO xxx SELECT * FROM zzz;
> 
> Hope this helps; maybe someone else knows the ultimate way of 
> doing this :)
> 
> MC
> 
> 
> 
> 
> 
> [EMAIL PROTECTED] on 01/12/2000 15:23:03
> 
> Please respond to [EMAIL PROTECTED]
> 
> To:   PostgreSQL General <[EMAIL PROTECTED]>
> cc:(bcc: Martin Chantler/CIMG/CVG)
> Subject:  [GENERAL] Modify Column
> 
> 
> 
> 
> This seems like a simple enough thing, and I'm sure it's been answered,
> but I couldn't find anything that helped in the archives. Basically, I
> have a column in my table that was, once upon a time, large enough, but
> now, I need to increase the size of the column. How can I do that?
> Thanks,
> 
> Joe
> 
> 
> 
> 
> 
> 
> 
> 



Re: [GENERAL] Database cluster?

2000-12-01 Thread Valter Mazzola




>From: "Gordan Bobic" 
>Subject: Re: [GENERAL] Database cluster? 
>Date: Fri, 1 Dec 2000 10:13:55 -
>
> > I've successfully patched linux kernel 2.2.17 with DIPC and modified
> > postgresql's src (src/backend/storage/ipc/ipc.c) to create distributed
> > shm and sem.
>
>Please forgive my ignorance (I haven't used Postgres for that long), but 
>what are shm and sem?
>

shared memory and semaphores

> > The strategy is then to start a postgresql that creates shm and sem on
> > ONE machine, then start other postgres on other machines on the cluster
> > that create NO shared structures (there is a command line flag to do
> > this).
>
>So, one "master" and lots of "slaves", right?
>

no, every machine is totally similar to the others; the only difference is 
that only ONE machine creates the (network-distributed by DIPC) shared 
memory and semaphores.


> > Then you can connect to any of the postgres on your cluster, for
> > example: round robin.
>
>Hmm... But is this really what we want to do? This is less than ideal for 
>several reasons (if I understand what you're saying correctly). Replication 
>is off-line for a start, and it only works well for a system that has few 
>inserts and lots of selects, probably from a lot of different users. 
>Probably a good things for applications like web search engines, but not 
>necessarily for much else.

*** it isn't replication. It's that your cluster behaves like a 
single-computer. You modify the 'OS' (GFS + DIPC), not postgresql.


>
> > Another issue are datafiles; GFS seems promising. But postgresql uses
> > fcntl, and GFS (globalfilesystem.org) doesn't support it yet. A
> > distributed filesystem with locking etc. is required. Ideas?
>
>Hmm... I am not sure that a distributed file system is what we want here. I 
>think it might be better to have separate postgres databases on separate 
>local file systems, and handle putting the data together on a higher level. 
>I think this would be better for both performance and scaleability. Having

***yes... but WHEN we can have these features ? No one have done it till 
now, i've requested and searched but almost no reply.

>one big file system is likely to incur heavy network traffic penalties, and 
>that is not necessary, as it can be avoided by just having the distribution 
>done on a database level, rather than file system level.
>
>But then again, the distributed file system can be seen as a "neater" 
>solution, and it might work rather well, if they get the caching right with 
>the correct "near-line" distribution of data across the network file system 
>to make sure that the data is where it is most useful. In other words, make 
>sure that the files (or even better, inodes) that are frequently accessed 
>by a computer are on that computer).
>
>Still there is the issue of replication and redundancy.

***GFS does it transparently.

>I just think that
>for a database application, this would be best done on the database level, 
>rather than a file system level, unless the distributed file system in use 
>was designed with all the database-useful features in mind.
>
> > Another issue is that DIPC doesn't have a failover mechanism.
>
>Again, for a database, it might be best to handle it at a higher level.
>
> > This is a shared-all approach; it's not the best, but probably it's the
> > fastest solution (bad) to implement, with little modifications (4-5
> > lines) to postgresql sources.
>
>Indeed. As such, it should probably be the first thing to do toward 
>"clustering" a database. Still, it would be good to have a clear 
>development path, even though on that path we kludge things slightly at 
>various steps in order to have a usable system now, as opposed to a 
>"perfect" system later.
>

*** yes, I want clustering now... and I'm alone.
In my opinion, if GFS will do fcntl (and we can ask the GFS people, I think), 
the stuff in this email can be done rapidly.


>A shared all approach is not necessarily that bad. It is (as far as I can 
>tell), not better or worse than a "share nothing" approach. They both have 
>pros and cons. Ideally, we should work toward coming up with an idea for a 
>hybrid system that would pick the best of both worlds.
>
> > This system can give a sort of single-system-image, useful to distribute 
> > other software beyond postgresql.
>
>Indeed. This is always a good thing for scalability for most applications, 
>but databases have their specific requirements which may not be best 
>catered for by standard means of distributed processing. Still, what you 
>are suggesting would be a major improvement, from where I'm looking at it, 
>but I am probably biased by looking at it from the point of view of my 
>particular application.
>
> > Also Mariposa (http://s2k-ftp.cs.berkeley.edu:8000/mariposa/) seems
> > interesting, but it's not maintained and it's for an old postgresql
> > version.
>
>Hmm... Most interesting. There could be something recyclable in there. Must 
>look at the specs and some s

Re: [GENERAL] Modify Column

2000-12-01 Thread joe

Thanks - It's a fairly small table so I decided to create a new field and
update the table. My syntax was
UPDATE tablename SET new_column = old_column WHERE uid = uid;
However it says: Relation 'tablename' does not have attribute 'new_column'

If I do \d tablename, it shows the new column. Did I miss a step? Thanks,

Joe

Len Morgan wrote:

> >I have a column in my table that was, once upon a time, large enough, but
> >now, I need to increase the size of the column. How can I do that?
>
> Basically, you can't.  What you need to do is dump the table, then edit the
> dumped table definition to increase the size, drop the table and then source
> back in the dumped version with the larger field definition.
>
> Although it would waste a lot of space, you could also create a new field
> that IS large enough, update that field with the contents from the old
> field, then rename the two fields so that the new one has the name of the
> old one.
>
> Sounds hokey but it works.
>
> len morgan




Re: [GENERAL] Modify Column

2000-12-01 Thread martin . chantler



I don't know if you can use ALTER TABLE to do this,
but you could rename the old column, add a new column with
the right name and size, and use an UPDATE statement to
copy the data in.

Unfortunately I had problems dropping the old column, since
ALTER TABLE xxx DROP COLUMN is not implemented in
the version of PostgreSQL I'm using - I wonder if it's in a later release?
If not, you might have to create a whole new table and copy the data
in with an INSERT INTO xxx SELECT * FROM zzz;

Hope this helps; maybe someone else knows the ultimate way of doing this :)

MC
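The whole-new-table route mentioned above might look like this (names are illustrative; since DROP COLUMN isn't available, the old table is dropped wholesale after the copy):

```sql
CREATE TABLE customers_new (
    id   int,
    name varchar(200)          -- the widened column
);
INSERT INTO customers_new SELECT id, name FROM customers;
DROP TABLE customers;
ALTER TABLE customers_new RENAME TO customers;
```

Any indexes, constraints, or grants on the old table have to be recreated by hand afterwards.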





[EMAIL PROTECTED] on 01/12/2000 15:23:03

Please respond to [EMAIL PROTECTED]

To:   PostgreSQL General <[EMAIL PROTECTED]>
cc:(bcc: Martin Chantler/CIMG/CVG)
Subject:  [GENERAL] Modify Column




This seems like a simple enough thing, and I'm sure it's been answered,
but I couldn't find anything that helped in the archives. Basically, I
have a column in my table that was, once upon a time, large enough, but
now, I need to increase the size of the column. How can I do that?
Thanks,

Joe










Re: [GENERAL] Modify Column

2000-12-01 Thread Robert B. Easter

On Friday 01 December 2000 10:23, [EMAIL PROTECTED] wrote:
> This seems like a simple enough thing, and I'm sure it's been answered,
> but I couldn't find anything that helped in the archives. Basically, I
> have a column in my table that was, once upon a time, large enough, but
> now, I need to increase the size of the column. How can I do that?
> Thanks,
>
> Joe

You can dump the database to file.sql, edit file.sql and change the size of 
the column, save it, drop the database, and reload it from file.sql.

-- 
 Robert B. Easter  [EMAIL PROTECTED] -
- CompTechNews Message Board   http://www.comptechnews.com/ -
- CompTechServ Tech Services   http://www.comptechserv.com/ -
-- http://www.comptechnews.com/~reaster/ 



Re: [GENERAL] Unanswered questions about Postgre

2000-12-01 Thread Peter Eisentraut

Joel Burton writes:

> What's nice about PostgreSQL is that, while it hasn't always had
> every SQL92 feature (like outer joins, etc.), it seems to have less
> legacy, nonstandard stuff wired in. :-)

Oh man, you have no idea.  PostgreSQL is legacy headquarters.  ;-)

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




Re: [GENERAL] server permissions for sql copy

2000-12-01 Thread Tom Lane

Andrew Gould <[EMAIL PROTECTED]> writes:
> I'm running PostgreSQL 7.0.2 on Caldera eDesktop 2.4. 
> when I try to use the SQL COPY command in psql, I get
> an error message informing me that the backend could
> not open the file for reading.  I think the error
> number was 13 - Permission denied.  I changed the
> permissions of the directory and file to
> world-readable; but I still get the same error
> message.

What about the directories above the one containing the file?

Also, don't forget that the critical permission for a directory
is execute (= "allow lookup of entries"), not read (= "make a
directory listing").  You probably want permissions rwxr-xr-x
or rwxrwxr-x on directories that need to be publicly readable.

regards, tom lane
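If fixing the directory permissions is awkward, psql's \copy is a possible workaround: it reads the file on the client side with your own permissions and ships the rows over the connection, so the backend never opens the file itself (path and table name below are made up):

```sql
-- Server-side COPY: the backend (postgres user) must reach the file.
COPY mytable FROM '/home/andrew/data.txt';

-- Client-side alternative: psql opens the file, so only your own
-- read permission matters.
\copy mytable from '/home/andrew/data.txt'
```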



[GENERAL] Modify Column

2000-12-01 Thread joe

This seems like a simple enough thing, and I'm sure it's been answered,
but I couldn't find anything that helped in the archives. Basically, I
have a column in my table that was, once upon a time, large enough, but
now, I need to increase the size of the column. How can I do that?
Thanks,

Joe




Re: [HACKERS] Re: [GENERAL] PHPBuilder article -- Postgres vsMySQL

2000-12-01 Thread Daniel Wickstrom

> "Frank" == Frank Joerdens <[EMAIL PROTECTED]> writes:

Frank> Which forum is that? I'm asking because I am currently
Frank> trying to migrate from PHP to Openacs, or integrate the two
Frank> somehow (lurking on this list has convinced me that openacs
Frank> deserves a hard look indeed), and thus I'd like to find a
Frank> forum where relevant stuff is being discussed.


Check out the bboard forums at http://openacs.org/

-Dan



Re: [GENERAL] Database cluster?

2000-12-01 Thread Dave Smith

Have you looked at intermezzo? http://www.inter-mezzo.org/

Valter Mazzola wrote:

> I've successfully patched Linux kernel 2.2.17 with DIPC and modified
> postgresql's src  (src/backend/storage/ipc/ipc.c) to create distributed 
> shm and sem.
> 
> The strategy is then to start a postgresql that creates shm and sem on 
> ONE machine, then start other postgres on other machines on the cluster 
> that create NO shared structures ( there is a command line flag to do 
> this).
> 
> Then you can connect to any of the postgres backends on your cluster, for 
> example: round robin.
> 
> Another issue are datafiles, GFS seems promising.
> But postgresql uses fcntl, and GFS (globalfilesystem.org) doesn't support 
> it yet.
> A distributed filesystem with locking etc. is required, Ideas ?
> 
> 
> Another issue is that DIPC doesn't have a failover mechanism.
> 
> This is a shared-everything approach; it's not the best, but it's probably 
> the quickest (if crude) to implement, needing only small modifications 
> (4-5 lines) to the postgresql sources.
> 
> This system can give a sort of single-system-image, useful to distribute 
> other software beyond postgresql.
> 
> Also Mariposa (http://s2k-ftp.cs.berkeley.edu:8000/mariposa/) seems 
> interesting,
> but it's not maintained and it's for an old postgresql version.
> 
> hoping for clustering...
> valter mazzola.
> 
> 
>> From: Alain Toussaint <[EMAIL PROTECTED]>
>> To: PostgreSQL general list <[EMAIL PROTECTED]>
>> Subject: Re: [GENERAL] Database cluster?
>> Date: Thu, 30 Nov 2000 15:05:16 -0500 (EST)
>> 
>>  > Somebody mentioned the fact that postgres uses IPC for communicating
>>  > between processes. I think there are tools for clustering (I am not 
>> sure if
>>  > Mosix supports transparently allowing IPC across nodes) which can work
>>  > around that.
>> 
>> one of those tools is distributed ipc but
>> it only works with Linux. AFAIK, the software there is just a patch to the
>> Linux kernel and a daemon.
>> 
>> Alain
>> 
> 
> 




SV: [GENERAL] Unanswered questions about Postgre

2000-12-01 Thread Jarmo Paavilainen

Hi,

...
> > > That is what transactions are for. If any errors occur, then the
> > > transacction is aborted. You are supposed to use transactions when you
want
> > > either everything to occur (the whole transaction), or nothing, if an
> > > error occurs.

And that's wrong!

The caller should have a chance to handle the error. Like if an "insert"
fails, you might want to use "update" instead. It should be the caller who
decides if the transaction should be aborted ("rollback") or not.

As it is now transactions are _totally_ useless with DBMSs that serve more
than one client.

...
> > There is obviously no
> > reason why a transaction needs to be aborted for syntax errors.

Absolutely correct. It should be the caller who decides what he wants to do
with the transaction (rollback, or just continue as if nothing happened).

...
> A bank is transferring money from one acount to another. Say the money
> leaves the first account (first update query), and then an error occurs
> when inserting the money into the second account (second update query). If
...

Schematic code snippet:

BEGIN;
UPDATE account SET credit = credit + 100;
if( error )
{
    INSERT INTO account (credit, debet) VALUES( 100, 0 );
    if( error )
    {
        ROLLBACK;
        return FAILED;
    }
}
UPDATE account SET debet = debet + 100;
if( error )
{
    INSERT INTO account (credit, debet) VALUES( 0, 100 );
    if( error )
    {
        ROLLBACK;
        return FAILED;
    }
}
COMMIT;

That is the _correct_ way to do a bank transaction. And that is how
transactions should work.
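For what it's worth, PostgreSQL (through the 7.x line, at least) cannot run this pattern as written: the first statement that errors puts the whole transaction into an aborted state, and every later statement is ignored until the block ends. Schematically:

```sql
BEGIN;
UPDATE account SET credit = credit + 100;      -- suppose this one errors
INSERT INTO account (credit, debet) VALUES (100, 0);
-- PostgreSQL rejects this and everything after it: the transaction is
-- already marked aborted, and COMMIT at this point rolls back anyway.
ROLLBACK;
```

So on this server the caller-decides style only works if each statement runs in its own transaction.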

...
> That is the whole point of transactions - they are used for an
> "all-or-nothing" approach.

Correct, but it should be the caller who decides what to do. Not the DBMS.

...
> The transaction succeeds, and you end up with two phones with the same
> number. BAD thing.

You're still wrong about the correct DBMS behaviour. It should be the
caller's decision, not the DBMS's.

> > necessary.   If you don't believe me, here are two fully SQL-92 compliant
> > databases, Oracle and interbase, which do not exhibit this behavior:

I do not give a sh** about SQL9_. There is nothing that forbids a DBMS from
being better than the standard.

...
> So, what would you like to be the criteria for aborting or proceeding with
> a transaction?

The DBMS should not try to guess what I want to do with a transaction. It
should report all errors to me (the caller) and let me decide what to do
with the transaction, period.

...
> > > If you don't like this behaviour, then use auto-commit, and make every

And that's stupid.

...
> > grouping a set of statements and commiting them or rolling them back as
> > a whole.  I do not, however, want the transaction aborted by the server

That's how it should be.

...
> > when it does not need to be.  Clearly in the above case, neither
> > interbase nor oracle decided that the transaction had to be aborted.

Neither does Sybase or MSSQL.

// Jarmo