Re: [GENERAL] to BLOB or not to BLOB

2001-05-14 Thread KuroiNeko

> I am going to write an application that writes big amounts of plain text
> into a database.

 What kind of data are those? Articles? News/mail messages? Other?

> I thought of using the text type for this, but I don't know if it has a
> maximum length, and given that these will be very long texts I started
> wondering if these would have to be blobs... but they aren't binary.

 Well, plain text is a `subset' of `binary,' so storing it as a BLOB
shouldn't be an issue. My home-brewed email archive stores message bodies as
BLOBs with absolutely no problem.
 As for the length limit of the text type: in older versions of PGSQL a row
had to fit within the page size (default 8 KB, configurable at compile time).
In TOASTed PGSQL there's probably no practical limit set by the DBMS itself.

> So, does the text data type have a maximum length? Do BLOBs? What should I
> use? For the time being, at least, I won't be searching these texts with a
> search engine or anything... but if I were to be, what considerations
> should I take into account when designing the tables?

 As you already figured, the answer to your question depends on what you
want from your system. Maybe you'll be OK with BLOBs, but keep in mind that
pg_dump can't handle them, and you have to do somewhat tricky things to back
up and restore a PGSQL DB that contains BLOBs.
 Another thing to consider is searchability and indexes. If you really
don't want to (and are never going to) search your big fields, your best bet
is to store just file names and keep the long chunks of text in external
files. Sure, you can store them in TOASTed fields if you need indexes,
but _IMHO_ it has yet to be proven that the simplicity of design and
implementation is worth the _possible_ performance degradation. I believe we
have yet to see a good, fast index built on a 100 KB text field.
 Maybe what you need is FTS, because IMNSHO there's not much use for an
attribute that can't be indexed and searched (updated, joined, grouped on)
swiftly. Everything relatively small and fast to process goes into the DB;
huge portions of not-easily-indexable data should stay outside.
 Actually, FTS is something of a last resort. You may wish to split your
text into smaller fragments that carry a similar semantic (logical,
whatever) load.
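 And a minimal sketch of the file-name approach (again, all names here are
invented for illustration): the table keeps only metadata and a path, and
the application reads the file itself:

create table documents (
    id       serial primary key,
    title    text,
    filepath text    -- e.g. '/var/data/docs/1234.txt', managed by the application
);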






[GENERAL] Re: Invoices

2001-05-14 Thread Vince Vielhaber

On Sun, 13 May 2001, Louis-David Mitterrand wrote:

> * On Thu, May 03, 2001 at 07:45:23PM -0400, Vince Vielhaber wrote:
> > On Thu, 3 May 2001, Ludwig Meyerhoff wrote:
> >
> > > Hallo!
> > >
> > > Maybe this is a bit off-topic, as this problem is more a "design" one, but
> > > I wanted to write a web application to write invoices more easily. I wonder
> > > whether it is a good idea to try this using Postgres or whether it would be
> > > better to write the data of each invoice to a separate file in a separate
> > > directory.
> > >
> > > I doubt it was a good idea to put all the data into one database like
> >
> > I wrote a couple of programs for various projects to do this (one was
> > for someone else so I can't release it).  I kept all the data in their
> > own tables - account info in the account table, customer info in the
> > customer table, etc. - and put the data together as I generated the
> > invoice.  Simple, huh?  Not exactly.
> >
> > Here's where the problem arises.  You can create a really good looking
> > invoice, in fact you can create a whole bunch of really good looking
> > invoices.  What you can't do with a web app is send a form feed to the
> > printer!
>
> Yes you can. Try the CSS2 reference manual:
>
>   13.3 Page breaks
>
>   The following sections explain page formatting in CSS2. Five properties
>   indicate where the user agent may or should break pages, and on what
>   page (left or right) the subsequent content should resume. Each page
>   break ends layout in the current page box and causes remaining pieces of
>   the document tree to be laid out in a new page box.
>
> And it works, we use it in our own app.

Yes, that was pointed out last week.  It works with Opera and IE, not
with Mozilla or any of the Netscapes (4 or 6), but it is a start.

Vince.
-- 
==
Vince Vielhaber -- KA8CSH    email: [EMAIL PROTECTED]    http://www.pop4.net
 56K Nationwide Dialup from $16.00/mo at Pop4 Networking
        Online Campground Directory    http://www.camping-usa.com
   Online Giftshop Superstore    http://www.cloudninegifts.com
==







[GENERAL] Re:

2001-05-14 Thread Gregory Wood

> Could you just tell me if it is possible to have a Windows client and the
> server running on Linux with Postgres, and have the two communicate through
> something like ODBC?

Yes, although it's preferable to use a native library.

> Does Postgres have anything for advanced features, like OLE DB instead
> of ODBC?

No OleDB, but there is a JDBC driver.

> Thirdly, is the Postgres ODBC driver already installed along with the
> Postgres installation, or is any specific fine-tuning required?

I'm not sure exactly what you're asking here, but the ODBC driver is a
separate download which requires a few configuration variables (like where
the server is) to find and work with your server. It's not plug and play
(but then again, neither is *any* ODBC driver).

> If the answer to the first one is yes, then the fourth question is: how
> do I do that?

You set up the server on your Linux box. Make sure you run it to accept TCP
connections (using the -i switch) and that your Windows box has permissions
to access the database (via the pg_hba.conf file). You might want to test it
with some Windows tool to make sure everything is working at this stage.
Then install the ODBC driver and point it to your PostgreSQL server. At that
point you should be able to access it like any other ODBC data source.
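Roughly, and with a made-up data directory and subnet, the server side looks
something like this (check the comments in your pg_hba.conf for the exact
format your version expects):

# start the postmaster with TCP/IP connections enabled
postmaster -i -D /usr/local/pgsql/data

# pg_hba.conf: let a Windows box on the local network reach all databases
# host   DATABASE   IP_ADDRESS     ADDRESS_MASK     AUTH_TYPE
host     all        192.168.0.0    255.255.255.0    password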

Greg





[GENERAL] Re: What's the best front end/client under MS Windows?

2001-05-14 Thread Gregory Wood

I'm partial to the ZEOS Database Explorer myself. I don't have experience
with a multitude of tools, so the best I can offer is that I have no
problems with the latest version :) Actually, I can tell you that it has the
advantage of a native interface (rather than relying on ODBC). And it's
free, which is always an advantage.

Oh, and it does have the disadvantage of not showing NOTICEs, which is a
pain in the ass when doing benchmarking via EXPLAIN, for which I'm forced to
use psql. There were some problems in older versions (specifically it was
having trouble stripping out spaces on newlines and with quoting on
triggers), but those problems seem to be resolved with the latest version.
It's not the most stable program, but most people don't use Windows for
stability...

Address is http://www.zeoslib.org/  but it appears to be down at the moment.

Greg

- Original Message -
From: "Walter Chong" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, May 11, 2001 11:03 PM
Subject: What's the best front end/client under MS Windows?


> Kindly list the advantages/disadvantages for me, THX!





[GENERAL] Re: 7.0.0 long name truncation problem

2001-05-14 Thread Gregory Wood

> When the name of a sequence auto-generated from a SERIAL type would be
> longer than 32 chars, it appears that "CREATE TABLE" uses a different
> name truncation algorithm than "DROP SEQUENCE".  Example below.  Note
> the difference between the following:
>
> 'process_state_subscripti_id_seq'
> 'process_state_subscription_id_s'

The problem is that the CREATE TABLE statement uses the table name, the field
name, and the suffix "_seq" to generate the sequence name. Because it
knows those values, it is able to truncate them intelligently.

The DROP SEQUENCE statement doesn't know the table name, the field name, or
even that the sequence is being used for a SERIAL field. All it knows is
that the name can't be longer than 32 characters. So when you feed it a
string, the only thing it can really do is truncate the end. It *might* be
possible to parse the string based on separators (underscores), except that
in your example you use underscores in your table/field names as well, so
what's it to do?

> Might be fixed in 7.1, I dunno.  Can anyone confirm this is a problem?

It's a problem for people like you and me, but it's expected behavior.
Personally, I'd love to see someone add a DROP SERIAL that would accept the
Table and Field name and then generate the DROP SEQUENCE statement for you
(hint, hint *g*).
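In the meantime, rather than guessing at the truncation, you can look the
generated name up in the system catalogs and paste it into your DROP
SEQUENCE; for example:

-- list all sequences so you can copy the exact (truncated) name
SELECT relname FROM pg_class WHERE relkind = 'S';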

Greg





Re: [GENERAL] Speeding up Query

2001-05-14 Thread Tom Lane

Alexander Lohse <[EMAIL PROTECTED]> writes:
> ...
> INTERSECT

> select events.id from events,event_ref,teams,orgs,pers where 1=1 and

> (lower(events.head) like '%web%'
>   or lower(events.search) like '%web%'
>   or lower(events.ort) like '%web%'
>   or lower(events.text) like '%web%'
>   or (events.id = event_ref.event_id and event_ref.ref_id = teams.id 
> and lower(teams.name) like '%web%')
>   or (events.id = event_ref.event_id and event_ref.ref_id = orgs.id 
> and lower(orgs.name) like '%web%')
>   or (events.id = event_ref.event_id and event_ref.ref_id = orgs.id 
> and event_ref.ref_name = 'loc' and lower(orgs.ort) like '%web%')
>   or (events.id = event_ref.event_id and event_ref.ref_id = pers.id 
> and (lower(pers.name) like '%web%' or lower(pers.prename) like 
> '%web%')))

This is pretty horrid: you are generating a cross product of
events * event_ref * teams * orgs * pers and then selecting rows
multiple times out of that very large set.  No wonder you lost
patience even with a small test database.  I think you wanted
something like

...
INTERSECT (

select events.id from events where
(lower(events.head) like '%web%'
  or lower(events.search) like '%web%'
  or lower(events.ort) like '%web%'
  or lower(events.text) like '%web%')

union

select teams.id from teams where
lower(teams.name) like '%web%'

union

select orgs.id from orgs where
lower(orgs.name) like '%web%'

...

)


This is assuming that the match against event_ref isn't really
necessary, but if it is, you could make each component select be
a two-way join between event_ref and the other table.
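For example, the teams component might look something like this (column
names taken from the original query; this assumes event_id is the value the
surrounding INTERSECT compares against):

select event_ref.event_id from event_ref, teams
where event_ref.ref_id = teams.id
  and lower(teams.name) like '%web%'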

regards, tom lane




Re: [GENERAL] Speeding up Query

2001-05-14 Thread Alexander Lohse

>This is pretty horrid: you are generating a cross product of
>events * event_ref * teams * orgs * pers and then selecting rows
>multiple times out of that very large set.  No wonder you lost
>patience even with a small test database.  I think you wanted
>something like
>
>...
>INTERSECT (
>
>select events.id from events where
>(lower(events.head) like '%web%'
>   or lower(events.search) like '%web%'
>   or lower(events.ort) like '%web%'
>   or lower(events.text) like '%web%'
>
>union
>
>select teams.id from teams where
>lower(teams.name) like '%web%'
>
>union
>

Hi Tom,

this is a bit more complicated.

event_ref is the table containing the relations events <-> teams (one
event / multiple teams).
In words: teams or persons organise events; that's what
(events.id = event_ref.event_id and event_ref.ref_id = teams.id) is for.

Now, this search should be able to find all events that are
organised by teams, pers, or orgs whose name contains the
search string.

But maybe you have already given me the idea! Instead of using
plenty of ORs I should try using plenty of UNIONs.

Would it also be possible to use multiple INTERSECTs?
How do they operate on each other? Do I also use parentheses?

I am a bit in a hurry at the moment, that's why I'm writing in shorthand! ;-)

Thank you in advance,

Alex









-- 
___
Human Touch Medienproduktion GmbH
Am See 1
17440 Klein Jasedow

Alexander Lohse
Tel: (038374) 75211
Fax: (038374) 75223
eMail: [EMAIL PROTECTED]
http://www.humantouch.de




Re: [GENERAL] Speeding up Query

2001-05-14 Thread Tom Lane

Alexander Lohse <[EMAIL PROTECTED]> writes:
> Would it also possible to make multiple INTERSECTS?
> How do these operate on each other, do I also use parentheses?

Yes, if you don't want to think hard about what the precedence is
(I don't recall either) ...
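For what it's worth, the SQL standard gives INTERSECT higher precedence than
UNION, but explicit parentheses spare you from having to remember that; a
sketch with placeholder queries:

-- a, b and c stand for any selects returning a compatible column
(select id from a)
intersect
((select id from b) union (select id from c))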

regards, tom lane




Re: [GENERAL] Trigger only firing once

2001-05-14 Thread Fran Fabrizio

> Kinda hard to believe.

I know it, but that's what I am seeing. (to recap, my trigger only
fires on the first insert per connection.)

This will be kind of long; it's a spliced-together version of my psql
client session and the server log.  I drop and re-create the procedure
and trigger, then I'll connect and test, then disconnect, reconnect, and
show you what's happening.  Comments with * stars around them are my
running commentary to make following this session easier.

* First, I use the text file to drop and recreate the procedure and
trigger.  *

postgres@rusty ~$ psql monitoring < log_trigger
DROP
DROP
CREATE
CREATE
postgres@rusty ~$

* Server log shows dropping and creating (long procedure, sorry)
*

2001-05-14 11:51:12 DEBUG:  StartTransactionCommand
2001-05-14 11:51:12 DEBUG:  query: /* trigger to update the host table
for incoming records
The trigger will update the update the date and timestamps
for the host, insert a newhost if one is not found */
drop function update_host_table();
2001-05-14 11:51:12 DEBUG:  ProcessUtility: /* trigger to update the host
table for incoming records
The trigger will update the update the date and timestamps
for the host, insert a newhost if one is not found */
drop function update_host_table();
2001-05-14 11:51:12 DEBUG:  CommitTransactionCommand
2001-05-14 11:51:12 DEBUG:  StartTransactionCommand
2001-05-14 11:51:12 DEBUG:  query: drop trigger log_trigger on log ;
2001-05-14 11:51:12 DEBUG:  ProcessUtility: drop trigger log_trigger on
log ;
2001-05-14 11:51:12 DEBUG:  CommitTransactionCommand
2001-05-14 11:51:12 DEBUG:  StartTransactionCommand
2001-05-14 11:51:12 DEBUG:  query: create function update_host_table()
returns opaque
as 'declare

site_rec    record;
host_rec    record;
status_rec  record;

begin

new.tstamp := now() ;
/* check to see if we have see this site before */

select  * into site_rec
from sites s
where s.fqdn = new.fqdn ;

/* -- if we have not found the machine name we are going to
insert a new record into the sites table and set the init_contact to now
*/

if not found
then
insert into sites
values
(nextval(''sites_site_id_seq''),new.fqdn,new.site,new.region,''f'',new.tstamp)
;

/* we also have to insert a new host if this is a new site */


insert into hosts
values
(nextval(''hosts_host_id_seq''),currval(''sites_site_id_seq''),new.hostname,new.tstamp)
;

/* now update the incoming record with the new host_id and
site_id */
new.site_id=currval(''sites_site_id_seq'');
new.host_id=currval(''hosts_host_id_seq'');

else
/* we have seen the site before, update the incoming records
site_id */

new.site_id = site_rec.site_id ;

/* if we have seen this site before we need to check and see if
we have ever seen this machine before */

select * into host_rec
from hosts h
where h.hostname = new.hostname and
h.site_id = site_rec.site_id ;

/* new host */
if not found
then
insert into hosts
values
(nextval(''hosts_host_id_seq''),site_rec.site_id,new.hostname,new.tstamp)
;
new.host_id = currval(''hosts_host_id_seq'');
else
new.host_id = host_rec.host_id ;

update hosts
set last_contact = new.tstamp
where hosts.host_id = new.host_id ;
end if ;

/*  update sites
set last_contact = new.tstamp
where sites.fqdn = new.fqdn ; */
end if ;


/* now we are going to update the status table with the new record */

select * into status_rec
from status s where
s.site_id = new.site_id and
s.host_id = new.host_id and
s.product = new.product and
s.class = new.class and
s.subclass = new.subclass ;

/* new monitored process */

if not found
then
insert into status
values (new.site_id,
new.host_id,
new.product,
new.class,
new.subclass,
new.status,
new.msg,
new.tstamp);
else
update status
set status = new.status,
tstamp = new.tstamp
where site_id = new.site_id and
host_id = new.host_id and
product = new.product and
class = new.class and
subclass = new.subclass ;
end if ;
return new;
end ;'
language 'plpgsql';
2001-05-14 11:51:12 DEBUG:  ProcessUtility: create function
update_host_table()
returns opaque
as 'declare

site_rec    record;
host_rec    record;
status_rec  record;

begin

new.tstamp := now() ;
/* check to see if we have see this site before */

select  * into site_rec
from sites s
where s.fqdn = new.fqdn ;

/* -- if we have not found the machine name we are going to
insert a new record into the sites table and set the 

Re: [GENERAL] COPY locking

2001-05-14 Thread John Coers

I rebuilt with these changes and am seeing a 30-40% improvement in average
performance for my high-contention, high-volume copy case.  I still need to
do some more testing, but this really seems to help.

Thanks a lot Tom!

Tom Lane wrote:
> 
> I have committed some changes into development sources that should
> reduce contention overhead when multiple backends are inserting into the
> same table.  If you're interested in trying it out, you could pull the
> latest sources from our CVS server, or try back-patching the changes
> into 7.1.*.  The relevant changes are in these files:
> 
> 2001-05-12 15:58  tgl
> 
> * src/: backend/access/heap/heapam.c, backend/access/heap/hio.c,
> backend/storage/buffer/bufmgr.c, backend/storage/buffer/localbuf.c,
> include/storage/bufmgr.h: Modify RelationGetBufferForTuple() so
> that we only do lseek and lock when we need to move to a new page;
> as long as we can insert the new tuple on the same page as before,
> we only need LockBuffer and not the expensive stuff.  Also, twiddle
> bufmgr interfaces to avoid redundant lseeks in
> RelationGetBufferForTuple and BufferAlloc.  Successive inserts now
> require one lseek per page added, rather than one per tuple with
> several additional ones at each page boundary as happened before.
> Lock contention when multiple backends are inserting in same table
> is also greatly reduced.
> 
> 2001-05-10 16:38  tgl
> 
> * src/: backend/commands/sequence.c,
> backend/storage/buffer/bufmgr.c, backend/storage/smgr/md.c,
> backend/storage/smgr/mm.c, backend/storage/smgr/smgr.c,
> include/storage/smgr.h: Avoid unnecessary lseek() calls by cleanups
> in md.c.  mdfd_lstbcnt was not being consulted anywhere, so remove
> it and remove the _mdnblocks() calls that were used to set it.
> Change smgrextend interface to pass in the target block number (ie,
> current file length) --- the caller always knows this already,
> having already done smgrnblocks(), so it's silly to do it over
> again inside mdextend.  Net result: extension of a file now takes
> one lseek(SEEK_END) and a write(), not three lseeks and a write.
> 
> regards, tom lane

-- 
John Coers            Intrinsity, Inc.
[EMAIL PROTECTED]  Austin, Texas




[GENERAL] New article mentions Postgres...

2001-05-14 Thread Brent R. Matzelle

This article could be huge for those looking to sell Postgres to
a company.  I'm printing out a copy for myself ;)

Open Source Code: A Corporate Building Block (ZDNet)
http://dailynews.yahoo.com/h/zd/20010514/tc/open_source_code_a_corporate_building_block_1.html

Brent





[HACKERS] Internet is putting lot of competition fire & heat under Microsoft SQL Server

2001-05-14 Thread universe_made_of_atoms

The Internet is putting a lot of competitive fire and heat under Microsoft
SQL Server.

Your boss will tell you: "Now that we have a high-speed Internet connection,
why do you need commercial SQL servers? Simply use your mouse, click, and
download the open-source PostgreSQL, InterBase or MySQL."

There will be no Oracle or Microsoft Corporation in about 2-3 years' time
(as the Internet is a very big threat to them). Microsoft Corp will be dead
meat because of lightning-speed broadband Internet.

If you are running MS Windows 2000/NT, then here is the chance for you
to try this superb open-source SQL RDBMS.

PostgreSQL is now packaged with Cygwin: simply download the Cygwin
setup.exe and double-click on it to install and run PostgreSQL on
MS Windows 2000/NT.

Installing and running pgsql on MS Windows is extremely easy now, and it
is quite rock solid on a Windows 2000/NT desktop or server.

PostgreSQL is the "Linux" of database systems - very powerful and
reliable.

Everybody is asking, "What is the equivalent of Linux in SQL databases?"
The answer is the "PostgreSQL" RDBMS server.

Please try and go to http://sources.redhat.com/cygwin/download.html
to download cygwin (which has pgsql)

After installing the cygwin, read the user guides at
http://www.postgresql.org

Your boss will say: "Purchase commercial support for PostgreSQL from
GreatBridge at http://greatbridge.com or from http://www.mysql.com.
Microsoft products cost too much money! Why do we need these MS SQL
servers anymore?"

Open-source SQL RDBMS and their rankings are -
Ranked 1st Gold Medal   : PostgreSQL http://www.postgresql.org
Ranked 2nd Silver Medal : Interbase SQLserver
http://www.borland.com/devsupport/interbase/opensource
Ranked 3rd Bronze Medal : MySQL http://www.mysql.com
Ranked 4th  : Many others 








[GENERAL] Re: Question on Starting Postgres for the first time

2001-05-14 Thread Lee Harr

On Sat, 12 May 2001 13:00:35 -0300, David Stevenson <[EMAIL PROTECTED]> 
wrote:
> I am a new user of PostgreSQL and Linux.  I have my Linux up and running and
> my PostgreSQl semi-configured.  I can get into PostgreSQL using super user
> on root and then super user into postgres.  The problem is when I try to run
> psql and createdb from my home directory.  When I do that, I get the
> following messages

I think first you must, as the superuser

createuser david

and then:

> 
> $ createdb david
> psql: FATAL 1:  SetUserId: user 'david' is not in 'pg_shadow'
> 
> $ psql
> psql: FATAL 1:  Database "david" does not exist in the system catalog.
> 
> Not sure what is going on here, but I think someone out there does.  Can you
> help.
> 
> Thanks
> Dave
> 
> 
> 
> 




[GENERAL] How to save images in a table.

2001-05-14 Thread Stuart Foster

I'm looking for a way to save jpg's in a postgresql table. 
Does anyone know how I can do this?

TIA 
Stuart




Re: [GENERAL] Inheritance & Referential Integrity

2001-05-14 Thread Stephan Szabo

On Mon, 14 May 2001, Andrew Snow wrote:

> I note that it still is not possible (v7.1.1) to have a table with a foreign
> key check on an inherited column for rows in child tables.  Will this ever
> be supported, and what is the status of work on this feature?

It will eventually be supported.  There's a bit of support work that needs
to come first (inherited unique constraints) and a fair amount of work on
the FK code itself.  I think that getting foreign keys to work completely on
normal tables probably happens first (there are still some problem cases
with deferred constraints, and the constraints really should be referencing
OIDs and shouldn't use a self-conflicting lock for locking rows).





[GENERAL] Contraints in postgresql ?

2001-05-14 Thread snpe

Hello,
I want the following:

a) add a constraint (primary and foreign key) to an existing table
b) temporarily disable a constraint and enable it later

Is this possible in PostgreSQL?

Haris Peco
[EMAIL PROTECTED]




[GENERAL] Wal logs continued...

2001-05-14 Thread webb sprague

Earlier I posted about my problems with the WAL logs eating up all my
disk space.  I tried the solutions offered (checkpointing after a big copy
and shortening the time between flushes), and they helped somewhat.

Unfortunately, the problem now seems to happen when I vacuum-analyze after a
big delete.  Even if the vacuum takes more than 2 minutes (the amount of time
between flushes that I set in postgresql.conf), the logs continue to grow.
Currently the vacuum has been running for about 20 minutes after a delete of
about 2,000,000 records out of 5,000,000, and the WAL logs are at about 1.5 GB.

Versions: RH 6.2, PG 7.1.1

Is there some way to set the number of log files kept before a
flush/checkpoint?  We are going to a bigger machine next week, but this is
quite an inconvenience, and it would probably benefit the DB as a whole to
place some limit on the size of the WAL.

I would code it myself, but I can't yet (next year, after I finish a couple
of Stevens' books).  If there is anything else I can do to help, please
let me know.

Thanks,
W





Re: [GENERAL] Contraints in postgresql ?

2001-05-14 Thread Stephan Szabo


On Mon, 14 May 2001, snpe wrote:

> Hello,
>   I want next :
> 
>   a) add constraint (primary and foreign) in existing table
>   b) temporary disable constraint and enable later
> 
> Is it possible in Postgresql ?

Sort of...
You can add foreign key constraints using ALTER TABLE ADD CONSTRAINT,
and unique constraints with CREATE UNIQUE INDEX (primary keys are
effectively unique constraints where all the columns are not null
-- if you need to change the columns to not null, that's a bit more
involved).  AFAIK, you can't really "disable" constraints, although
you can remove them (drop the index for a unique constraint; you have to
manually drop the triggers created by a foreign key constraint) and re-add
them.  However, if you've violated the constraint while it was gone, you
won't be able to re-add it.
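For example (the table and column names here are invented for illustration):

-- back the referenced column with a unique index first
-- (a primary key is effectively this plus NOT NULL)
CREATE UNIQUE INDEX customers_id_key ON customers (id);

-- then add a foreign key to the existing referencing table
ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customers (id);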






Re: [GENERAL] Re:

2001-05-14 Thread Thalis A. Kalfigopoulos

On Mon, 14 May 2001, Gregory Wood wrote:

> > Could u just tell me if it is possible to have a Windows Client and the
> Server running on Linux and having Postgres and the two communicate thru
> something like the ODBC.
> 
> Yes, although it's preferable to use a native library.

Is ODBC that bad? Which is better in terms of speed and robustness, ODBC or
JDBC? I'd be interested in any comments, as I'm thinking of deploying Pg on
my Linux machine and having Windows machines talk to it.

TIA,
thalis





[GENERAL] Re: Bug with timestamp !!! (resolve)

2001-05-14 Thread Alexander Dederer

> > create table tmp (create_datetimestamp);
> > 
> > #insert into tmp values('2001-04-01 02:29:52');
> > INSERT 1021715 1
> > 
> > #select * from tmp;
> >   create_data
> > 
> >  2035-05-29 01:33:36-05
> > (1 row)

Before starting PostgreSQL on FreeBSD, set the TZ (time zone) environment
variable.  This command works for me:
TZ=+00; export TZ
or
TZ=+00 pg_ctl -o "-i -B 512" -D 







[GENERAL] Authentication

2001-05-14 Thread Christine Kluka

I am trying to connect to postgres from a cgi script.  In order for it to connect to 
postgres,
I need to create a new postgres user that matches the Unix owner of my Web files.  I 
don't get an error on create user, but I cannot find a pg_shadow file.  Is the 
pg_shadow file supposed to be in my pgsql directory or is it generated by a command 
somewhere?
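For reference, pg_shadow is a system catalog inside the database rather than
a file on disk, so a quick way to check whether the CREATE USER took effect
is to query it as the database superuser:

-- run from psql as the postgres superuser; lists existing database users
SELECT usename, usesysid FROM pg_shadow;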

I'm using my backup mail account to send this, hope it'll work...





[GENERAL] sql question

2001-05-14 Thread u95886230

Please help; I am trying to write an SQL statement with no success, as I am
just starting out with SQL.

I have a table with 3 columns: Account#, OrderType and Date.
Example of data:
Account# ¦ Ordertype ¦ Date
1        ¦ A         ¦ April
1        ¦ B         ¦ May
1        ¦ B         ¦ May
2        ¦ B         ¦ April
2        ¦ B         ¦ May
2        ¦ C         ¦ May
3        ¦ C         ¦ May


I need to write a select that will show me the totals of EACH type for EACH
account AND the total orders placed for a SPECIFIC month, e.g. show me the
results for May:

account ¦ TotA ¦ TotB ¦ TotC ¦ Total
1       ¦ 0    ¦ 2    ¦ 0    ¦ 2
2       ¦ 0    ¦ 1    ¦ 1    ¦ 2
3       ¦ 0    ¦ 0    ¦ 1    ¦ 1

I can use temp tables, but I need a solution written as basically as possible
so I can understand it (all in the form "select this from that").
Any help would be fantastic, as I am completely stuck and have been trying
for about a week.
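One common shape for this kind of query is a grouped select with conditional
counts. A sketch only, assuming the table is called orders and the columns
are named account, ordertype and date (adjust the names to your real schema):

SELECT account,
       SUM(CASE WHEN ordertype = 'A' THEN 1 ELSE 0 END) AS tota,
       SUM(CASE WHEN ordertype = 'B' THEN 1 ELSE 0 END) AS totb,
       SUM(CASE WHEN ordertype = 'C' THEN 1 ELSE 0 END) AS totc,
       COUNT(*) AS total          -- total orders for the account that month
FROM orders
WHERE date = 'May'                -- pick the month you want totals for
GROUP BY account;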

thanks
[EMAIL PROTECTED]
 







[GENERAL] to BLOB or not to BLOB

2001-05-14 Thread Aristide Aragon

Hello

I am going to write an application that writes big amounts of plain text into
a database.
I thought of using the text type for this, but I don't know if it has a
maximum length, and given that these will be very long texts I started
wondering if these would have to be blobs... but they aren't binary.
So, does the text data type have a maximum length? Do BLOBs? What should I
use? For the time being, at least, I won't be searching these texts with a
search engine or anything... but if I were to be, what considerations should
I take into account when designing the tables?

Thanks in advance

Aristide Aragon




Re: [GENERAL] Wal logs continued...

2001-05-14 Thread Tom Lane

Barry Lind <[EMAIL PROTECTED]> writes:
> Given what you say below, I think there is a documentation bug then.  In 
> Section 9.3 of the Administrators Guide it says:

> "After a checkpoint has been made, any log segments written before the 
> redo record are removed, so checkpoints are used to freedisk space 
> in the WAL directory."

Hmm, it should have said "undo record" ...

regards, tom lane
