Re: [GENERAL] SQL equivallent to "\ds" in psql

2001-10-17 Thread Mitch Vincent

Use psql -E and it will give you the SQL for the backslash commands..
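
For sequences specifically, the query behind \ds boils down to something
roughly like this (give or take a few columns, depending on your version):

SELECT relname FROM pg_class
WHERE relkind = 'S'          -- 'S' marks sequences in pg_class
  AND relname !~ '^pg_'      -- skip the system relations
ORDER BY relname;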

-Mitch

- Original Message - 
From: "Arcady Genkin" <[EMAIL PROTECTED]>
To: "PostgreSQL" <[EMAIL PROTECTED]>
Sent: Wednesday, October 17, 2001 4:35 PM
Subject: [GENERAL] SQL equivallent to "\ds" in psql


> Where does Postgres store information about the sequences?  I tried
> looking in the tables produced by "\dS", but could find no references
> to the sequences. :(
> 
> What I want is an SQL equivallent of "\ds" command, which will return
> a list of all sequences in the current database.  I need this for a
> periodically run script so that I could keep an eye on all sequences
> in the database.
> 
> Many thanks,
> -- 
> Arcady Genkin
> 
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
> 
> http://www.postgresql.org/users-lounge/docs/faq.html
> 


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [GENERAL] Does Postgresql 7.1.x Still Have 8K Maximum Field Size?

2001-09-15 Thread Mitch Vincent

The 8k (well, BLCKSZ) row limit was eliminated in 7.1, so it's been gone for a
little while now..
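
In 7.1 a plain text column will happily hold values well past 8k (TOAST moves
big values out of the main row for you), so something like this -- made-up
table, of course -- is all you need:

CREATE TABLE task_progress (
    task_id  integer,
    notes    text    -- can grow well beyond 8k in 7.1
);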

-Mitch

- Original Message -
From: "Philip" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, September 15, 2001 6:57 AM
Subject: [GENERAL] Does Postgresql 7.1.x Still Have 8K Maximum Field Size?


> Hello all.
>
> I will be creating one of my first web applications and would like to use
Postgresql as the database on a Red Hat 6.2 or 7.1 system with Apache and
either ColdFusion or PHP.
>
> Anyway, the application will have a text field where employees can update
their progress on assigned tasks.  By the time a task is completed, the
field may exceed 8k.  I am not a person who likes to compile from tarball's
because training non-Linux experienced fellow system administrators to use
RPMs is enough.  Updates are easier as well.
>
> The question:  Is it possible to have a text field greater than 8k but
only using RPMs -- not compiling and/or chaining via a separate table
(breaking something into 8k chunks)?  If so, how?
>
> Any assistance you can provide is appreciated.  Thanks.
>
>
> __
> Your favorite stores, helpful shopping tools and great gift ideas.
Experience the convenience of buying online with Shop@Netscape!
http://shopnow.netscape.com/
>
> Get your own FREE, personal Netscape Mail account today at
http://webmail.netscape.com/
>
>
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
>
> http://archives.postgresql.org
>


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [GENERAL] Case sensitivity issue

2001-09-14 Thread Mitch Vincent

SELECT * FROM apples WHERE lower(color) LIKE '%red%';

Make sure that whatever 'red' might be is always lower case (you can pass
it through lower() too, of course)..

-Mitch


- Original Message -
From: "Michael Gay" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, September 10, 2001 12:02 AM
Subject: [GENERAL] Case sensitivity issue


> If I am doing a command such as
>
> select * from Apples where color like '%red%';
>
> how do I make it case insensitive?
>
> Thanks.
>
> mike
>
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
>


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] Query

2001-09-13 Thread Mitch Vincent

> I'm trying to execute the following query:
> "SELECT TOP 10 * FROM table1"

SELECT * FROM table1 LIMIT 10

 -- might give you what you're looking for.. I've not seen TOP in PostgreSQL,
though I could guess where it comes from (SQL Server or Access?) -- TOP seems
to imply an order, so you might want to ORDER BY something in the query before
you LIMIT ...
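
Something like this, guessing at a column name:

SELECT * FROM table1 ORDER BY created_on DESC LIMIT 10;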

-Mitch



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] Ridicolus Postgresql review

2001-09-11 Thread Mitch Vincent

> That's not nonsense at all, you can't just go around and redefine the
> language used in the database world at your own whims.

"Stored Procedure".. Hmm, that seems to me that the definition of that would
be "a procedure that's stored somewhere". When talking about stored
procedures and databases I would assume that the stored procedure would be
some database procedure (anything you can do with or in a database could be
seen as a procedure, IMHO), that's stored in the said database...

"Stored Procedure" is a very ambiguous term and probably needs to be treated
as such.. Unless there is a written definition somewhere that outlines
exactly how a stored procedure has to return things then I think PG's stored
procedures have the right to carry the name...

> Everybody I know employed in the database arena thinks of a stored
procedure
> as something that may return result sets. In PostgreSQL it cannot and
> does therefore not fit the term stored procedure.

What do they base that on though? The inability to return a record set from
a PG stored procedure is a limitation, no doubt, but not cause to say that
PG doesn't support stored procedures..

> What is confusing is the PostgreSQL use of the term "stored
> procedure". To me it sounds like bad marketing, something we really
> shouldn't need in the open source world.

I think PG is using the term as well as anyone could use such an
ambiguous term.. I think it's fair to list the limitations of PG stored
procedures when discussing feature sets but I don't think it's fair to say
that PG doesn't have stored procedures as clearly it does!


-Mitch



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] moving char() to varchar()

2001-09-07 Thread Mitch Vincent

> 1. I thought the SQL spec required varchar() not to pad.  Is it
> just that, because of the way pg_dump saved the char() data (as
> blank-padded) that the varchar() field preserves the padded data?

A dump from a char() field keeps the blank padding even in the dump file, I
assume, so when you went to import it you were importing data with the
trailing spaces attached..
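
If the padding did come along for the ride, a one-time trim after the import
should clean it up (table and column names made up here):

UPDATE mytable SET mycol = rtrim(mycol);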

> 2. I could _swear_ I did something very similar to this some
> time ago (version 6.5.x?  something like that?).  Am I just imagining
> things?  (I'm perfectly prepared to accept that, by the way.  My
> memory is about as reliable these days as the DIMM I took out of my
> PC last week.  That's why I need a good DBMS like postgres!)

I can't say one way or another on that.. But I feel your pain on the memory
problems :-)

-Mitch



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [GENERAL] Deployment of PostgreSQL Applications

2001-08-31 Thread Mitch Vincent

> 2) More importantly, is it possible to prevent a customer from peeking
into
> said database once it is deployed on their machine?  A large part of what
> makes my application proprietary is the data model in the database, and
it'd
> be tough to maintain a competative edge when everyone can see exactly how
I
> do things in the database by logging into their postgres account, adding
> some users and changing permissions on their machine.  I really need to
make
> sure the database is bulletproof before I can begin deployment.

If the people that have your application have physical access (or even
remote superuser access) to the machine on which the database resides, then
there is little you can do to prevent a knowledgeable person from getting
anything he/she wants out of the database -- the PostgreSQL security measures
can be circumvented by reading the information right off the disk. I don't
know how hard or easy it would be to get meaningful information that way, but
it's always going to be possible when people have superuser/physical access
to the machine.

-Mitch



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [GENERAL] PL/java?

2001-08-28 Thread Mitch Vincent

> Yup.  We wrote the client that is accessing the database.  It's using
> PHP, and we don't even *use* transactions currently.  But that isn't the
> problem.  From what I gather so far, the server is under fairly high
> load (6 right now) so vacuuming the database (520MB in files, 5MB dump)
> takes a *long* time.  While it's vacuuming, anything using that database
> just has to wait, and that's our problem.

Well, every query is in its own transaction unless you explicitly say BEGIN
and END -- so you technically are using transactions...
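
In other words (hypothetical table):

-- implicit: each statement is its own transaction
UPDATE jobs SET done = 't' WHERE job_id = 1;

-- explicit: both statements commit or roll back together
BEGIN;
UPDATE jobs SET done = 't' WHERE job_id = 1;
UPDATE jobs SET done = 't' WHERE job_id = 2;
END;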

I have a 14x larger (70 meg, dumped) database running on a dual PII 400 that
only takes 2 minutes or so to vacuum analyze (lots-o-indexes too). I guess
that's a long time in some settings, but we only do it once a day..

> Actually, on a whim, I dumped that 520MB database to it's 5MB file, and
> reimported it into an entirely new DB.  It was 14MB.  We vacuum at least
> once an hour (we have a loader that runs every hour, it may run multiple
> concurrent insert scripts).  We also use vacuum analyze.  So, I really
> can't see a reason for it to balloon to that horridly expanded size.

> Maybe stale indexes?  Aborted vacuums?  What on earth would cause that?

I've read that you can take the size of the data outside the database,
multiply it by 6 or so, and get the approximate size of the same data stored
in the database... Obviously that estimate isn't holding in your case..

Every UPDATE and DELETE leaves the tuple that was updated or deleted until
vacuum is run but I don't see how that would happen on a fresh import into a
newly created DB...

-Mitch



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: MySQL's (false?) claims... (was: Re: [GENERAL] PL/java?)

2001-08-25 Thread Mitch Vincent


- Original Message -
From: "Sean Chittenden" <[EMAIL PROTECTED]>
To: "Bruce Momjian" <[EMAIL PROTECTED]>
Cc: "Gowey, Geoffrey" <[EMAIL PROTECTED]>; "'Dr. Evil'"
<[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Saturday, August 25, 2001 11:48 PM
Subject: Re: MySQL's (false?) claims... (was: Re: [GENERAL] PL/java?)




---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [GENERAL] maximum query length

2001-08-22 Thread Mitch Vincent

No -- that limit is gone in newer releases..

- Original Message - 
From: "jose" <[EMAIL PROTECTED]>
To: "Postgres" <[EMAIL PROTECTED]>
Sent: Wednesday, August 22, 2001 3:51 AM
Subject: [GENERAL] maximum query length


> Hi all,
> 
> I have a problem related with the maximum query length
> ERR: query is too long.  Maximum length is 16382
> I'm using PostgreSQL 6.5.3 and python
> Is this limit in the newer releases of PostgreSQL too?
> Thanks for any help
> 
> Jose Soares
> 
> 
> 
> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster
> 


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] Re: postgres slower than grep?

2001-07-10 Thread Mitch Vincent

> 1. I find about 50% database storage overhead in this case.  That's not
completely silly, considering this is structured data, but seems a little
high.  I don't know >the internal structures well enough to really see
what's happening.

Hmm, the PG docs say to expect data stored in the database to take up about
six times (600% or so) the space the same data takes as a flat file..

see: http://postgresql.bteg.net/docs/faq-english.html#4.7

> 2. Why would it be faster than grep?  This has to match structured data,
in this case varchar, and not just bytes.  It has to worry about
transactions and logs, not >just a stream of data.  Besides, in my tests it
is not *that* slow (3 sec, compared with 1/2).  Dunno what's up with your
system.

Sure, I'd expect grep to find a string in a semi-large text file faster than
PostgreSQL -- there is a hell of a lot less overhead with grep! :-)

> 3. As you said: With an index it rocks, easily beating grep.  Use an
index - it's your friend :-)

Yep yep!
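
For the archives, that part looks something like this (table and column names
made up):

CREATE INDEX message_subject_idx ON message (subject);

-- anchored patterns can use the index (locale permitting), unanchored
-- ones like '%foo%' still can't:
SELECT * FROM message WHERE subject LIKE 'postgres%';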

-Mitch


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [GENERAL] Re: [DOCS] QUERY - Including the new LDP PostgreSQL HOWTO in the main PostgreSQL distribution

2001-07-09 Thread Mitch Vincent

> For a long time, I thougt HOWTO is docuement for "quick start". Isn't it?

I'm not sure there is a universally understood meaning of what a HOWTO
actually is. It's an explanation of how to do something, I guess... Generally
I've found (as in your example) that it's about putting software together and
making it work, like Apache/SSL/FrontPage/something -- for that there is no
single document that describes how to set it all up. I really don't think the
PostgreSQL installation guide could get much simpler -- why re-invent the
wheel here?

-Mitch


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [GENERAL] How should I phrase this?

2001-07-08 Thread Mitch Vincent

An alternative, though I'm not sure it's better in any way :

DELETE FROM generators WHERE started + '30 minutes'::interval <= now();
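
Or, keeping the column by itself on one side of the comparison, which may give
an index on "started" a better shot:

DELETE FROM generators WHERE started <= now() - '30 minutes'::interval;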

-Mitch

- Original Message -
From: "Paul Tomblin" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, July 08, 2001 2:12 PM
Subject: [GENERAL] How should I phrase this?


> I've got a table that I insert a value and a timestamp "now()".  Then I
> want to delete entries that are older than 30 minutes old.  After some
> experimentation, I found the following works, but it looks awkward and
> backwards.  Is there a better, more readable way of phrasing this or
> rewriting it?
>
> DELETE
> FROM   generators
> WHERE  age(now(),started) > interval '30 minutes'
>
>
> --
> Paul Tomblin <[EMAIL PROTECTED]>, not speaking for anybody
> "Nobody can be told what the dominatrix is, they have to see it for
themselves"
>
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
>
> http://www.postgresql.org/users-lounge/docs/faq.html
>


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[GENERAL] Re: Index on a function(field)

2001-05-11 Thread Mitch Vincent

> Is it possible to create an index using a function(field) sintaxis ?

As far as I know you can -- I have lots of indexes on lower(varchar).. There
may be limitations though so I'll let someone else have the final word :-)
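
The syntax is just this (made-up table and field):

CREATE INDEX people_lastname_lower_idx ON people (lower(lastname));

-- the query has to use the same expression to take advantage of it:
SELECT * FROM people WHERE lower(lastname) = 'smith';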

-Mitch



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[GENERAL] Re: Using , instead of . for thousands..

2001-05-10 Thread Mitch Vincent

Ok, over my head -- someone has schooled me.. My apologies for the list
noise.

-Mitch


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[GENERAL] Re: Using , instead of . for thousands..

2001-05-10 Thread Mitch Vincent

1,000 is one thousand, right?

1.000 is one, right?

Since the decimal isn't just an arbitrary separator when we're speaking
of decimal numbers (1.00 is certainly a lot different than 1000.00) I guess
I don't follow what the problem is.. How can you have one thousand
represented as 1.000, unless you're storing it as a string for some reason
(say, in a varchar field instead of a float or something)?
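
If it's only a display issue, to_char() can format numbers with whatever
separators your locale uses ('G' is the group separator, 'D' the decimal
point), something like:

SELECT to_char(1234.50, '9G999D99');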

Of course the question might be flying right over my head, sorry if
that's the case :-)

-Mitch

- Original Message -
From: "mazzo" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, May 10, 2001 4:02 PM
Subject: Using , instead of . for thousands..


> Hi all...i'm converting an access 97 database to pg and i have a little
> problem...since i'ìm in italythousand use commas as a separator and
not
> dots (eg 1,000 not 1.000)... is there a function to instruct postgresql to
> use the comma instead of the dot?? I would be very usefull because it
would
> avoid me to open the files i dump with access and find/replace all the
> commas with dots...
> Btw ... i'm on some testing and this is my situation:
> I have a db that has about 10 tables of which 1 has 20+ records and
> others have about 3now this db is in access 97 on a dual pII 600
> scsi hd 512 mb RAM ecc ecc (a good machine) now...for testing i'm
using
> a celeron 500 with 128 mb ram and udma 66 hd and after having exported the
> accessdb into postgres i can see that to open a table with the access
> 97/dual pIII machine takes me about 10 secs...the same table opened on the
> cel with postgres takes about 5 secs...am i crazy or can i really gain so
> much speed on pg..??!!??
> Thanks for your answers and sorry if my english is not really good..
>
>
>
>
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
>
> http://www.postgresql.org/search.mpl
>


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[GENERAL] Re: Query not using index

2001-05-10 Thread Mitch Vincent

Does that query really return 9420 rows? If so, a sequential scan is
probably better/faster than an index scan..
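
If you want to see what the index scan would cost, you can temporarily
discourage the seq scan and compare the two plans, something like:

SET enable_seqscan TO off;
EXPLAIN SELECT cart_row_id FROM pa_shopping_cart WHERE order_id = 20;
SET enable_seqscan TO on;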

-Mitch

- Original Message -
From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, May 10, 2001 9:22 AM
Subject: Re: Query not using index


> I vacuum every half hour!  Here is the output from EXPLAIN:
>
> NOTICE:  QUERY PLAN:
>
> Seq Scan on pa_shopping_cart  (cost=0.00..7237.94 rows=9420 width=296)
>
> EXPLAIN
>
> Thanks!
>
>
> On Thu, 10 May 2001 18:19:16 + (UTC),
> [EMAIL PROTECTED] (Stephan Szabo) wrote:
>
> >
> >Have you vacuum analyzed recently and what does
> >explain show for the query?
> >
> >On Thu, 10 May 2001 [EMAIL PROTECTED] wrote:
> >
> >> Here's the query:
> >>
> >> SELECT
> >> cart_row_id
> >> FROM
> >> pa_shopping_cart
> >> WHERE
> >> order_id = 20;
> >> [ ... ]
> >> There is an index on:
> >> just order_id
> >> just order_id and cart_row_id
> >> and a PK on cart row_id
> >>
> >> I don't understand why it's not using one of these indexes!
> >> Please post your responses to the group - my email is down.
> >
> >
> >---(end of broadcast)---
> >TIP 2: you can get off all lists at once with the unregister command
> >(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
>
>
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
>
> http://www.postgresql.org/users-lounge/docs/faq.html
>


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[GENERAL] Re: very odd behavior

2001-05-10 Thread Mitch Vincent

desc is a reserved keyword (it's used in ORDER BY to indicate descending
order)..

You can use keywords as field names, though you have to put them in quotes (as
you found out!).
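
You'll have to keep quoting it everywhere it's used, too:

SELECT "desc", def, data FROM bla;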

-Mitch

- Original Message -
From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, May 10, 2001 2:08 PM
Subject: very odd behavior


> I have 7.1
>
> Can someone take a look the following
> and tell me why I'm getting errors?
> I'm completely baffled!
> 
>
> what=> create table bla(desc text,def text,data text);
> ERROR:  parser: parse error at or near "desc"
> what=> create table bla("desc" text,def text,data text);
> CREATE
> what=>
>
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
>


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[GENERAL] Re: underlying structure: varchar vs. text

2001-05-04 Thread Mitch Vincent

> I'm trying to store anywhere from a few words to a half page of text as a
> field in my table. A fellow db programmer told me that better than using a
> varchar is to break the message up into n pieces of size m (let's say
m=100),
>
> and make n (or n+1) varchar(m)s, and relate them together.

If the text needs to be stored together, there isn't anything wrong with
using a text type field. Keep in mind that in 7.0.3 there is an 8k row limit
(unless you increased it at compile time) and that indexing large text values
directly isn't really practical in any PostgreSQL (though there is some stuff
in contrib that can help)..

> He is worried that varchars allocate and hold space.

Do you mean many varchar fields will take up more space than a single text
field holding the same information? I don't know for sure but I'd say that's
true -- it's probably a pretty small difference though (well, depending on
what the total amount of data we're talking about is).

> I'm not sure if that is actually more efficient, nor do I know the limit
> of how big you can make a varchar.

I think I remember Tom saying it was around 2100 bytes. I could be pretty
far off on that though.

Oh -- Update to 7.1 if you can, it's *really* nice...

-Mitch


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[GENERAL] Re: Newbie Question

2001-05-04 Thread Mitch Vincent

You *really* should go glance over the manual.. There is the field type
SERIAL and things called sequences, both could be what you're talking
about... The rest I'll leave up to you :-)
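
For instance (made-up table):

CREATE TABLE items (
    item_id  SERIAL PRIMARY KEY,   -- creates items_item_id_seq behind the scenes
    name     varchar(50)
);

INSERT INTO items (name) VALUES ('widget');   -- item_id fills itself in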

Good luck!!!

-Mitch

- Original Message -
From: "Clay & Judi Kinney" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, May 01, 2001 8:34 AM
Subject: Newbie Question


> How do I create an autoincrement field in a postgresql table???
>
> What are the correct field type and parameters
>
> Any help would be appreciated.
>
> Thanks
>
> Clay
>
>
>
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster
>


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[GENERAL] Re: Ideal hardware - OS Wars are silly.

2001-05-03 Thread Mitch Vincent

Uhh.. OS wars are silly. Use what ever OS you like.

We should discuss PostgreSQL here, not operating systems (at least not in
the classic flame-war style)..

*sigh*

-Mitch


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[GENERAL] Re: stored procedures

2001-05-03 Thread Mitch Vincent

> So, I am not really sure what is the benefit of writing logic inside the
DB.
> Is there a performance benefit compared to processing via PHP?

One key benefit aside from anything else would be the ability to call
the stored procedure no matter how you were interfacing to the database.
I've found this useful on many occasions.
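
Something small like this (plpgsql, made-up logic, and assuming the plpgsql
language has been added to the database with createlang) is callable from PHP,
psql, a C app, whatever:

CREATE FUNCTION add_tax(numeric) RETURNS numeric AS '
DECLARE
    amount ALIAS FOR $1;
BEGIN
    RETURN amount * 1.07;
END;
' LANGUAGE 'plpgsql';

SELECT add_tax(100.00);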

> I do not really see the use for me to separate logic from the other logic,
> 'cause then you'll have to worry about 2 sources!

Sort of, it really depends on exactly what you're doing. Doing things in
the database might not be the best thing for you..

> (I usually have a DB only used by that specific application, and if
> not it is at least the same language, so global function are stored
> in shared code libraries.)

What if you write a web app and a windows app that connects to the same
database? Storing logic in the database allows both to share the code... One
of several uses, of course.

> I see this is a powerful feature and it seems to make sense, but when
> I scan across my apps, currently using MySQL, I am having a hard time
> to imagine where I could use it.

It might not benefit you at all -- no one says you have to use it.. :-)

> It would help me lot, if anyone could help me here to jump on that train.
> Maybe one can tell me what he does with this opporunity.

Hope I helped, I'm sure others have more (and probably better) examples
for you..

-Mitch



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[GENERAL] Re: last comma inside "CREATE TABLE ()" statements

2001-04-22 Thread Mitch Vincent

> I suppose it isn't a major problem, but enforcing strict grammar
> helps to show up inadvertent errors.  Suppose I have a set of schema
> building files for a whole system; the way I do things, there may be fifty
> or more files, one per table.  If one of these gets corrupted in editing
> (perhaps a line gets deleted by mistake) it would be nice to know about it
> through a parser error. Of course, an error may be such that the parser
> won't detect it, but why remove protection by gratuitously departing from
> the standard?

I agree -- while allowing it wouldn't be a huge problem, it's a matter of
following the rules.. I don't see any reason why we can't expect users to
follow the proper syntax rules.. I missed the first post so I don't know how
the person who posted this was actually putting the comma there -- perhaps
there was a good reason for it..

Just my $0.02 worth..

-Mitch


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[GENERAL] Re: versioning question

2001-04-18 Thread Mitch Vincent

7.1 is a release (the latest), 7.1rc4 is a *Release Candidate*.

-Mitch
Software development : 
You can have it cheap, fast or working. Choose two.

- Original Message - 
From: "Matthew Hixson" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, April 18, 2001 1:28 PM
Subject: versioning question


> Which is the most recent version of Postgres, 7.1 or 7.1rc4 ?
>  Thanks,
>   -M@
> 
> --
> There are more things in heaven and earth,
> Horatio, than are dreamt of in your philosophy.
> 
> 
> 
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
> http://www.postgresql.org/search.mpl
> 


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



[GENERAL] Re: Database Name Case Sensitivity

2001-04-05 Thread Mitch Vincent

I'm not sure about database names, but table and field names can be upper
case (or contain upper case characters) -- you just have to reference the
table/field in double quotes..

Example :
SELECT * FROM "FoBaR";

Hope that helps.

-Mitch
Software development :
You can have it cheap, fast or working. Choose two.
- Original Message -
From: "Brian T. Allen" <[EMAIL PROTECTED]>
To: "Shaw Terwilliger" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Thursday, April 05, 2001 3:53 PM
Subject: Re: Database Name Case Sensitivity


> I just ran into that too, and find it most undesirable.  I don't know
> whether that is part of the SQL spec or not, but it seems very odd.  The
> queries seem to be converted to lowercase before they ever reach the SQL
> engine.
>
> Brian
> +
> Flash web sites from $149/year w/ hosting
> http://www.eflashcash.com/er/websuccess/
> +
>
>
> > Maybe this isn't technically a bug, but I found it strange.  With
> PostgreSQL
> > 7.0.3 and 7.1beta6, I can create a database "FOO", but can only later
> connect to
> > it as "foo".  I'm not sure if this is a psql thing or a backend thing,
or
> if
> > it's intended to be this way, but it makes some of my scripts more
> complicated.
> >
> > --
> > Shaw Terwilliger <[EMAIL PROTECTED]>
> > SourceGear Corporation
> > 217.356.0105 x 641
> >
> > ---(end of broadcast)---
> > TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
>
>
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
>
> http://www.postgresql.org/users-lounge/docs/faq.html
>


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[GENERAL] Re: Memory Tuning

2001-03-30 Thread Mitch Vincent

If you could post the schema of your tables that you do the query against
and an EXPLAIN of the queries you're doing, perhaps we could further tune
your queries in addition to beefing up the memory usage of the backend..

Check this link out too.
http://postgresql.readysetnet.com/devel-corner/docs/user/performance-tips.html


-Mitch
Software development :
You can have it cheap, fast or working. Choose two.

- Original Message -
From: "Bruno Wolff III" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, March 30, 2001 9:45 AM
Subject: Memory Tuning


> I am looking for information on tuning memory usage for Postgres on Linux
> (2.2 kernel). In particular I have a lot of memory relative to the size
> of my database and am looking to reduce latency in queries.
>
> Searching goegle turned up a few other cases of people asking about memory
> tuning, but I didn't see any answers.
>
> I have tried increasing the memory allowed for in memory sorts and the
> estimate of available buffer caching. I haven't tried raising the buffer
> space per connection as the documenation didn't seem to indicate
> that that would help. This seems to have helped a little, but it is
> hard to tell since the current delay is about 1 second and the queries
> seem to happen faster when they are repeated after a short amount of
> time.
>
> For some idea of the specific problem, I am setting up a replacement
> web server for my hobby server. The new box has a 1GHz Tbird with 500MB
> of memory and a 20G 7200rpm IDE disk. The database is effectively static,
> with rows for 300 games, 4000 people and 11000 people/game ratings and
> a couple of other miscelaneous tables with a handful of rows each.
>
> I am mostly interested in reducing the latency of requests while
maintaining
> the flexibility of using the database. Eventually there will be online
data
> entry so I don't want to switch to writing static files for the most of
the
> different possible reports.
>
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
>
> http://www.postgresql.org/search.mpl
>


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[GENERAL] Re: stability problems

2001-03-30 Thread Mitch Vincent

Upgrade to 7.1 as soon as it's out (shouldn't be long now)..

I successfully migrated a 6.4 database to a 7.1 database while 7.1 was in
the early beta stages; there were a few minor problems, but the entire process
only took half an hour to complete..

Good luck!

-Mitch
Software development :
You can have it cheap, fast or working. Choose two.

- Original Message -
From: "Lars Maschke" <[EMAIL PROTECTED]>
To: <>
Sent: Wednesday, March 28, 2001 5:42 AM
Subject: stability problems


> Hello Newsgroup
>
> I have stability problems with my Postgres 6.4 database. I am using
PHP4.0.4
> and get access to the db. Sometimes when about 10-20 clients are using the
> website the postgres deamon shuts down and the message "no backend cache"
> appears. The processor ist at 60%-80% and about 10-15 postgres processes
are
> on my linux machine. So what can i do ?
>
> My postgres-command is: /usr/lib/pgsql/bin/postmaster -B
> 2048 -i -o -F -D/var/lib/pgsql
>
> Thanks
> Lars
>
>
>
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
>
> http://www.postgresql.org/search.mpl
>


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[GENERAL] Re: Re: Convert to upper

2001-03-02 Thread Mitch Vincent

Oh, I misunderstood.. I read his post as saying he wanted anything inserted
to be converted to upper case.. If you're just looking to throw an error if
it's not upper case, the check constraint is the way to go..

My apologies, I'll try and read more carefully..
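
One nit, though: a check like (name ~ '[A-Z]') only insists on at least one
upper case character somewhere in the value. If the whole thing has to be
upper case, something like this is tighter:

CREATE TABLE uppercase (
    name  char(15) CHECK (name = upper(name))
);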

-Mitch

- Original Message -
From: "Rod Taylor" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, March 02, 2001 11:05 AM
Subject: Re: Re: Convert to upper


> check
>
> create table uppercase (
>   name char(15) check (name ~ '[A-Z]')
> );
>
> --
> Rod Taylor
>
> There are always four sides to every story: your side, their side, the
> truth, and what really happened.
> - Original Message -
> From: "Mitch Vincent" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Friday, March 02, 2001 10:52 AM
> Subject: [GENERAL] Re: Convert to upper
>
>
> > > is there any way to limit values to upper case strings?
> > > Somthing like:
> > >
> > > name char(15) DEFAULT (upper(name))
> > >
> > > or must I use triggers?
> >
> > I'd say a trigger is your best bet.
> >
> > -Mitch
> >
> >
> > ---(end of
> broadcast)---
> > TIP 2: you can get off all lists at once with the unregister command
> > (send "unregister YourEmailAddressHere" to
> [EMAIL PROTECTED])
> >
>
>
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
>


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[GENERAL] Re: Newbie question :-)

2001-02-26 Thread Mitch Vincent

Oh, and the instructions in the INSTALL file (under the directory where the
source was un-tar'd) are very easy to follow and walk you through the
installation as much as possible, IMHO..

The user/admin/programmer manuals and the tutorial @ www.postgresql.org are
all good too, they detail just about anything you'd want to do with PG..

-Mitch

- Original Message -
From: "Bela Lantos" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, February 24, 2001 5:22 AM
Subject: Newbie question :-)


> Hi,
>
> I just started learning Database Design, our programs have to work on
> Oracle, but at home I run Linux.
>
> Can any of you tell me how portable are the programs written for
Progresql?
> Would they run on Oracle without problem?
>
> Is there an easy install and configuration guide anywhere on the net? I am
> a bit lost in the documentation that comes with the packages.
>
> Regards,  Bela
>
>




[GENERAL] Re: Newbie question :-)

2001-02-26 Thread Mitch Vincent

If you're talking about some kind of C application, then it's portable only
in the sense that you'll have to rip out the PG API and replace it with the
Oracle API -- I'm sure that's not such an easy task..

If you're speaking of an application written in say, PHP, then that's a
horse of a different color. I wrote a database class for PHP that gives me a
layer between my applications and the database, it allows me to change the
code in my database class for another database as opposed to going through
all the code that uses PG and changing it all to use another DB.. I would
strongly suggest you do something like that if you're needing to swap out
backends to applications fairly quickly... Note you could do the same thing
in C -- build a generic database library that is..

If you're talking about query/table layout portability then that's again
something different.. I'm not sure what all the differences are, but I know
that not everything in PG is the same as it is in Oracle. There are things
that PG does that Oracle doesn't and vice versa. For example, sequences are
used a bit differently (Oracle's seq.NEXTVAL versus PG's nextval('seq')), and
a lot of the functions are named differently and take different arguments..

-Mitch

- Original Message -
From: "Bela Lantos" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, February 24, 2001 5:22 AM
Subject: Newbie question :-)


> Hi,
>
> I just started learning Database Design, our programs have to work on
> Oracle, but at home I run Linux.
>
> Can any of you tell me how portable are the programs written for
Progresql?
> Would they run on Oracle without problem?
>
> Is there an easy install and configuration guide anywhere on the net? I am
> a bit lost in the documentation that comes with the packages.
>
> Regards,  Bela
>
>




[GENERAL] Fw: PHP and pg_connect()

2001-02-19 Thread Mitch Vincent

Just an FYI.


> FWIW, I emailed the php maintainer of the postgres piece, and he replied
> with the following (I installed the patch, recompiled, and everything has
> been running fine for about a week or so)
>
> Michael Fork - CCNA - MCP - A+
> Network Support - Toledo Internet Access - Toledo Ohio
>
> -- Forwarded message --
>
> On Wed, Feb 14, 2001 at 09:59:24AM -0500, Michael Fork wrote:
> > I am subscribed the the PostgreSQL mailing list and noticed that you
were
> > a PHP developer, and I was wondering if you could give me a hand.  I
have
> > a bunch of scripts that randomly give errors "Warning:  is
> > not a valid PostgreSQL link resource in ".  I have noticed this
> > since moving to PHP4, and it appears to only be random (it just started
> > happening on a website that previously worked for months without
> > problems -- only change to system was increasing kern.maxfiles to 8192).
>
> i've commited a fix for this to PHP 4 CVS yesterday.
>
> if you don't want to live on the "bleeding edge" (use PHP
> from CVS) just replace the php_pgsql_set_default_link
> function in pgsql.c against this one and you're all-set!
>
> regards,
> tc
>
> static void php_pgsql_set_default_link(int id)
> {
> PGLS_FETCH();
>
> if ((PGG(default_link) != -1) && (PGG(default_link) != id)) {
> zend_list_delete(PGG(default_link));
> }
>
> if (PGG(default_link) != id) {
> PGG(default_link) = id;
> zend_list_addref(id);
> }
> }
>
> >
> > Any information is greatly appreciated (I have been banging my head
> > against the wall :)  If you need more info, just let me know!
> >
> > Thanks
> >
> > Michael Fork - CCNA - MCP - A+
> > Network Support - Toledo Internet Access - Toledo Ohio
> >
> >
> > Here is system info:
> > 4.2-STABLE FreeBSD
> > PostgreSQL 7.0.3
> > mod_php-4.0.3pl1
> >
> > here is a link to a page calling phpinfo():
> > http://postgres.toledolink.com/~mfork/info.php





[GENERAL] Re: DBD::Pg problem?

2001-02-17 Thread Mitch Vincent

In pre-7.0 versions there was an 8k query size limit, and in all pre-7.1
versions a limit on the size of a row, and so of any one field (BLCKSZ, from
8k by default up to 32k at compile time)... I'd say the limit in DBD::Pg is
in there to keep you from hitting one or the other.

Of course 8140 isn't quite 8k, but that's most likely just 8192 minus a bit
of per-page overhead.. *shrug*


-Mitch


- Original Message -
From: "John Madden" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, February 17, 2001 12:51 PM
Subject: DBD::Pg problem?


> The DBD::Pg perl module for some reason has a limit of 8140 bytes when
> inserting, and I don't see why (yeah, I'll be contacting the authors there
> too), but I thought I'd ask here... I'm writing a Postgres-backed mail
> client, and storing emails in the database is obviously a little difficult
> when you can't insert more than 8k.
>
> Does anyone know why this limit exists, or if there's a way around it?
>
> John
>
>
>
>
>
> --
> # John Madden  [EMAIL PROTECTED] ICQ: 2EB9EA
> # FreeLists, Free mailing lists for all: http://www.freelists.org
> # UNIX Systems Engineer, Ivy Tech State College: http://www.ivy.tec.in.us
> # Linux, Apache, Perl and C: All the best things in life are free!
>




[GENERAL] Re: order of clauses

2001-02-16 Thread Mitch Vincent

Are you referring to short circuit? That's a language feature, isn't it? I
didn't think it had anything to do with the compiler (I know C and a few
other languages do it). Anyway, I could be wrong.. Seems that could break a
lot of code if the programmer relies on short circuit in some conditional
statements.

if ( whatever() OR something() ) {

blah();

}

-- if "whatever" evaluates to true, then "something" isn't executed (the
whole statement is true if one is true)...

This really only comes into play when you're comparing the values returned
by something (a method, function, etc), if you're just looking at boolean
variable I guess it doesn't matter.
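
Back on the SQL side, a CASE expression is one way to pin down the evaluation
order, since CASE only evaluates the result expression whose WHEN condition is
true -- using the x/y example from the quoted post (table name made up):

SELECT * FROM t
WHERE CASE WHEN y > 0 THEN x / y > 1 ELSE false END;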

-Mitch

- Original Message -
From: "Steve Wolfe" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, February 16, 2001 3:10 PM
Subject: Re: order of clauses


> > > will give a divide by zero error as A=(y>0) and B=(x/y>1) can be
> evaluated in
> > > any order (A and B = B and A). I obviously would like (y>0) to happen
> first,
> > > but I don't see how this can be achieved.. Any ideas?
>
>   I have one idea that would be nifty to implement.  In some compilers,
you
> can turn off complete boolean checking.  As soon as any part of an
> expression will invalidate the expression, it stops evaluating all of it.
> That can help you avoid division by zero, and keeps you from evaluating
> parts of the expression that don't matter.  It sounds like a good idea, at
> least to an ignoramus like me. : )
>
> steve
>
>
>




[GENERAL] Re: Case insensitive selects?

2001-02-16 Thread Mitch Vincent

> Hmmm...I'd hate to have two indexes on every field I query like this, one
> case-senstive, one case-insensitve (like the one you create here). Is
> there a configuration option or something that will tell pgsql to do
> case-insensitive comparisons (kinda like MS SQL Server has)? That could
> save us on indexing overhead, since we want all of our WHERE comparisons
> to be case-insensitive, anyway.

If you want all of them to be case insensitive then make the upper() (or
lower()) index and don't make any case sensitive queries! :-)

Make sure all your queries use upper() or lower() around the field and value
you're comparing and you're golden.. Unless I've misunderstood you, I don't
see the problem..

SELECT * FROM whatever WHERE lower(myfield) = lower('myvalue'); -- and make
your index on lower(myfield)... Voila!

-Mitch




[GENERAL] Re: Postgres slowdown on large table joins

2001-02-16 Thread Mitch Vincent

Can you EXPLAIN that query and send us the results (the query plan)? That
should tell a whole lot.

-Mitch

- Original Message -
From: "Dave Edmondson" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, February 16, 2001 1:32 PM
Subject: Postgres slowdown on large table joins


> I'm having a problem here. I'm using Postgres 7.0.3 on a FreeBSD
4.2-RELEASE
> machine... it's a Pentium II/450 w/ 128MB of RAM (not nearly enough, but
> there'll be an upgrade soon). Anyway, I have a data table, which currently
> has around 146,000 entries, though it will grow to a few million
eventually.
> There is also config and prefs tables, which have 4-5 rows each. When I
> execute the following command:
>
> SELECT c.unit_id,c.name,c.auxenable,c.bias,c.feedback,c.gain,c.igain,
> c.mode,c.reverse,c.setpoint,c.switch1,c.switch2,c.timeout,
> c.valvetype,d.active,d.drive_1_s,d.drive_1_f,d.drive_2_s,
> d.drive_2_f,d.mval,d.m4val,d.sw1,d.sw2,d.cycle,d.itemp,
> d.error,d.aval,d.ts,c.ts,p.degree,c.outputa,c.outputb,
> c.outputc,c.rawtemp
> FROM config c, data d, prefs p
> WHERE c.conf_id = '4'
> AND d.conf_id = c.conf_id
> AND p.conf_id = c.conf_id
> ORDER BY d.ts DESC
> LIMIT 1
>
> ...it takes an astounding 50 seconds to complete, CPU usage goes to about
> 85% Now, a simple...
>
> SELECT *
> FROM data
> ORDER BY ts desc
> LIMIT 1
>
> ...takes about 16-26 seconds - still slw, but not as bad as with the
> table join. What's really causing the slowdown? ...should I just execute
> the command differently? I'm trying to get the latest data in all three
> tables.
>
> Once the server has 768MB+ of RAM, is it possible to load the entire table
> into memory? should speed things up considerably.
>
> Thanks,
>
> --
> David Edmondson <[EMAIL PROTECTED]>
> GMU/FA d-(--) s+: a18>? C$ UB$ P+>+ L- E--- W++ N- o K-> w--
O?
> M-(--) V? PS+ PE+ Y? PGP t 5 X R+ tv-->! b DI+++ D+ G(--) e>* h!>+ r++
y+>++
> ICQ: 79043921 AIM: AbsintheXL   #music,#hellven on
irc.esper.net
>




[GENERAL] Re: PostgreSQL vs Oracle vs DB2 vs MySQL - Which should I use?

2001-02-15 Thread Mitch Vincent

> Yah. What I find is the developers set the general tone/culture of the
list.
> This affects the type of responses/support you get even from the other
list
> subscribers. So it's quite good here where you have kind and helpful
developers.

"Helpful developers" doesn't go nearly far enough..

I've seen (and still do see) commercial support that isn't up to the grade
of support I have gotten from the -general and -hackers lists. I can ask any
question and I *always* get a response within minutes from one of the core
developers.. I have yet to have a question go un-answered and I've been on
the list for a pretty long time! It just doesn't get much better than that
to me.

Thanks to all the people that have put up with my strange (and sometimes
amusing) questions!

-Mitch





[GENERAL] Re: Re: Order question

2001-02-13 Thread Mitch Vincent

Sure, I was just wondering if there was some PG internal value I could use
(and I should have said that in my email)..

Thanks for the suggestion though, I appreciate it..
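
For the record, the timestamp route would look something like this (leaving
out the trigger that keeps the column current):

ALTER TABLE resultstyle ADD COLUMN updated timestamp;
UPDATE resultstyle SET updated = now();

SELECT ordernum, fieldname FROM resultstyle
WHERE style_id = 1001
ORDER BY ordernum ASC, updated DESC;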

-Mitch

- Original Message -
From: "Mike Castle" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, February 13, 2001 6:18 PM
Subject: Re: Re: Order question


> On Tue, Feb 13, 2001 at 05:58:35PM -0500, Mitch Vincent wrote:
> > I just set the row with j.inv_id to 1, I'd like it to be ordered above
the
> > row with j.jobtitle in it -- is that possible?
>
> make a trigger that updates a time stamp and order by that?
>
> mrc
> --
>Mike Castle   Life is like a clock:  You can work constantly
>   [EMAIL PROTECTED]  and be right all the time, or not work at all
> www.netcom.com/~dalgoda/ and be right at least twice a day.  -- mrc
> We are all of us living in the shadow of Manhattan.  -- Watchmen
>




[GENERAL] Re: Order question

2001-02-13 Thread Mitch Vincent

A further extension of this..

What might I be able to additionally order by so that the most recently
updated rows get ordered above everything else (within the order by
ordernum).. Using the same example :

>  ordernum |   fieldname   |   oid
> --+---+-
> 1 | J.jobtitle| 1197126
> 1 | J.inv_id  | 1197125
> 2 | J.updatedon   | 1197127
> 3 | J.empinitials | 1197128

I just set the row with j.inv_id to 1, I'd like it to be ordered above the
row with j.jobtitle in it -- is that possible?

Thanks again!

-Mitch


- Original Message -----
From: "Mitch Vincent" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, February 13, 2001 5:54 PM
Subject: Order question


> Hey guys, another strange question here..
>
> If I query and order by a field and there are duplicate values in that
> field, what makes one return before the other? Just the first one that PG
> comes to on the disk is displayed first or is something else looked at to
> determine the order?
>
> Example :
>
> hhs=# SELECT ordernum,fieldname,oid from resultstyle WHERE style_id=1001
> order by ordernum asc;
>  ordernum |   fieldname   |   oid
> --+---+-
> 1 | J.jobtitle| 1197126
> 1 | J.inv_id  | 1197125
> 2 | J.updatedon   | 1197127
> 3 | J.empinitials | 1197128
>
>
> What makes the record with j.jobtitle appear above te other, when ordernum
> is the same?
>
> Thanks!
>
> -Mitch
>
>
>




[GENERAL] Order question

2001-02-13 Thread Mitch Vincent

Hey guys, another strange question here..

If I query and order by a field and there are duplicate values in that
field, what makes one return before the other? Just the first one that PG
comes to on the disk is displayed first or is something else looked at to
determine the order?

Example :

hhs=# SELECT ordernum,fieldname,oid from resultstyle WHERE style_id=1001
order by ordernum asc;
 ordernum |   fieldname   |   oid
--+---+-
1 | J.jobtitle| 1197126
1 | J.inv_id  | 1197125
2 | J.updatedon   | 1197127
3 | J.empinitials | 1197128


What makes the record with j.jobtitle appear above the other, when ordernum
is the same?

Thanks!

-Mitch





[GENERAL] Casting help

2001-02-12 Thread Mitch Vincent

hhs=# INSERT INTO applicants_test (SELECT app_id ,old_id ,emp_id ,inv_id
,createdate ,updatedon ,todelete ,appstatus ,apptype ,infosent ,empinitials
,firstname ,lastname ,salutation ,fontype1 ,fonnumber1 ,fonext1 ,fontype2
,fonnumber2 ,fonext2 ,fontype3 ,fonnumber3 ,fonext3 ,address1 ,address2
,city ,state ,postal ,country ,homestate ,email ,careerdate ,degree1
,degreedate1 ,gpa1 ,university1 ,degree2 ,degreedate2 ,gpa2 ,university2
,startdate ,currentsalary ,degreecode ,industrycode ,title ,company
,companydiv ,preemploy ,minority as processtype ,inunion ,prodnotes ,sk1
,sk2 ,sk3 ,sk4 ,sic1 ,sic2 ,sic3 ,certificates ,clearance ,languages
,desiredsalary ,minimumsalary ,whyseeking ,followdate ,personalnotes as
technotes ,technotes as personalnotes ,relonote ,follownote ,profile
,relopref1 ,relopref2 ,relo_states ,source ,upinitials  FROM applicants);
ERROR:  Attribute 'processtype' is of type 'varchar' but expression is of
type 'bool'
You will need to rewrite or cast the expression
hhs=#

And

if I try to cast minority to varchar (or anything else for that matter) I
get

ERROR:  Cannot cast type 'bool' to 'varchar'

Is there any way to get the above to work? I have a client down and need
this (or something like this) to straighten out his data tables... Thanks!

-Mitch




[GENERAL] Re: PostreSQL SQL for MySQL SQL

2001-02-06 Thread Mitch Vincent

> I'm fairly new to PostreSQL, coming from MySQL. My Python application
> generates these MySQL commands:
>
> drop database if exists Foo;
> create database Foo;
> use Foo;
> Using the PostgreSQL online docs, my closest translation is:
>
> drop database Foo;
> create database Foo;

You supply the database name on connect.. I don't *think* you can switch
databases over an existing connection (psql's \connect just opens a new one),
but I could be very wrong (if I am, I'd sure like to know!)

> What I'm missing is:
> * "if exists" for the drop, in order to avoid an error the very first time
> this program runs
> * the "use Foo;"
>
> Can I do these things in PostreSQL from SQL (e.g., not from the command
line)?

Not sure on these...

-Mitch




[GENERAL] Re: full text searching

2001-02-06 Thread Mitch Vincent

> Well you wouldn't want to start  building these kind of rules in your
> application - better to have them in the search engine. The fulltextindex
> in the contrib package does of course not offer anything like this, it
> would be nice to see a third party addon provide fulltext capabilities for
> Postgresql.

Well, the search engine isn't the database, IMHO. The search "engine" is
your application... The database will go get anything you tell it to, you
just have to know how to tell it and make sure that your application tells
it in the correct way.

Teaching an application or database the English language is going to be a
hell of a project, good luck!

 Personally, I think the FTI trigger and function that's in contrib is
pretty bad. It's not usable in a lot of situations. I re-wrote it to remove
duplicates and index whole words, but it still didn't work out for me...
Namely when you have fairly large chunks of text (30k or so), one in each row
(and you have 10,000 rows).. Well, ripping out and indexing 30k x 10k of text
chunks is a lot by itself, but then when you search it you have to join the
two tables... It becomes a mess and was actually slower than when I used LIKE
to search the big text fields in my single table. It only takes a few seconds
for the seq scan, and for the index scan on the FTI table, but with FTI
updating became a 30 second job (of course there were something like 4
million rows and each app did have 30k of text or so).. I don't have too
many small databases, so maybe this works for a lot of people :-)

Anyway. Moral of the story.. I'd like to see native PostgreSQL full text
indexing before we go adding on to the contrib'd trigger/function
implementation...

-Mitch




[GENERAL] Re: Foreign Keys

2001-02-05 Thread Mitch Vincent

In addition to being too purple you might want to note your database is too
old! Upgrade to 7.0.3 - it has foreign key support..
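
If you're curious what that looks like in 7.0.x (made-up tables):

CREATE TABLE departments (
    dept_id  integer PRIMARY KEY
);

CREATE TABLE employees (
    emp_id   integer PRIMARY KEY,
    dept_id  integer REFERENCES departments (dept_id)
);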

-Mitch

- Original Message -
From: "Adam Haberlach" <[EMAIL PROTECTED]>
To: "" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, February 06, 2001 12:50 AM
Subject: Re: Foreign Keys


> On Thu, Feb 01, 2001 at 07:59:16PM -0500,  wrote:
> > I have PostgreSQL 6.5, and I can't get foreign keys to work! What seems
to
> > be the problem?
>
> Your database is too purple.
>
> --
> Adam Haberlach|A cat spends her life conflicted between a
> [EMAIL PROTECTED]   |deep, passionate, and profound desire for
> http://www.newsnipple.com |fish and an equally deep, passionate, and
> '88 EX500 |profound desire to avoid getting wet.
>




[GENERAL] Re: Re: Disk acces

2001-02-05 Thread Mitch Vincent


Super sweet... That is excellent.

-Mitch



On Thu, 1 Feb 2001, Bruce Momjian wrote:

> [ Charset ISO-8859-1 unsupported, converting... ]
> > Hi Doug, your comments caught my eye and I thought I'd ask you something..
> > Are you speaking of using persistant connections with PHP? I'm not sure what
> > Enhydra is, so forgive me if this is totally off base..
> > 
> > We do a lot of development in PHP and only use PostgreSQL, I've tried and
> > tried to get persistant connections to work but every time I use them, very
> > strange things start to happen (variables disappearing and getting corruted
> > to name one).. It was said a while back on the list that the most probable
> > reason for that is that PHP isn't thread safe and that was probably the
> > cause.. Another strange, though not surprising thing that happens is that if
> > you begin a transaction in a PHP script and the script is terminated before
> > you comit or rollback (be that from the user clicing stop or a script/server
> > error), the transaction is left open for the next time that backend is
> > used -- it's caused a lot of problems.. I say it's not surprising because
> > that's exactly what I'd expect to happen, there just doesn't seem to be much
> > of a way to prevent it in some cases.. I've tried the ignore_user_abort but
> > it still happened quite a lot. (I'm spoiled with Ruby and ensure :-) )..
> > 
> > Have you had those kinds of problems and if so how did you overcome them?
> > Does Enhydra manage your database/web server pool?
> 
> I just talked to PHP/Rasmus yesterday, and I will try to get this fixed
> in the PHP PostgreSQL code soon.  It involves adding the ability to
> ROLLBACK before handing the connection to a new user.
> 
> -- 
>   Bruce Momjian|  http://candle.pha.pa.us
>   [EMAIL PROTECTED]   |  (610) 853-3000
>   +  If your life is a hard drive, |  830 Blythe Avenue
>   +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026
> 




[GENERAL] Re: full text searching

2001-02-05 Thread Mitch Vincent

Another thing..

Full text indexing, last time I checked, was just a contrib trigger/function;
you don't have to rebuild anything that I'm aware of to include it..
-Mitch

> Hi,
>
> OK full text searching.  Will the full text index
> catch changes in verb tense?  i.e. will a search for
> woman catch women?
>
> I'm researching before I dive in to this later in the
> week so please excuse this incompletely informed
> question:  Will I need to rebuild postgresql with the
> full-text index module included?  Unfortunately I'm
> away from my linux machine-- would someone be willing
> to email me the README?
>
> Thanks in advance,
>
> Culley
>
> __
> Get personalized email addresses from Yahoo! Mail - only $35
> a year!  http://personal.mail.yahoo.com/
>




[GENERAL] Re: full text searching

2001-02-05 Thread Mitch Vincent

> Hi,
>
> OK full text searching.  Will the full text index
> catch changes in verb tense?  i.e. will a search for
> woman catch women?
>
> I'm researching before I dive in to this later in the
> week so please excuse this incompletely informed
> question:  Will I need to rebuild postgresql with the
> full-text index module included?  Unfortunately I'm
> away from my linux machine-- would someone be willing
> to email me the README?

Regardless of indexing, you're still searching for a specific string (if you
search using the = operator).

SELECT * from people WHERE whatever = 'woman';

-- Isn't going to catch anything but the literal string "woman".. (it's case
sensitive too, mind you)

SELECT * from people WHERE whatever LIKE 'wom%n';

-- Should match either.

A regex search is going to get more specific but when using the regex
search, you can't use indexes.

Anyone, please correct me if I'm wrong.

-Mitch





[GENERAL] Re: pg_dump shell or php?

2001-02-02 Thread Mitch Vincent

This is simple, but it's what I use to do my daily dumps (run from
crontab) -- I took out some stuff specific to my application..Replace
database with the name of the database..


#!/bin/sh

pgpath=/usr/local/pgsql/bin
homepath=/home/postgres
backup=/usr/local/pgsql/backup
today=`date "+%Y%m%d-%H%M%S"`

$pgpath/pg_dump database > $backup/database${today}dump
/bin/gzip $backup/database${today}dump



- Original Message -
From: "Matt Friedman" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, February 02, 2001 7:53 PM
Subject: pg_dump shell or php?


Any one written a shell or php script that runs pg_dump on a db say everyday
or something?

Any suggestions?

thanks,

Matt Friedman






Re: [GENERAL] varchar => int

2001-01-28 Thread Mitch Vincent

Yes, Peter's suggestion does work and I was using the substring already..
This guy has some that have Xs in them too (lord knows why), so it's either
function writing time as you suggest or time to beat a client :-)

Thanks for the suggestions guys, I appreciate it..

-Mitch


- Original Message -
From: "Stephan Szabo" <[EMAIL PROTECTED]>
To: "Mitch Vincent" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Sunday, January 28, 2001 1:39 PM
Subject: Re: [GENERAL] varchar => int


>
> Well, you can get there via text (as Peter said) except that I think
> that if any don't convert (ie n-) it's going to error since
> at least on current source it appears to make sure that the entire
> string is numeric.
>
> I guess if you knew that the first five were *always* going to be numeric
> you could use substring.  I think your best bet is to write your
> own function though.
>
> On Sat, 27 Jan 2001, Mitch Vincent wrote:
>
> > Well, here I am trying to do a range search on a varchar field (though
it
> > has numbers in it...)..
> >
> > I can't cast varchar to int (according to the error I'm getting..), any
> > ideas on how I can accomplish a ranged search on a varchar field (taking
> > into acount the value of the numbers, not the character's ascii code)?
> >
> > I know. It's a messed up thing to do.. I didn't design these tables, I
think
> > it's like that so the zip codes (what's stored in this table) could
contain
> > dashes... Now they want to search it with a range *sigh*..
> >
> > Thanks for any ideas..
> >
> > -Mitch
> >
>
>




[GENERAL] Re: postgres limitation

2001-01-26 Thread Mitch Vincent

> > the max size of a row...
>
> 8k in pre v7.1, no limit in v7.1 and later ...

32k really... BLCKSZ can be changed.. I've had no trouble running a
production database with BLCKSZ set to 32k though there might be issues I'm
not aware of..

-Mitch




[GENERAL] Re: Re: Load a database into memory

2001-01-25 Thread Mitch Vincent

> My /etc/rc.d/init.d/postgresql script has this line:
>  su -l postgres -c "/usr/bin/pg_ctl -D $PGDATA -p /usr/bin/postmaster -o
> '-B 4096 -i' start >/dev/null 2>&1"

Isn't that BLCKSZ x 4096? I'm almost sure it is -- 4096 buffers of BLCKSZ
bytes each, so 32MB with the default 8k blocks..

Some people (me) have BLCKSZ set to 32k, so the same -B setting means four
times that much shared memory... Just an FYI.

-Mitch





[GENERAL] Re: 7.1 expected features list?

2001-01-24 Thread Mitch Vincent

There is some information here :

http://www.postgresql.org/devel-corner/docs/postgres/release.htm


- Original Message -
From: "David Wall" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, January 24, 2001 1:07 PM
Subject: 7.1 expected features list?


> Is there a place where I can see what the new features for 7.1 are
expected
> to be?  I'm hopeful for native BLOB support 
>
> David
>
>
>




[GENERAL] Re: plpgsql - cont'd

2001-01-23 Thread Mitch Vincent

Ok, after some more playing, this works.

CREATE FUNCTION invoice_payment() RETURNS OPAQUE AS '
BEGIN

   UPDATE invoice_master SET total = total - NEW.amount,updated = now(),
is_paid=(CASE WHEN total::numeric = NEW.amount::numeric THEN TRUE::bool ELSE
FALSE::bool END) WHERE invoice_id = NEW.invoice_id;

   RETURN NEW;

END;
' LANGUAGE 'plpgsql';
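For reference, the trigger that fires it looks something like this (I have
mine as an AFTER INSERT; the trigger name is whatever you like):

CREATE TRIGGER invoice_payment_trigger
    AFTER INSERT ON invoice_payments
    FOR EACH ROW EXECUTE PROCEDURE invoice_payment();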


Thanks again for answering my stupid little questions, Tom :-)

-Mitch

- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Mitch Vincent" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, January 23, 2001 12:01 PM
Subject: Re: plpgsql - cont'd


> "Mitch Vincent" <[EMAIL PROTECTED]> writes:
> > Hmm, this is the third time this has happened.. I am using 7.1 Beta 3,
so I
> > expected some things like this... When deleting and adding functions
back,
> > when I run them I get :
>
> > ERROR:  plpgsql: cache lookup for proc 49237 failed
>
> > -- What's happening there and is there any way to fix this without having
to
> > dump/restore (which is what I've had to do thus far.. ) ?
>
> dump/restore is the hard way.  If you delete and recreate a function,
> the new incarnation has a new OID, so anything that referred to the old
> OID is now broken, and has to be deleted/recreated itself.  Triggers
> are one such reference.  Cached query plans are another, although I
> don't think that's the issue here.  How are you invoking the functions,
> anyway?
>
> regards, tom lane
>




Re: [GENERAL] Another plpgsql question..

2001-01-23 Thread Mitch Vincent

Ok, it appears now I have an error.. Unless I'm going crazy, this started
after I had to do a restore because of one of those cache lookup errors.. I
changed nothing, still, this is what I get..


CREATE FUNCTION invoice_payment() RETURNS OPAQUE AS '
BEGIN

   UPDATE invoice_master SET total = total - NEW.amount,updated = now(),
is_paid=(CASE WHEN (total - NEW.amount) = 0.00 THEN ''t'' ELSE ''f'' END)
WHERE invoice_id = NEW.invoice_id;

   RETURN NEW;

END;
' LANGUAGE 'plpgsql';


Now I get

brw=# INSERT into invoice_payments VALUES
(1,1000,'now',100,'now',100,1,1,150.00);
ERROR:  Unable to identify an operator '=' for types 'numeric' and 'float8'
You will have to retype this query using an explicit cast
DEBUG:  Last error occured while executing PL/pgSQL function invoice_payment
DEBUG:  line 2 at SQL statement
ERROR:  Unable to identify an operator '=' for types 'numeric' and 'float8'
You will have to retype this query using an explicit cast
brw=#

--- amount and total are both numeric(9,2), I've tried casting everything
(total,amount and 0.00) to float and everything to numeric with the same
error popping up.. What needs casting here?

I can determine if an invoice has been paid or not a number of ways, really
what I should do there is NEW.amount >= total -- I tried and got the above
error again..

Thanks!

-Mitch

- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Mitch Vincent" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, January 23, 2001 11:05 AM
Subject: Re: [GENERAL] Another plpgsql question..


> "Mitch Vincent" <[EMAIL PROTECTED]> writes:
> > is_paid is never updated...
>
> It's not possible that is_paid is never updated; that command *will*
> replace the total, updated, and is_paid columns with *something*.
> It may be that in the cases you've checked, it gets updated to the
> same value it had before.  That's why I want to see the test cases.
>
> regards, tom lane
>




[GENERAL] Re: PL/pgSQL Question

2001-01-22 Thread Mitch Vincent

Er, I'm pretty sure I found what I was looking for, sorry to waste
everyone's time.. I looked right past half the documentation!

-Mitch

- Original Message -----
From: "Mitch Vincent" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, January 22, 2001 6:56 PM
Subject: PL/pgSQL Question


> I haven't used PL/pgSQL very much but it looks like a good language in
which
> to make some simple functions for this application I'm writing..
>
> Is it possible (with PL/pgSQL) to access other records in other tables
than
> the tuple that pulled the trigger (and called the function)?
>
> Say this (pseudo code):
>
> When a record is added to the invoice_payments table have a trigger fire
and
> call this function :
>
> total_invoice()
>
> exec_sql -> UPDATE invoices SET total = total + <amount from the tuple>
> WHERE invoice_id = <invoice_id from the tuple>
>
>
>
> That's basically all I'd have to do.. There are a lot of these things that
> could be handled by the backend and would make my life much easier..
>
> I looked in the documentation but the examples only talk about rearranging
> values within the tuple that fired the trigger which called the function
> (wow, a mouthful).
>
> Thanks!
>
> -Mitch
>
>
>




[GENERAL] PL/pgSQL Question

2001-01-22 Thread Mitch Vincent

I haven't used PL/pgSQL very much but it looks like a good language in which
to make some simple functions for this application I'm writing..

Is it possible (with PL/pgSQL) to access other records in other tables than
the tuple that pulled the trigger (and called the function)?

Say this (pseudo code):

When a record is added to the invoice_payments table have a trigger fire and
call this function :

total_invoice()

exec_sql -> UPDATE invoices SET total = total + <amount from the tuple>
WHERE invoice_id = <invoice_id from the tuple>



That's basically all I'd have to do.. There are a lot of these things that
could be handled by the backend and would make my life much easier..

I looked in the documentation but the examples only talk about rearranging
values within the tuple that fired the trigger which called the function
(wow, a mouthful).

Thanks!

-Mitch





Re: [GENERAL] Re: C function returning rows, was Boolean text, with phrase ranking, search under Postgres

2000-10-17 Thread Mitch Vincent

> Edmar Wiggers wrote:
> >
> > I too am interested in full text indexing under PostgreSQL.
>
> As I have described it, does it sound like something you would use? It
> is designed more like a search engine. It will do a full text / boolean
> search with phrase ranking in about 10~40 ms depending on the number of
> terms, memory and CPU speed.

Does this actually exist or is it still in the design phase of development?

-Mitch





Re: R: [GENERAL] PostgreSQL book

2000-10-11 Thread Mitch Vincent

No matter where you choose to buy it, do buy it and support Bruce,
PostgreSQL and the publisher for allowing it to be electronically
published!!

-Mitch

- Original Message -
From: <[EMAIL PROTECTED]>
To: "Steve Wolfe" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, October 11, 2000 2:28 PM
Subject: Re: R: [GENERAL] PostgreSQL book


>
> ..not to mention that it is now Amazon's official policy
> to use/share personal information they now/will have on
> you, arguably, any way they wish:
>
> http://www.amazon.com/privacy-notice
>
> Morey Parang
> Oak Ridge National Lab





[GENERAL] Curious age() behavior Cont'd

2000-10-10 Thread Mitch Vincent

I should point out that it works as I expected it to on other values..

hhs=# SELECT age('Sun Nov 05 08:00:00 2000 EST','Tue Oct 10 08:00:00 2000
EDT') as esec;
   esec
--
 @ 26 days 1 hour
(1 row)

hhs=# SELECT ('Tue Oct 10 08:00:00 2000 EDT'::timestamp + '1 mon 24 days 1
hour'::interval);
   ?column?
--
 Mon Dec 04 08:00:00 2000 EST
(1 row)

*shrug*

Thanks again..

-Mitch






[GENERAL] Curious age() behavior

2000-10-10 Thread Mitch Vincent

Maybe I'm just overlooking something really simple but this has me a bit
confused.

What I'm trying to do is get the amount of time from A to B -- I thought
age() would do just that but it seems to be about a day off sometimes.

hhs=# SELECT age('Sun Dec 03 08:00:00 2000 EST','Tue Oct 10 08:00:00 2000
EDT') as esec;
  esec

 @ 1 mon 24 days 1 hour
(1 row)

Ok, but if I turn right around and add that value back , I get :


hhs=# SELECT ('Tue Oct 10 08:00:00 2000 EDT'::timestamp + '1 mon 24 days 1
hour'::interval);
   ?column?
--
 Mon Dec 04 08:00:00 2000 EST
(1 row)

Like I said, perhaps I'm blind and can't see what's happening here but
shouldn't that be Sunday the 3rd of December?
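(For what it's worth, straight subtraction gives an interval with no month
component, so it ought to round-trip cleanly:

SELECT ('Sun Dec 03 08:00:00 2000 EST'::timestamp
      - 'Tue Oct 10 08:00:00 2000 EDT'::timestamp) AS diff;
-- should come back as something like '@ 54 days 1 hour', and adding that
-- back to the Oct 10 timestamp lands on Dec 3 again)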

Thanks!

-Mitch





Re: [HACKERS] My new job

2000-10-10 Thread Mitch Vincent

What is the main concern? That Great Bridge or PostgreSQL Inc will try to
influence development?  This is just my lowly opinion but it seems to me
that this could be a storm brewing in a tea cup, it just doesn't seem to be
that threatening a situation at a glance.

Congrats to everyone on their new positions. *hats off*

-Mitch

- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Peter Eisentraut" <[EMAIL PROTECTED]>
Cc: "PostgreSQL-general" <[EMAIL PROTECTED]>;
"PostgreSQL-development" <[EMAIL PROTECTED]>
Sent: Tuesday, October 10, 2000 10:02 AM
Subject: Re: [HACKERS] My new job


> Peter Eisentraut <[EMAIL PROTECTED]> writes:
> > Bruce Momjian writes:
> >> After careful consideration, I have decided to accept a job with Great
> >> Bridge.
>
> > Whatever happened to this:
>
> > From: Tom Lane <[EMAIL PROTECTED]>
> > : One thing we have agreed to is that there must not be an unseemly
fraction
> > : of core members working for the same company.  With six people on
core,
> > : probably about two working at the same company would be a reasonable
> > : limit.
>
> I knew someone was going to bring that up ;-).
>
> There's already been discussion of this point among core.  What we
> now have is three core members employed by Great Bridge and the
> other three either fully or partly employed by PostgreSQL Inc.
> In one sense that's a stable situation, but on the other hand it does
> not agree with our original informal goal of keeping any one company
> to a minority position of the core membership.
>
> None of the core members are interested in giving up their new
> positions.  En masse resignation from the core committee would preserve
> our high moral standards, perhaps, but it wouldn't do the project any
> good that I can see.  So it seems like the choices are to accept the
> status quo, or to appoint some more core committee members to bring
> the numbers back where we said they should be.
>
> While I can think of a number of well-qualified candidates for core
> membership, I don't much like the notion of appointing core members
> just to meet some kind of numerical quota.  Also, suppose we do appoint
> more members, and then some of them accept positions with GB or PgSQL
> Inc; do we repeat the exercise indefinitely?  (This is not an unlikely
> scenario, since the sort of people who'd be asked to join core are
> exactly the sort of people whom both companies would love to hire.)
>
> Bottom line is we're not sure what to do now.  Opinions from the
> floor, anyone?
>
> regards, tom lane
>




Re: [GENERAL] -F option, RAM usage, more...

2000-10-04 Thread Mitch Vincent

Hmm, it seems we all know just enough to be dangerous :-)

I have seen many threads on the "to fsync() or not to fsync()" question, and
overwhelmingly people have come out and said that not calling fsync() is A Bad
Thing(TM) -- but if Neil is right, then how bad it really is will depend very
much on the filesystem (I think)...

Now I'm pretty confused (as I'm sure others are) -- can someone that knows
beyond a reasonable doubt beat us with a clue stick on this?  Are we taking
a huge risk if we use -F and disable fsync() or no?

-Mitch

- Original Message -
From: "Neil Conway" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, October 04, 2000 1:24 PM
Subject: Re: [GENERAL] -F option, RAM usage, more...
On Wed, Oct 04, 2000 at 02:09:47PM -0400, Mike Biamonte wrote:
> I understand from the docs that -F ".. prevents fsync()'s from
> flushing to disk after every transaction.." and that this boosts
> performance because RAM accesses are far faster than disk accesses.  I
> have also seen some impressive stats regarding the degree of this
> performance boost.

Correct me if I'm wrong, but I believe that when you specify '-F', it
allows the filesystem to buffer I/O operations, performing several
operations one after another. This is much faster than with fsync(),
where the disk heads have to be moved frequently. Also, allowing the
I/O subsystem to buffer some data will speed subsequent accesses of it,
until the buffer is flushed.

HTH,

Neil

--
Neil Conway <[EMAIL PROTECTED]>
Get my GnuPG key from: http://klamath.dyndns.org/mykey.asc
Encrypted mail welcomed

Violence is to dictatorship as propaganda is to democracy.
-- Noam Chomsky






Re: [GENERAL] -F option, RAM usage, more...

2000-10-04 Thread Mitch Vincent

> > 1. How often DOES PG flush to disk - if at all - when the -F option is
> > invoked?  Can this be controlled?
>
> Once after each transaction.

That's what it does when -F is *not* used, right?  -F disables calling
fsync() after each transaction, right?..

-Mitch





Re: [GENERAL] Talking with other Dbases.

2000-09-28 Thread Mitch Vincent

> Hi,
>
> I'm pretty new to PgSQL (been lurking) and am wondering if
> it's possible to talk with other DB's easily?  I'm using
> FreeBSD atm and the project I'm working on will be moving
> from dbase to postgres in the next release.  It will also
> have a web based side to it that I plan on using MySQL for
> possibly, since all it's going to do is return records.
> (I have yet to do some testing between postgres and mysql
> to determine if pgsql would be sufficient or mysql would
> work better).

If you're talking about connecting to another RDBMS from within PostgreSQL,
I suppose it would be possible with custom-written functions (since most
RDBMSs have a C API), but how easy it is depends on your level of experience
with C (and the two database APIs)..

 > Is it possible to write data to other databases?  Also
> is it possible to get a secure connection between 2 postgres
> databases, or other databases for that matter?  Either through
> ssh or Kerberos(sp?) etc etc.

You can tunnel virtually anything through SSH, sure.

I know MySQL is fast but just remember, it's just "A filesystem with an SQL
interface" -- that's ALL... :-)

-Mitch




Re: [GENERAL] web programming

2000-09-25 Thread Mitch Vincent

    I too use PHP and PostgreSQL for 90% of my web programming projects (C
for the other 10%)... I find the embedded nature of PHP to aid in
development more than anything (though I suppose you can get that from
mod_perl as well).. I don't do much Perl at all; I found the language to be
sloppy and very messy (I'm a pretty 'neat' programmer I guess), and I liked
PHP's structure and flow.. I shouldn't trash Perl as I don't know it very well
at all -- I know enough to know that I don't want to know any more :-)

I've always found PHP to be as fast as I've ever needed and some of the
applications I've written get many thousands of hits a day on (at best)
mediocre hardware..

    Use what you like best, as most languages (especially PHP, Perl and C)
are very capable web programming languages; it comes down to programmer
preference, I suppose..

Good luck!

-Mitch

- Original Message -
From: "Michelle Murrain" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, September 25, 2000 6:15 PM
Subject: [GENERAL] web programming


> Hi folks,
>
> I thought (now that I've solved my DBD::Pg problem) that I'd ask a
> fairly generic opinion question. I've been working with postgres and
> php now for about 4 years, quite heavily in the last year. I'm sold
> on postgres, and will continue to use it unless I have compelling
> reasons not to. But I'm debating about continuing development using
> php.
>
> 1) I'm getting better and better at perl - and it's a much more
> mature language than php.
>
> 2) I've been learning a bit about jsp and java servlets - and that's
> intriguing me a lot.
>
> So what's your favorite web programming language to use with
> postgres, and why? Are there speed/performance issues I should think
> about? I'm also beginning to get into XML, so there is another set of
> things to think about.
>
> Thanks!
>
> Michelle
> ---
> Michelle Murrain, Ph.D.
> President, Norwottuck Technology Resources
> [EMAIL PROTECTED] www.norwottuck.com
>




Re: [GENERAL] more or less spontaneous reboots with Suse 6.4 / Postgresql 7.0.2

2000-09-25 Thread Mitch Vincent

If there isn't any kernel panic (or other error message) displayed then I'd
bet it's a memory problem (it's almost certainly a hardware problem),
PostgreSQL is just triggering it (probably by using the damaged memory).
I've seen this happen before with other programs... Try switching out the
RAM and running the queries again..

-Mitch

- Original Message -
From: "Nils Zonneveld" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, September 25, 2000 2:23 PM
Subject: [GENERAL] more or less spontaneous reboots with Suse 6.4 /
Postgresql 7.0.2


> Hi all,
>
> I use Suse Linux 6.4 (kernel 2.2.14) and when I'm executing SQL
> statements in batch to postgresql (like: psql db < schema.sql) I have a
> fair chance that Linux reboots. No entries about the mishap are logged
> in the postgres logfile and neither in /var/log/messages. There's no
> telling when it exactly happens but when one of the statements contains
> an error the chance is really high that the system reboots.
>
> I have no idea if Postgres is to blame or the linux kernel. I have to
> admit that I'm still using the generic kernel, but I think it is a bit
> silly that a user process like postmaster could bring the whole server
down.
>
> If anyone has had a similar experience and found a solution I would
> like to know about it.
>
> Thanks in advance,
>
> Nils Zonneveld
> --
> "Misschien is niets geheel waar, en zelfs dat niet"
> Multatuli (Eduard Douwes Dekker) - Idee 1
>




Re: [GENERAL] Starting postmaster at boot

2000-09-14 Thread Mitch Vincent

The reason I didn't have to use the -l is that I have everything PostgreSQL
needs (as far as environment variables) already set, because this machine is
a dedicated PostgreSQL server.

Sorry, should have thought about that before I replied but it seems you have
it running now, that's great.. Good luck!

-Mitch

- Original Message -
From: "Dale Walker" <[EMAIL PROTECTED]>
To: "Adam Lang" <[EMAIL PROTECTED]>
Cc: "PGSQL General" <[EMAIL PROTECTED]>
Sent: Thursday, September 14, 2000 1:33 PM
Subject: Re: [GENERAL] Starting postmaster at boot


> Adam Lang wrote:
> >
> > I didn't directly use your method, but you nonetheless solved my
problem.
> > From the beginning everyone was telling me to put this into my
rc.local:
> > su postgres -c "/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data -i > /usr/local/pgsql/postgres.log 2>&1 &"
> >
> > It was never working.  I noticed you had a tag "-l" in yours for su.  I
> > looked up the reason for it and gave it a try.  So the script:
> > su -l postgres -c "/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data -i > /home/postgres/postgres.log 2>&1 &"
> > does work.
> >
> > I still don't understand the point of the 1's and 2's in the command
though.
>
>
> it's a method for redirecting STDERR --> STDOUT
>
> this is especially usefull in scripts
>
> normally I use [scriptname >/dev/null 2>&1] to send any unwanted output
> to /dev/null
>
> >
> > Why is everyone else's script working without the -l and mine wasn't?
> >
>
>
> '-l' uses the login profile for the su'd user... this is similar to a
> 'simulated login'. Without it, it was only referencing variables from
> your root profile... (i.e. no PGDATA, etc.)
>
>
> --
> Dale Walker  [EMAIL PROTECTED]
> Independent Computer Retailers (ICR)   http://www.icr.com.au
> ICRnethttp://www.icr.net.au
>




Re: [GENERAL] php + postgres7x

2000-08-27 Thread Mitch Vincent

I've been using PHP with PostgreSQL as long as I've been using either. What
is the error message, exactly?

Did you install the libraries in something other than the default
directories? You can configure PHP with --with-pgsql=/path/to/pglibs if you
did..

-Mitch


- Original Message -
From: "Mike Sears" <[EMAIL PROTECTED]>
To: "pgsql-general" <[EMAIL PROTECTED]>
Sent: Saturday, August 26, 2000 12:50 PM
Subject: [GENERAL] php + postgres7x


I've recently installed postgres 7.0.2 and after a few tries everything is
working. Now, though, I'm trying to compile PHP with PostgreSQL support and I
can't get past where PHP is looking for postgres, even though I've told it
where it is. Is there something I'm doing wrong?

Mike





Re: [GENERAL] Statistical Analysis

2000-07-24 Thread Mitch Vincent

I don't think it's random (well, I'm sure it's not) but you could use LIMIT
to get a smaller number of results...
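If you're really after a random sample rather than just the first N rows,
something like this might do it -- untested here, and it still has to read
and sort the whole table, so it saves on the aggregation more than on I/O
(the table name is made up):

SELECT * FROM impressions ORDER BY random() LIMIT 1000;
-- grabs 1000 rows in random order; run the analysis over those instead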

*shrug* just an idea.

Good luck!

-Mitch

- Original Message -
From: "Nathan Barnett" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, July 24, 2000 3:20 PM
Subject: [GENERAL] Statistical Analysis


> I am having to perform a large data analysis query fairly frequently and
the
> execution time is not acceptable, so I was looking at doing a statistical
> sample of the data to get fairly accurate results.  Is there a way to
> perform a query on a set number of random rows instead of the whole
dataset?
> I have looked through the documentation for a function that would do this,
> but I have not seen any.  If this is a RTFM type question, then feel free
to
> tell me so and point me in the right direction because I just haven't been
> able to find any info on it.
>
> Thanks ahead of time.
>
>
> ---
> Nathan Barnett
>
>




Re: [GENERAL] Problem with inserting newlines and ' characters

2000-07-24 Thread Mitch Vincent

Well, you will have to escape the single quotes, but you shouldn't have to
escape the newlines. I'm inserting chunks of text that have all sorts of
newline characters without escaping them, and it seems to work fine..
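At the SQL level all it takes is doubling the quote -- a literal newline
inside the string is fine as-is (the values here are made up, using the table
from your message):

INSERT INTO maintenance (sno, date, data)
VALUES ('42', '2000-07-24', 'it''s fine to have a quote in here, and a
newline too');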

-Mitch

- Original Message -
From: "Alan Horn" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, July 24, 2000 10:38 AM
Subject: [GENERAL] Problem with inserting newlines and ' characters


>
> Hi,
>
> This may actually be more of a limitation with the DBD::Pg perl driver.
> I'm not sure.
>
> Basically I'm using DBD::Pg in a perl script which inserts data into a
> very simple table.
>
> The statement I'm using is thus :
>
> my($sth) = $dbh->prepare("insert into maintenance (sno, date, data) values
> ('$sno', '$date', '$tmpstring')");
>
> (all on one line of course...)
>
> My problem is this. $tmpstring may well contain embedded newlines or '
> characters (these are the two characters I've had problems with so far).
>
> I don't want to go down the route of rewriting those characters into
> something 'unique' and decoding whenever I extract information from the
> table. That's always struck me as a bit of a kludge.
>
> So, is it a problem with postgresql, or DBD::Pg, or (most likely) what I'm
> doing with them... ?
>
> Any help much appreciated.
>
> Cheers,
>
> Al
>
>
>
>
>
>
>




Re: [GENERAL] Sort

2000-07-21 Thread Mitch Vincent


- Original Message -
From: "Nathan Barnett" <[EMAIL PROTECTED]>
To: "'Mitch Vincent'" <[EMAIL PROTECTED]>
Sent: Friday, July 21, 2000 3:03 PM
Subject: RE: [GENERAL] Sort


> Here is the query:
>
> SELECT Impression.AdNumber_AdNum,
> Impression.Webmaster_WebmasterNum,
> Impression.Banner_BannerNum, COUNT(Click.ClickNum)
> AS ClickCount
> FROM Impression INNER JOIN
> Click ON
> Impression.ImpressionNum = Click.Impression_ImpressionNum
> GROUP BY Impression.AdNumber_AdNum,
> Impression.Webmaster_WebmasterNum,
> Impression.Banner_BannerNum
>
> -
> Nathan Barnett
>
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
> Behalf Of Mitch Vincent
> Sent: Friday, July 21, 2000 2:48 PM
> To: Nathan Barnett; [EMAIL PROTECTED]
> Subject: Re: [GENERAL] Sort
>
>
> Show the query and maybe someone could help :-)
>
> - Original Message -
> From: "Nathan Barnett" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Friday, July 21, 2000 2:33 PM
> Subject: [GENERAL] Sort
>
>
> > I'm currently using v7.0.2 of PostgreSQL.
> >
> > I have a query that performs a group by on three columns.  The EXPLAIN
of
> > the query is as follows:
> >
> > Aggregate  (cost=4116.05..4125.47 rows=94 width=24)
> >   ->  Group  (cost=4116.05..4123.12 rows=942 width=24)
> > ->  Sort  (cost=4116.05..4116.05 rows=942 width=24)
> >   ->  Nested Loop  (cost=0.00..4069.52 rows=942 width=24)
> > ->  Seq Scan on click  (cost=0.00..15.42 rows=942
> > width=8)
> > ->  Index Scan using impression_pkey on impression
> > (cost=0.00..4.29 rows=1 width=16)
> >
> > I need to speed up this query.  I have already created an index on the
> three
> > columns query_idx(columna, columnb,columnc).  Is there some other index
> that
> > I could add which would speed up the query.  There is a high frequency
in
> > the table of each group.  Each group probably makes up 10% of the table.
> > Does this force a sequence scan when sorting and grouping?  Basically
just
> > looking for suggestions.
> >
> >
> > 
> > Nathan Barnett
> >
> >
>
>
>




[GENERAL] Plan question..

2000-06-27 Thread Mitch Vincent

databasename=# explain select DISTINCT (case when resubmitted > created then
resubmitted else created end),a.app_id, a.appcode, a.firstname,
a.middlename, a.lastname, a.state, a.degree1, a.d1date, a.degree2, a.d2date,
a.salary, a.skill1, a.skill2, a.skill3, a.objective, a.employer, a.sic1,
a.sic2, a.sic3, a.prefs1, a.prefs2, a.sells from applicants as
a,applicants_resumes as ar,resumes_fti as rf where (a.created::date >
'01-06-2000' or a.resubmitted::date > '01-06-2000') and (rf.string
~'^test' ) and ar.app_id=a.app_id and rf.id=ar.oid order by (case when
resubmitted > created then resubmitted else created end) desc limit 10
offset 0;

NOTICE:  QUERY PLAN:

Unique  (cost=3981.58..4396.74 rows=722 width=220)
  ->  Sort  (cost=3981.58..3981.58 rows=7220 width=220)
->  Hash Join  (cost=1751.00..3518.84 rows=7220 width=220)
  ->  Hash Join  (cost=1665.98..2634.96 rows=6132 width=216)
->  Seq Scan on applicants a  (cost=0.00..585.74
rows=6132 width=208)
->  Hash  (cost=1638.38..1638.38 rows=11038 width=8)
  ->  Seq Scan on applicants_resumes ar
(cost=0.00..1638.38 rows=11038 width=8)
  ->  Hash  (cost=4.97..4.97 rows=32024 width=4)
->  Index Scan using resumes_fti_index on resumes_fti rf
(cost=0.00..4.97 rows=32024 width=4)

EXPLAIN

Both the app_id and the OID columns are indexed and I have done a VACUUM
ANALYZE on the applicants_resumes table, yet I still get a seq scan on
them. I'm trying to understand how this works so I can index the right
columns -- could someone please school me? :-)
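If it helps the diagnosis, I suppose I can compare plans by temporarily
turning seq scans off -- assuming I have the SET variable name right -- though
that's a diagnostic, not a fix:

SET enable_seqscan TO off;
-- run the same EXPLAIN as above, then put it back:
SET enable_seqscan TO on;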

When I enable likeplanning (in contrib) I get a totally different plan, but
overall it's slower.

Thanks!

-Mitch




[GENERAL] Linux/Postgre question.

2000-06-26 Thread Mitch Vincent

In order to take advantage of some new hardware I had to use Linux over
FreeBSD.. I used to run my backend with the options -B 4096 -o '-S 16384', but
I remember I had to modify the FreeBSD kernel to allow processes to use that
much shared memory... How would I go about doing that in Linux? I've been
out of the Linux loop for some time (obviously) -- Thanks!

-Mitch