[GENERAL] Indexing arrays

2000-12-06 Thread Stefano Bargioni

I'd like to index an array. This is my table:
template1=> \d test_array
Table    = test_array
+----------------+--------+--------+
|     Field      |  Type  | Length |
+----------------+--------+--------+
| name           | text   |    var |
| pay_by_quarter | int4[] |    var |
| schedule       | text[] |    var |
+----------------+--------+--------+

How can I create an index for schedule? I need to index every occurrence of this array.
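For reference, an index over a single array slot can be expressed like this (a sketch only; whether expression indexes over array elements are accepted depends on the PostgreSQL version, and it still covers just that one slot, not every occurrence):

```sql
-- An index on only the first slot of the array (index name made up):
CREATE INDEX test_array_sched1_idx ON test_array ((schedule[1]));
```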
Thanks. Stefano
--
Dott. Stefano Bargioni
Biblioteca della Pontificia Universita' della Santa Croce - Roma
mailto:[EMAIL PROTECTED]  http://www.usc.urbe.it
Personal web page:http://www.usc.urbe.it/html/php.script?bargioni
--- "Si apud bibliothecam hortulum habes, nihil deerit"  (Cicerone) ---





RE: [GENERAL] Clarification

2000-12-06 Thread Roger Wernersson

You could of course try Java and JDBC as Java runs everywhere. Then you
could run your application on Mac, Windows and Linux as you see fit.

/Roger

-Original Message-
From: Alain Toussaint [mailto:[EMAIL PROTECTED]]
Sent: den 6 december 2000 08:10
To: KuroiNeko
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Clarification


> > that too is an option but it would hurt me to have to insert 20K
> > records or something like that with a web browser.
>
> Hmmm... How is the ODBC driver supposed to help with that? Upload a CSV
> or FF file and run bulk insert with COPY.

with ODBC:

the user loads the database client app (a spreadsheet, or a small
database like Access or FileMaker) and dumps its content onto the fat
server.

without ODBC:

the user hands the CSV file off to the DBA.

> > maybe because it is/was a deal between Apple and HK to bundle some
> > nice-sounding speakers with their computer, and Apple got a very good
> > price due to volume.
>
> If bundled actually means built-in, then the only thing from H/K there
> is the name :)

no idea at the moment, i don't think it's built in (there's no place to
put the speakers in the iMac) but i'll confirm that tomorrow.

Alain



Re: [GENERAL] Overhead of tables.

2000-12-06 Thread GH

On Tue, Dec 05, 2000 at 09:34:19PM -0800, some SMTP stream spewed forth: 
> At 11:48 PM 12/5/2000 -0500, you wrote:
> > Soma Interesting <[EMAIL PROTECTED]> writes:
> > > I'd like to get an idea of the overhead introduced by large quantity of
> > > tables being hosted off a single PostgreSQL server. It is possible I'll be
> > > wanting to host upwards of 200-500 tables per server. Essentially, will I
> > > be surprised to find out that performance in PostgreSQL (or DBMS in
> > > general) is significantly hindered by sheer quantity of tables?
> >
> > When you get to tens of thousands of tables per server, we might start
> > to worry a little... 500 is in the "what me worry?" class.
> >
> > 			regards, tom lane
>
> That is what I'd expect - but I've not experienced it to really know first
> hand. Thanks for the input.
>
> If a couple more people would just say the same thing - I could rest easy
> about moving forward on this. :)

Well, based on my experience at this point, Tom Lane's
comments/suggestions are worth those of several people. Consider it as if
I have said the same as he. ;-)

gh

 



Re: [GENERAL] MacIntosh

2000-12-06 Thread Adam Lang

I have a friend that networked his iMac and MacTV to his Linux file server.
There is a program out there called Netatalk (?).  Basically it is the
equivalent of Samba for AppleTalk.

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company
http://www.rutgersinsurance.com
- Original Message -
From: "Alain Toussaint" [EMAIL PROTECTED]
To: "ashley" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Tuesday, December 05, 2000 10:36 PM
Subject: Re: [GENERAL] MacIntosh


> > I need to run a Linux Server with MacIntosh clients.
> >
> > Is this possible?
>
> yes (that is, as long as your Mac understands the Internet protocol)
> ... but what kind of server??
>
> Alain




[GENERAL] syntax of outer join in 7.1devel

2000-12-06 Thread Poul L. Christiansen

Hi

I just grabbed the latest development version from the CVS, because I
need the outer join functionality.

But how do I make an outer join? 

What's the syntax?

I will RTFM if someone points me to the docs :-)

Poul L. Christiansen

 




Re: [GENERAL] syntax of outer join in 7.1devel

2000-12-06 Thread Tom Lane

"Poul L. Christiansen" <[EMAIL PROTECTED]> writes:
> But how do I make an outer join?
> What's the syntax?
> I will RTFM if someone points me to the docs :-)

I'm afraid I haven't updated the FM yet :-( ... but you could look at
the examples in the JOIN regress test, src/test/regress/sql/join.sql.

Or, from the SQL92 spec, here's the grammar:

 <from clause> ::= FROM <table reference> [ { <comma> <table reference> }... ]

 <table reference> ::=
    <table name> [ [ AS ] <correlation name>
        [ <left paren> <derived column list> <right paren> ] ]
  | <derived table> [ AS ] <correlation name>
        [ <left paren> <derived column list> <right paren> ]
  | <joined table>

 <derived table> ::= <table subquery>

 <derived column list> ::= <column name list>

 <column name list> ::=
    <column name> [ { <comma> <column name> }... ]

 <joined table> ::=
    <cross join>
  | <qualified join>
  | <left paren> <joined table> <right paren>

 <cross join> ::=
    <table reference> CROSS JOIN <table reference>

 <qualified join> ::=
    <table reference> [ NATURAL ] [ <join type> ] JOIN
        <table reference> [ <join specification> ]

 <join specification> ::=
    <join condition>
  | <named columns join>

 <join condition> ::= ON <search condition>

 <named columns join> ::=
    USING <left paren> <join column list> <right paren>

 <join type> ::=
    INNER
  | <outer join type> [ OUTER ]
  | UNION

 <outer join type> ::=
    LEFT
  | RIGHT
  | FULL

 <join column list> ::= <column name list>

We don't do UNION JOIN yet, but I think everything else shown in this
snippet is implemented ...
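For instance, a left outer join written against that grammar looks like this (table and column names here are hypothetical):

```sql
-- LEFT OUTER JOIN keeps every row of t1, with NULLs filled in
-- where t2 has no matching row:
SELECT t1.id, t1.name, t2.val
  FROM t1 LEFT OUTER JOIN t2 ON t1.id = t2.id;

-- Equivalent form using USING, since the join column names match:
SELECT id, t1.name, t2.val
  FROM t1 LEFT JOIN t2 USING (id);
```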

regards, tom lane



Re: [GENERAL] Clarification

2000-12-06 Thread Tom Lane

ashley <[EMAIL PROTECTED]> writes:
> I wish to run postgresql on a Linux (Slackware) server over a LAN.
> The client must reside on Apple MacIntosh (12 of them)

What are you planning to write the client in?

If you haven't decided yet, one possibility is Tcl.  It runs fine on
Macs.  I'm not sure whether our libpgtcl interface would port easily
to Mac --- libpgtcl itself probably would, but it depends on libpq
which uses Unix-isms like select().  (Has anyone tried that?)  However,
somewhere out there is a Tcl Postgres client library written entirely
in Tcl, and it would surely drop in and run on a Mac.  I don't have a
URL at hand for that library, but I know it exists --- try checking our
mail list archives from a year or two back to see what you can turn up.

regards, tom lane



[GENERAL] Design and development info

2000-12-06 Thread Jeff Meeks

Hi,
I have been looking for some resources for database design and
development. Not specifically for Postgres, but more generalized
information about the best ways to design DBs and the pitfalls that are
easy to fall into.
I am a competent programmer, and I have worked with databases in the
past, but I have no formal training and not much casual experience with
them.

Thanks
Jeff Meeks
[EMAIL PROTECTED]



[GENERAL] MySQL-esque sec_to_time() function

2000-12-06 Thread George Johnson



Hello,

I'm converting from MySQL to PostgreSQL (actually flipping/flopping
back) and have a question:

MySQL has a cool function sec_to_time() which converts your number of
seconds to hh:mm:ss

I've read thru the mailing lists and am basically trying to implement
the following:

MySQL:
select sec_to_time(sum(unix_timestamp(enddate) -
unix_timestamp(startdate))) from foo;

PostgreSQL:
select XXX(sum(date_part('epoch',enddate) -
date_part('epoch',startdate))) from foo;

I just need to know what XXX is/can be. I've tried a lot of
combinations of the documented functions and come up with useless
conversions.

Thanks,
George Johnson
[EMAIL PROTECTED]

PS: i can't find documentation on how to load the functions in contrib.
I can compile and install them O.K., but not sure how to make them load.


Re: [GENERAL] Many postmasters...

2000-12-06 Thread Lamar Owen

Jean-Christophe Boggio wrote:
> Using Linux RH7.0 with correct gcc and glibc, PG7.03, Apache 1.3.14
> and PHP4. We have several unresolved questions :
>
> * Is it normal that
>   ps aux |grep postgres
>   shows (what we want : processes own by postgres) multiple postgres
>   backends (which seems normal to me) *AND* multiple postmaster (same
>   full cmd line).
>   Sometimes we also have "defunct" postgresses.

Yes, this would be normal.  Due to the forking nature of the backend,
ps will show, depending upon traffic, the actual postmaster fork before
the backend (postgres) is exec'd.  I don't see that here due to my use
of a pooling webserver, but non-pooled situations will have backends
bouncing up and down constantly.  The defunct postgres processes are
the ones that are going away but haven't yet been removed from the
process table, IIRC.
 
> * we start postgres with a /etc/rc.d/init.d script that launches
>   pg_ctl -w many options here start
>   When invoked from the shell, this command never returns to the shell
>   by itself, we have to press enter. This behaviour prevents the
>   script from terminating properly. Is there a way around this ?
>   Not tried echo | pg_ctl yet

The init.d script has an "&" after the pg_ctl line.  If it didn't return,
your system would never finish booting, due to the sequential nature of
the RedHat 7 SysV init setup.  Now, pg_ctl is kept running; it just
doesn't block the initscript.
 
> * every backend created by an Apache session opens many files (in our
>   case, about 80 including the indexes) and many backends will finally
>   generate a "Too many files open" message. We first increased the
>   /proc/sys/fs/file-max to 8192 but that's a lot !
>
>   The apache/php server always uses the same connect parameters for
>   every page but it seems php's pg_pconnect() behaves just like
>   pg_connect. Shouldn't we have apache hold a few backends connected ?

Thanks to the non-pooled connection scheme of Apache/PHP, the way the
persistent pconnect mechanism works is non-obvious.  Each apache
_process_ can hold a configured number of connections open -- but that
is then multiplied by the number of apache _processes_.

So, to run persistent connections in a usable manner on Apache/PHP
requires a huge number of backends, requiring an even larger number of
open files.  File-max at 8192 is probably middle of the road for such a
system.

Too bad PHP can't use AOLserver's pooled connections -- that would be a
big win. PHP can run on AOLserver -- it just doesn't yet use the pooled
API.

Apache 2.0's multithreaded nature will help this -- unless a mechanism
can be devised to share database connections amongst multiple full
processes for older Apaches.

Multithreading is a big win for clients that generate multiple
connections -- it's not a big win for backends that serve multiple
connections.  IMHO.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: [GENERAL] How To Log User Name + Database?

2000-12-06 Thread Tom Lane

Ed Loehr <[EMAIL PROTECTED]> writes:
> Can anyone tell me how to log the db user name and db name to the server
> log?  Is this possible at present?

Offhand I don't think that happens at the moment, but it does seem like
it'd be a good idea to have a backend do an elog(DEBUG) at startup that
mentions its database name and user name.  Assuming you have logging of
timestamps/PIDs turned on, that would be enough to correlate later log
messages with a user name.

Comments anyone?

regards, tom lane



Re: [GENERAL] How To Log User Name + Database?

2000-12-06 Thread Larry Rosenman

* Tom Lane <[EMAIL PROTECTED]> [001206 11:25]:
> Ed Loehr <[EMAIL PROTECTED]> writes:
> > Can anyone tell me how to log the db user name and db name to the server
> > log?  Is this possible at present?
>
> Offhand I don't think that happens at the moment, but it does seem like
> it'd be a good idea to have a backend do an elog(DEBUG) at startup that
> mentions its database name and user name.  Assuming you have logging of
> timestamps/PIDs turned on, that would be enough to correlate later log
> messages with a user name.
in 7.1, I use:
log_connections = on
fsync = off
syslog_facility = LOCAL5
syslog_ident = pg-test
syslog=2
show_source_port = on

Which produces:

Dec  4 04:57:09 lerami pg-test[27458]: [1] DEBUG:  connection: host=[local] user=ler 
database=regression
Dec  4 04:57:11 lerami pg-test[27462]: [1] DEBUG:  connection: host=[local] user=ler 
database=regression

in the syslog. 

So, I think it's there already.


 
> Comments anyone?
>
> 	regards, tom lane
-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



[GENERAL] localization design for DB structures?

2000-12-06 Thread Isaac

Hello,

I asked this a while back but didn't get a reply, so I'm trying again
(hopefully explaining my question better, too).

I'm setting up a basic "document manager" system that stores various short
text documents which are flagged for display on certain sections of our
website. That's the easy part, which I've already implemented in
FileMaker/Lasso (which I can't wait to get away from, now that I've been
using Postgres and PHP).

We also want to have translations into other languages available for these
documents (although not all of them will be translated).  Have others set up
a structure like this before? I'm looking for some guidance on how to set up
the tables so that the document only shows up once in a list of all
documents (or searches), but the user gets the title and content based on
the language they're using (and also when viewing the document in "your"
language, you can see which other translations are available).

My first guess is to have one table with all the basic (meta) info about the
document (category, title in english, id#, etc), and then use another table
for the actual text content and localized versions of the info. This 2nd
table has columns for the language, the localized title, and the content.
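The two-table layout I'm imagining would look something like this (a rough sketch, all names made up):

```sql
-- One row per document: language-independent metadata.
CREATE TABLE document (
    doc_id    serial PRIMARY KEY,
    category  text,
    title_en  text       -- default (English) title, used as a fallback
);

-- One row per (document, language): localized title and content.
CREATE TABLE document_text (
    doc_id    int4 REFERENCES document,
    lang      char(2),   -- e.g. 'en', 'it'
    title     text,
    content   text,
    PRIMARY KEY (doc_id, lang)
);
```

Listing and searching would go against document, so each document appears once; a join against document_text on the user's language (falling back to title_en when no row matches) would supply the localized title and body.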

Anyway, I'm kinda flying in the dark here, and yet I know whole operating
systems have been localized so it can't be too hard to do it for a
relatively short list of categorized text documents (200 or so). This is
probably a wheel that doesn't need re-inventing! Can anyone point me to
where to RTFM or find examples (or books, etc.)?

Thanks.
--i





RE: [GENERAL] MySQL-esque sec_to_time() function

2000-12-06 Thread Francis Solomon

Hi George,

Difference of two timestamps directly :

dbtest=# select 'now'::timestamp - '2000-12-06 13:47:57+00'::timestamp
as "Time Interval";
 Time Interval
---
 02:49:34
(1 row)

Number of seconds converted to hh:mm:ss :

dbtest=# select '12345 seconds'::interval as "Time Interval";
 Time Interval
---
 03:25:45
(1 row)
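Putting the two together for your original query (an untested sketch, reusing the column names from your example):

```sql
-- Sum the per-row durations in seconds, then cast the total to an
-- interval to get hh:mm:ss output:
SELECT (sum(date_part('epoch', enddate) -
            date_part('epoch', startdate)) || ' seconds')::interval
       AS "Time Interval"
  FROM foo;
```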

Hope this helps

Francis Solomon

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of George Johnson
Sent: 06 December 2000 16:28
To: [EMAIL PROTECTED]
Subject: [GENERAL] MySQL-esque sec_to_time() function


Hello,

I'm converting from MySQL to PostgreSQL (actually flipping/flopping
back) and have a question:


MySQL has a cool function sec_to_time() which converts your number of
seconds to hh:mm:ss

I've read thru the mailing lists and am basically trying to implement
the following:


MySQL:
select sec_to_time(sum(unix_timestamp(enddate) -
unix_timestamp(startdate))) from foo;

PostgreSQL:
select XXX(sum(date_part('epoch',enddate) -
date_part('epoch',startdate))) from foo;

I just need to know what XXX is/can be.  I've tried a lot of
combinations of the documented functions and come up with useless
conversions.

Thanks,
George Johnson
[EMAIL PROTECTED]

PS:  i can't find documentation on how to load the functions in contrib.
I can compile and
   install them O.K., but not sure how to make them load.




[GENERAL] apache .htaccess / postgresql module

2000-12-06 Thread George Johnson



Hello,

I had a question regarding using PostgreSQL as the authorization
database for Apache. Previously I was using the Apache authentication
module for MySQL v2.20 (mod_auth_mysql) and was wondering if there was
an equivalent module for use with PostgreSQL?

I must add I'm really impressed with the responsiveness of the 7.0.3
PostgreSQL under decent load from my java servers. Replacing my "left
joins" with subselects and keeping the temporary tables (you have to
use them in mysql in a lot of cases), I see quite a number of
performance increases or about-the-same query times. I'm not a
benchmarking type of guy, just have a feel for "seems slower, seems
faster, seems-about-the-same". Very cool. Plus, the original reason I
switched (something very very strange with multiple threads pounding
the JDBC driver and/or transient temporary table collisions corrupting
query constructs and updates) seems all but last week's headache
(fingers crossed).

Thanks,
George
[EMAIL PROTECTED]