[ADMIN] PK and FK relationship

2003-09-11 Thread shyamperi
Can anyone tell me when a primary key would be referred to as a foreign key in
its own table.

-
Warm Regards
Shyam Peri

II Floor, Punja Building,
M.G.Road,
Ballalbagh,
Mangalore-575003 
Ph : 91-824-2451001/5
Fax : 91-824-2451050 


DISCLAIMER: This message contains privileged and confidential information and is
intended only for the individual named. If you are not the intended
recipient you should not disseminate, distribute, store, print, copy or
deliver this message. Please notify the sender immediately by e-mail if
you have received this e-mail by mistake and delete this e-mail from
your system.
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [ADMIN] PK and FK relationship

2003-09-11 Thread Peter Childs
On Thu, 11 Sep 2003 [EMAIL PROTECTED] wrote:

> Can anyone tell me when a primary key would be referred to as a foreign key in
> its own table.
> 
Simple trees.

If you need a parent-child relationship, the simplest way of
expressing it is to have a parent column and store the parent's key in it.
If it is NULL, the row must be the root.
It's really quite good for finding the parents and children of a
record. (Children are "SELECT * FROM table WHERE parent = <id>;"; the parent
is "SELECT * FROM table WHERE id = <parent-id>;".)
It could also be used for storing a linked list or doubly linked
list.
Find any standard algorithms book and try to implement the lot
in a database. (It's not easy, as SQL lacks features that even C, C++
or VB has!)
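A minimal sketch of the self-referencing foreign key described above (table
and column names are illustrative):

```sql
-- Adjacency-list tree: each row stores its parent's key in the same table.
CREATE TABLE node (
    id     integer PRIMARY KEY,
    parent integer REFERENCES node (id),  -- self-referencing FK; NULL = root
    name   text
);

-- Children of node 1:
SELECT * FROM node WHERE parent = 1;

-- Parent of node 42:
SELECT p.*
FROM node p
JOIN node c ON c.parent = p.id
WHERE c.id = 42;
```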

Peter Childs




[ADMIN] Views

2003-09-11 Thread Donald Fraser



PostgreSQL version 7.3.4
 
Quick question: with respect to a VIEW, why does PostgreSQL internally
return all of the columns when, for example, the SELECT statement only
requests, say, 2 out of 10 columns?
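For concreteness, a sketch of the situation being asked about (table, view,
and column names are made up):

```sql
CREATE TABLE wide_table (c1 int, c2 int, c3 int /* ... through c10 */);
CREATE VIEW v AS SELECT * FROM wide_table;

-- Only two of the view's ten columns are requested here:
SELECT c1, c2 FROM v;
```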
 
Thanks in advance,
Donald Fraser.


[ADMIN] `__builtin_va_alist' undeclared

2003-09-11 Thread Kalyan (FS)




Hi, please help me. I am installing postgresql-7.3.4 on Solaris SPARC 2.8,
but with make it is giving me the following error. I am stuck now.
 
Output:
make -C nbtree SUBSYS.o
make[4]: Entering directory `/opt/temp/postgresql-7.3.4/src/backend/access/nbtree'
make[4]: `SUBSYS.o' is up to date.
make[4]: Leaving directory `/opt/temp/postgresql-7.3.4/src/backend/access/nbtree'
make -C rtree SUBSYS.o
make[4]: Entering directory `/opt/temp/postgresql-7.3.4/src/backend/access/rtree'
make[4]: `SUBSYS.o' is up to date.
make[4]: Leaving directory `/opt/temp/postgresql-7.3.4/src/backend/access/rtree'
make -C transam SUBSYS.o
make[4]: Entering directory `/opt/temp/postgresql-7.3.4/src/backend/access/transam'
make[4]: `SUBSYS.o' is up to date.
make[4]: Leaving directory `/opt/temp/postgresql-7.3.4/src/backend/access/transam'
make[3]: Leaving directory `/opt/temp/postgresql-7.3.4/src/backend/access'
make -C bootstrap all
make[3]: Entering directory `/opt/temp/postgresql-7.3.4/src/backend/bootstrap'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/opt/temp/postgresql-7.3.4/src/backend/bootstrap'
make -C catalog all
make[3]: Entering directory `/opt/temp/postgresql-7.3.4/src/backend/catalog'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/opt/temp/postgresql-7.3.4/src/backend/catalog'
make -C parser all
make[3]: Entering directory `/opt/temp/postgresql-7.3.4/src/backend/parser'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/opt/temp/postgresql-7.3.4/src/backend/parser'
make -C commands all
make[3]: Entering directory `/opt/temp/postgresql-7.3.4/src/backend/commands'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/opt/temp/postgresql-7.3.4/src/backend/commands'
make -C executor all
make[3]: Entering directory `/opt/temp/postgresql-7.3.4/src/backend/executor'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/opt/temp/postgresql-7.3.4/src/backend/executor'
make -C lib all
make[3]: Entering directory `/opt/temp/postgresql-7.3.4/src/backend/lib'
gcc -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include -I/usr/local/include -I/usr/include -c -o stringinfo.o stringinfo.c
stringinfo.c: In function `appendStringInfo':
stringinfo.c:126: `__builtin_va_alist' undeclared (first use in this function)
stringinfo.c:126: (Each undeclared identifier is reported only once
stringinfo.c:126: for each function it appears in.)
make[3]: *** [stringinfo.o] Error 1
make[3]: Leaving directory `/opt/temp/postgresql-7.3.4/src/backend/lib'
make[2]: *** [lib-recursive] Error 2
make[2]: Leaving directory `/opt/temp/postgresql-7.3.4/src/backend'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/opt/temp/postgresql-7.3.4/src'
make: *** [all] Error 2
 
 
 
Thanks in advance,

Kalyan.



[ADMIN] Pam Ldap (debian) problem

2003-09-11 Thread Wim Bertels
Hi,

I don't get Postgres working with PAM to use an external LDAP server to
authenticate users.
I think the problem is that Postgres is using the wrong PAM file,
namely it uses pam_unix where it should use pam_ldap.so.

postgres version 7.3.2
debian version 3 (kernel 2.4)

any help is much appreciated

I've included the following files:
/etc/hosts
/etc/pam.conf
/etc/pam_ldap.conf
/etc/pam.d/postgresql
/etc/postgresql/pg_hba.conf
/etc/postgresql/postgresql.conf
/etc/postgresql/postmaster.conf
/var/log/messages
/var/log/postgres.log (is empty)
/var/log/auth.log

in short the logs:

Sep 11 12:48:49 damian postgres[1892]: [1] LOG:  connection received: host=10.18.10.47 
port=32882
Sep 11 12:48:53 damian postgres[1893]: [1] LOG:  connection received: host=10.18.10.47 
port=32883

and

Sep 11 12:48:49 damian PAM_unix[1892]: auth could not identify password for [wibrt]
Sep 11 12:48:49 damian 18.10.47 authentication[1892]: [2] LOG:  CheckPAMAuth: 
pam_authenticate failed: 'Conversation error'
Sep 11 12:48:49 damian 18.10.47 authentication[1892]: [3] FATAL:  PAM authentication 
failed for user "wibrt"
Sep 11 12:48:53 damian PAM_unix[1893]: authentication failure; (uid=31) -> wibrt for 
sameuser service
Sep 11 12:48:55 damian 18.10.47 authentication[1893]: [2] LOG:  CheckPAMAuth: 
pam_authenticate failed: 'Authentication failure'
Sep 11 12:48:55 damian 18.10.47 authentication[1893]: [3] FATAL:  PAM authentication 
failed for user "wibrt"

Thanks,
Wim






[Attachments: auth.log, hosts, messages, nsswitch.conf, pam.conf,
pam_ldap.conf, pg_hba.conf, postgresql, postmaster.conf]



Re: [ADMIN] System catalog for triggers

2003-09-11 Thread Tom Lane
"A.Bhuvaneswaran" <[EMAIL PROTECTED]> writes:
> Where does postgresql store tg_op and tg_when details for a trigger? If I
> am right, it is not in the pg_trigger catalog.

No, you're not right.  It's encoded in tgtype somehow; you'll have to
look at the trigger code for details (see src/include/catalog/pg_trigger.h
in particular).
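For illustration, a sketch of decoding tgtype using the bit positions from
pg_trigger.h (the bit values shown are assumptions; verify them against your
server version before relying on this):

```sql
-- TRIGGER_TYPE_ROW = 1, BEFORE = 2, INSERT = 4, DELETE = 8, UPDATE = 16
SELECT tgname,
       (tgtype::integer & 1)  <> 0 AS for_each_row,
       (tgtype::integer & 2)  <> 0 AS fires_before,
       (tgtype::integer & 4)  <> 0 AS on_insert,
       (tgtype::integer & 8)  <> 0 AS on_delete,
       (tgtype::integer & 16) <> 0 AS on_update
FROM pg_trigger;
```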

regards, tom lane



Re: [ADMIN] `__builtin_va_alist' undeclared

2003-09-11 Thread Tom Lane
"Kalyan (FS)" <[EMAIL PROTECTED]> writes:
> I am installing postgresql-7.3.4 on solaris sparc 2.8.

> gcc -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../src/include
> -I/usr/local/include -I/usr/include -c -o stringinfo.o stringinfo.c
> stringinfo.c: In function `appendStringInfo':
> stringinfo.c:126: `__builtin_va_alist' undeclared (first use in this
> function)

It seems not to be picking up the gcc-specific version of .

I don't know how you configured PG, but if I were you I'd get rid of
the explicit -I/usr/include --- that is unnecessary, and it is very
likely causing the Solaris-standard versions of include files to be
picked up instead of the gcc-specific ones.  For that matter, I rather
doubt that you need -I/usr/local/include ... every gcc installation
I've ever dealt with will search that by default, too.

regards, tom lane



[ADMIN] Pam Ldap (debian) solved

2003-09-11 Thread Wim Bertels
As noticed, it used pam_unix, as for a local connection;
I just had to comment out the 127.0.0.1 line in pam_ldap.conf to get it working.
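Roughly what that change looks like in /etc/pam_ldap.conf (the server name and
base DN are illustrative; exact directives depend on your libpam-ldap version):

```
# pam_ldap must point at the external LDAP server, not the local host:
#host 127.0.0.1
host ldap.example.com
base dc=example,dc=com
```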





Re: [ADMIN] `__builtin_va_alist' undeclared

2003-09-11 Thread Kalyan (FS)
Hi Tom,
It works! Now I have to install.
Thank you very much. Have a great day!

Kalyan.


Here is the final output:
lib libplpgsql.a
gcc -shared -h libplpgsql.so.1 pl_gram.o pl_handler.o pl_comp.o pl_exec.o
pl_funcs.o -L../../../../src/port -L/usr/local/lib -o libplpgsql.so.1.0
rm -f libplpgsql.so.1
ln -s libplpgsql.so.1.0 libplpgsql.so.1
rm -f libplpgsql.so
ln -s libplpgsql.so.1.0 libplpgsql.so
make[4]: Leaving directory `/opt/temp/postgresql-7.3.4/src/pl/plpgsql/src'
make[3]: Leaving directory `/opt/temp/postgresql-7.3.4/src/pl/plpgsql'
make[2]: Leaving directory `/opt/temp/postgresql-7.3.4/src/pl'
make[1]: Leaving directory `/opt/temp/postgresql-7.3.4/src'
All of PostgreSQL successfully made. Ready to install.





Re: [ADMIN] postgresql and replication

2003-09-11 Thread Andrew Sullivan
On Wed, Sep 03, 2003 at 12:25:09PM +0200, [EMAIL PROTECTED] wrote:
> I am unable to find the address to unsubscribe, could you please help
> me and give me that address... thanks in advance!!


http://gborg.postgresql.org/mailman/listinfo/erserver-general

(Sorry, I've been buried lately and I'm behind on my list responses.)

A
 
-- 

Andrew Sullivan 204-4141 Yonge Street
Liberty RMS   Toronto, Ontario Canada
<[EMAIL PROTECTED]>  M2P 2A8
 +1 416 646 3304 x110




[ADMIN] Does VACUUM ever free up any disk space?

2003-09-11 Thread Chris Miles
I've read a lot where people recommend using VACUUM FULL
to free up disk space, especially after many updates/inserts.
But does a regular VACUUM (or VACUUM ANALYSE) ever free up
any space?
24/7 production databases cannot be locked for long periods
of time to run VACUUM FULL, but I do not want data files
growing indefinitely (any more than they need to), so I hope
the routine VACUUM ANALYSE will take care of this.
My pg is version 7.2.x but the question can apply to newer
versions as well if the functionality has changed.
Regards,
Chris.
--
Chris Miles
http://chrismiles.info/


Re: [ADMIN] Does VACUUM ever free up any disk space?

2003-09-11 Thread Bruno Wolff III
On Thu, Sep 11, 2003 at 18:42:25 +0100,
  Chris Miles <[EMAIL PROTECTED]> wrote:
> I've read a lot where people recommend using VACUUM FULL
> to free up disk space, especially after many updates/inserts.
> 
> But does a regular VACUUM (or VACUUM ANALYSE) ever free up
> any space?
> 
> 24/7 production databases cannot be locked for long periods
> of time to run VACUUM FULL, but I do not want data files
> growing indefinitely (any more than they need to) so I hope
> the routine VACUUM ANALYSE will take care of this.

Vacuum full actually shrinks the file sizes. A normal vacuum just marks
free areas in the files so that they can be reused. If you regularly
do a normal vacuum your database should have a steady state size with
some of the disk space taken up by free space. If something unusual
happens and the database grows much bigger than it needs to be, you
can then use vacuum full to reclaim space. If your FSM setting is too
low, your normal vacuums won't mark all of the free space and this
can result in continual growth of the database files.

You also have to worry about index growth. In versions prior to 7.4
btree indexes would not reuse space when the index values were monotonically
increasing. This isn't a problem for everyone, but it is for some. In 7.4
things work much better.
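As a sketch of the routine described above (the FSM values shown are
illustrative placeholders, not recommendations; tune them to your database):

```sql
-- Run regularly (e.g. nightly from cron) so dead tuples are marked reusable:
VACUUM ANALYZE;

-- In postgresql.conf the free space map must be large enough to remember
-- all reusable pages, otherwise the files keep growing:
--   max_fsm_relations = 1000
--   max_fsm_pages     = 1000000

-- Occasional recovery when a table has grown far beyond its steady state
-- (takes an exclusive lock on each table while it runs):
VACUUM FULL;
```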



Re: [ADMIN] Does VACUUM ever free up any disk space?

2003-09-11 Thread scott.marlowe
On Thu, 11 Sep 2003, Chris Miles wrote:

> I've read a lot where people recommend using VACUUM FULL
> to free up disk space, especially after many updates/inserts.
> 
> But does a regular VACUUM (or VACUUM ANALYSE) ever free up
> any space?
> 
> 24/7 production databases cannot be locked for long periods
> of time to run VACUUM FULL, but I do not want data files
> growing indefinitely (any more than they need to) so I hope
> the routine VACUUM ANALYSE will take care of this.
> 
> My pg is version 7.2.x but the question can apply to newer
> versions as well if the functionality has changed.

A regular vacuum since 7.2 never actually frees up space; it simply marks
the space in the file as available for reuse.  Note that in many
circumstances this is actually better than freeing up the space, as it
allows the database to store data without having to extend and possibly
fragment the table.




Re: [ADMIN] How to create schema-specific users?

2003-09-11 Thread Michał Małecki
On 03-09-09 16:22, Tom Gordon wrote:
> I'm missing something.  How do I create a user that is just for a
> specific schema, and not a global user?  I see in the docs how to create
> schemas and users, but not a schema-specific user.

Users, as well as roles (aka groups), are non-schema objects (they can own
other objects and may be granted rights to other objects). What
particular functionality would you like to achieve using a user being
part of some other user's schema (except the ability to have multiple
separate namespaces for usernames)?
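The closest approximation is a global user whose privileges are confined to
one schema, e.g. (user and schema names are made up):

```sql
CREATE USER bob;
CREATE SCHEMA bob_area AUTHORIZATION bob;    -- bob owns and can use this schema
REVOKE CREATE ON SCHEMA public FROM PUBLIC;  -- keep ordinary users out of the shared schema
```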

--
 ___
|  || Michal Malecki, system analyst
|  ||
|  || e.media Studio, Gliwice, PL
|e.media|
|__|| [EMAIL PROTECTED]


[ADMIN] unsubscribe

2003-09-11 Thread Peralta Miguel-MPERALT1




Re: [ADMIN] Does VACUUM ever free up any disk space?

2003-09-11 Thread Tom Lane
"scott.marlowe" <[EMAIL PROTECTED]> writes:
> On Thu, 11 Sep 2003, Chris Miles wrote:
>> I've read a lot where people recommend using VACUUM FULL
>> to free up disk space, especially after many updates/inserts.

> A regular vacuum since 7.2 never actually frees up space, it simply marks 
> the space in the file as available for reuse.  Note that in many 
> circumstances this is actually better than freeing up the space, as it 
> allows the database to store data without having to extend and possibly 
> fragment the table.

Regular vacuum *will* shorten the table's file if (a) there are some
completely-empty pages at the end, and (b) it can get an exclusive lock
on the table without blocking.  This might be a relatively rare
condition in a heavily-used table.  But "never actually frees up space"
is incorrect.

You're correct that regular vacuum is designed around the idea of
maintaining a steady-state file size rather than trying very hard to
give space back to the OS.

regards, tom lane



[ADMIN] Cross-DB linking?

2003-09-11 Thread Andrew Biagioni
I am thinking of separating my data into various DBs (maybe on the same server, 
probably not) -- mostly for performance/stability/backup reasons -- but I have 
a considerable amount of foreign keys, views, and queries that would need to 
work across DBs if I were to split things the way I want to.

Is it possible to have foreign keys / views / queries work across database 
boundaries?  On the same server / on separate servers?  If so, how?

For example, I have:
 - a table, A, with > 200 K rows which never changes;  
 - another table, B with < 10 K rows which changes frequently;  
 - and a third table, C, which joins A and B, i.e. has foreign keys into A and 
B, and changes rarely

I would like to have A in one DB, dbA (possibly its own server);  B in another 
DB, dbB (possibly its own server);  and C either with A or with B (this one is 
not an issue per se).

What I'm looking to gain is:
 - dbA would be backed up/replicated religiously, and possibly on a server 
optimized for frequent writes
 - dbB would NEVER be backed up, possibly on a server optimized for caching
 - each database's schema would be simpler and easier to manage
 - as the number of records and users grow, be able to distribute the 
computing/storage/memory load among various machines rather than have to 
upgrade the hardware

Thanks in advance!

Andrew






Re: [ADMIN] Cross-DB linking?

2003-09-11 Thread scott.marlowe
You might want to consider using schemas to accomplish some of this.

You can backup individual schemas as of 7.4 (maybe 7.3, but I've not used 
it in production, waiting for 7.4 to upgrade from 7.2)

Performance will almost certainly suffer if you are doing cross-db work,
so schemas help there.

I've never had any stability issues with Postgresql, and certainly not 
from having everything in one database.

Other than the ability to spread your load across multiple machines, 
7.3/7.4 and schemas should address all your concerns.

And no, you can't fk across databases.  You can get some primitive (but 
quite functional) cross database action with the contrib/dblink package.
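A minimal dblink sketch (the connection string and column list are
illustrative; contrib/dblink must be installed in the calling database):

```sql
-- Pull rows from table "a" in the remote database dbA:
SELECT *
FROM dblink('dbname=dbA host=serverA',
            'SELECT id, name FROM a')
     AS t(id integer, name text);
```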

On Thu, 11 Sep 2003, Andrew Biagioni wrote:

> I am thinking of separating my data into various DBs (maybe on the same server, 
> probably not) -- mostly for performance/stability/backup reasons -- but I have 
> a considerable amount of foreign keys, views, and queries that would need to 
> work across DBs if I were to split things the way I want to.
> 
> Is it possible to have foreign keys / views / queries work across database 
> boundaries?  On the same server / on separate servers?  If so, how?
> 
> For example, I have:
>  - a table, A, with > 200 K rows which never changes;  
>  - another table, B with < 10 K rows which changes frequently;  
>  - and a third table, C, which joins A and B, i.e. has foreign keys into A and 
> B, and changes rarely
> 
> I would like to have A in one DB, dbA (possibly its own server);  B in another 
> DB, dbB (possibly its own server);  and C either with A or with B (this one is 
> not an issue per se).
> 
> What I'm looking to gain is:
>  - dbA would be backed up/replicated religiously, and possibly on a server 
> optimized for frequent writes
>  - dbB would NEVER be backed up, possibly on a server optimized for cacheing
>  - each database's schema would be simpler and easier to manage
>  - as the number of records and users grow, be able to distribute the 
> computing/storage/memory load among various machines rather than have to 
> upgrade the hardware
> 
> Thanks in advance!
> 
>   Andrew
> 
> 
> 
> 




Re: [ADMIN] Report Generator Proposal

2003-09-11 Thread Gaetano Mendola
Nobody enthusiastic? Am I wrong in my feeling? I think that having
psql able to generate a report would improve our response to users,
novice or not.


Regards
Gaetano Mendola

- Original Message - 
From: ""Gaetano Mendola"" <[EMAIL PROTECTED]>
Newsgroups: comp.databases.postgresql.admin
Sent: Wednesday, September 10, 2003 4:10 AM
Subject: Report Generator Proposal


> Hi all,
> each time that someone says: this query is slow, this query takes forever,
> bla bla bla
> all our requests are:
>
> 1) Which version are you running ?
> 2) May I see your table definition?
> 3) May I see you configuration file?
> 4) May I see the explain ?
> 5) ...
>
> sometimes a beginner is scared about all these questions and even
> he/she don't know how do collect these informations and we lose
> important data about:  "why that damn query is too slow"?
>
>
> may be is usefull generate an automatic report that take charge
> of collect all these informations and send it to a file or directly to
> [EMAIL PROTECTED] , may be something like:
>
>
> > BEGIN REPORT TO FILE 
> > SELECT ..
> > END REPORT;
>
>
> or
>
>
> > BEGIN REPORT TO EMAIL 
> > SELECT ..
> > END REPORT;
>
>
>
> Regards
> Gaetano Mendola
>
>
>
>
>




Re: [ADMIN] Duplicate key ( reindex and vacuum full logs )

2003-09-11 Thread Gaetano Mendola
No clues?

Gaetano

- Original Message - 
From: ""Gaetano Mendola"" <[EMAIL PROTECTED]>
Newsgroups: comp.databases.postgresql.admin
Sent: Tuesday, September 09, 2003 4:39 PM
Subject: Re: Duplicate key ( reindex and vacuum full logs )


> just for add informations on the problem:
> these are the logs received by the reindex and by
> vacuum full on that table ( before to delete the
> duplicated rows):
>
> = REINDEX ===
>
> Start Reindex table ua_user_data_exp at Fri Sep 5 07:12:26 CEST 2003
> ERROR:  Cannot create unique index. Table contains non-unique values
>
>
> = VACUUM FULL ==
>
> Start Vacuum table ua_user_data_exp at Fri Sep 5 07:13:11 CEST 2003
> INFO:  --Relation public.ua_user_data_exp--
> INFO:  Pages 890: Changed 119, reaped 531, Empty 0, New 0; Tup 11503: Vac
> 223, Keep/VTL 206/206, UnUsed 1674, MinLen 44, MaxLen
> 696; Re-using: Free/Avail. Space 1007004/934048; EndEmpty/Avail. Pages
> 1/421.
> CPU 0.08s/0.01u sec elapsed 0.08 sec.
> INFO:  Index ua_user_data_exp_id_user_key: Pages 52; Tuples 11505: Deleted
> 223.
> CPU 0.01s/0.01u sec elapsed 0.03 sec.
> WARNING:  Index ua_user_data_exp_id_user_key: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index ua_user_data_exp_login_key: Pages 81; Tuples 11505: Deleted 223.
> CPU 0.00s/0.00u sec elapsed 0.07 sec.
> WARNING:  Index ua_user_data_exp_login_key: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_id_provider: Pages 87; Tuples 11505: Deleted 223.
> CPU 0.00s/0.02u sec elapsed 0.26 sec.
> WARNING:  Index exp_id_provider: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_ci_login: Pages 81; Tuples 11505: Deleted 223.
> CPU 0.00s/0.01u sec elapsed 0.07 sec.
> WARNING:  Index exp_ci_login: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_country: Pages 106; Tuples 11505: Deleted 223.
> CPU 0.00s/0.01u sec elapsed 0.28 sec.
> WARNING:  Index exp_country: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_os_type: Pages 269; Tuples 11505: Deleted 223.
> CPU 0.03s/0.04u sec elapsed 0.92 sec.
> WARNING:  Index exp_os_type: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_card: Pages 96; Tuples 11505: Deleted 223.
> CPU 0.01s/0.01u sec elapsed 0.22 sec.
> WARNING:  Index exp_card: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_status: Pages 848; Tuples 11505: Deleted 223.
> CPU 0.12s/0.03u sec elapsed 4.34 sec.
> WARNING:  Index exp_status: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_email: Pages 123; Tuples 11505: Deleted 223.
> CPU 0.00s/0.02u sec elapsed 0.19 sec.
> WARNING:  Index exp_email: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_ci_email: Pages 123; Tuples 11505: Deleted 223.
> CPU 0.02s/0.01u sec elapsed 0.20 sec.
> WARNING:  Index exp_ci_email: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_lastname: Pages 79; Tuples 11505: Deleted 223.
> CPU 0.01s/0.00u sec elapsed 0.08 sec.
> WARNING:  Index exp_lastname: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_ci_lastname: Pages 79; Tuples 11505: Deleted 223.
> CPU 0.03s/0.01u sec elapsed 0.12 sec.
> WARNING:  Index exp_ci_lastname: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_orbital_ptns: Pages 670; Tuples 11505: Deleted 223.
> CPU 0.12s/0.05u sec elapsed 4.46 sec.
> WARNING:  Index exp_orbital_ptns: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_stickers: Pages 210; Tuples 11505: Deleted 223.
> CPU 0.05s/0.01u sec elapsed 0.68 sec.
> WARNING:  Index exp_stickers: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_pid: Pages 443; Tuples 11505: Deleted 223.
> CPU 0.05s/0.01u sec elapsed 2.16 sec.
> WARNING:  Index exp_pid: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_mac_address: Pages 114; Tuples 11505: Deleted 223.
> CPU 0.02s/0.02u sec elapsed 0.14 sec.
> WARNING:  Index exp_mac_address: NUMBER OF INDEX' TUPLES (11505) IS NOT THE SAME AS HEAP' (11503).
> Recreate the index.
> INFO:  Index exp_mac_address_norma

[ADMIN] OLEDB Driver

2003-09-11 Thread Thomas LeBlanc
I see a thread in ODBC that uses a DSN-less connection and {Postgresql} as 
the driver.

Where can I get this driver to use with ADO?

Also, I am not getting responses to my post on pgsql-odbc. Can you help?

Thomas LeBlanc




Re: [ADMIN] Cross-DB linking?

2003-09-11 Thread Andrew Biagioni
Thanks -- I haven't looked at schemas, I guess I will now :-).

As for stability -- I was referring to the hardware breaking down, not 
Postgresql!

Andrew


9/11/03 5:24:50 PM, "scott.marlowe" <[EMAIL PROTECTED]> wrote:

>You might want to consider using schemas to accomplish some of this.
>
>You can backup individual schemas as of 7.4 (maybe 7.3, but I've not used 
>it in production, waiting for 7.4 to upgrade from 7.2)
>
>performance will almost certainly suffer if you are doing cross db work, 
>so schemas help there.
>
>I've never had any stability issues with Postgresql, and certainly not 
>from having everything in one database.
>
>Other than the ability to spread your load across multiple machines, 
>7.3/7.4 and schemas should address all your concerns.
>
>And no, you can't fk across databases.  You can get some primitive (but 
>quite functional) cross database action with the contrib/dblink package.
>






Re: [ADMIN] Cross-DB linking?

2003-09-11 Thread scott.marlowe
On Thu, 11 Sep 2003, Andrew Biagioni wrote:

> Thanks -- I haven't looked at schemas, I guess I will now :-).

Schemas rock.  Like little sandboxes for each user with their own play 
areas and what not.
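A quick sketch of that (schema names are made up):

```sql
CREATE SCHEMA app_a;
CREATE SCHEMA app_b;

SET search_path TO app_a;     -- unqualified names now resolve in app_a
CREATE TABLE t (id integer);  -- creates app_a.t

-- As of 7.4, "pg_dump -n app_a" dumps just that one schema.
```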

> As for stability -- I was referring to the hardware breaking down, not 
> Postgresql!

Ahhh.  I see. You might want to look into the erserver replication 
application on gborg.postgresql.org.  That's a pretty nice little system, 
and the .org and .info domains run on top of postgresql using it, so it 
has had plenty of production testing.


> 
>   Andrew
> 
> 
> 9/11/03 5:24:50 PM, "scott.marlowe" <[EMAIL PROTECTED]> wrote:
> 
> >You might want to consider using schemas to accomplish some of this.
> >
> >You can backup individual schemas as of 7.4 (maybe 7.3, but I've not used 
> >it in production, waiting for 7.4 to upgrade from 7.2)
> >
> >performance will almost certainly suffer if you are doing cross db work, 
> >so schemas help there.
> >
> >I've never had any stability issues with Postgresql, and certainly not 
> >from having everything in one database.
> >
> >Other than the ability to spread your load across multiple machines, 
> >7.3/7.4 and schemas should address all your concerns.
> >
> >And no, you can't fk across databases.  You can get some primitive (but 
> >quite functional) cross database action with the contrib/dblink package.
> >
> >On Thu, 11 Sep 2003, Andrew Biagioni wrote:
> >
> >> I am thinking of separating my data into various DBs (maybe on the same
> >> server, probably not) -- mostly for performance/stability/backup reasons --
> >> but I have a considerable amount of foreign keys, views, and queries that
> >> would need to work across DBs if I were to split things the way I want to.
> >> 
> >> Is it possible to have foreign keys / views / queries work across database
> >> boundaries?  On the same server / on separate servers?  If so, how?
> >> 
> >> For example, I have:
> >>  - a table, A, with > 200 K rows which never changes;
> >>  - another table, B, with < 10 K rows which changes frequently;
> >>  - and a third table, C, which joins A and B, i.e. has foreign keys
> >> into A and B, and changes rarely
> >> 
> >> I would like to have A in one DB, dbA (possibly its own server); B in
> >> another DB, dbB (possibly its own server); and C either with A or with B
> >> (this one is not an issue per se).
> >> 
> >> What I'm looking to gain is:
> >>  - dbA would be backed up/replicated religiously, and possibly on a server 
> >> optimized for frequent writes
> >>  - dbB would NEVER be backed up, possibly on a server optimized for caching
> >>  - each database's schema would be simpler and easier to manage
> >>  - as the number of records and users grow, be able to distribute the 
> >> computing/storage/memory load among various machines rather than have to 
> >> upgrade the hardware
> >> 
> >> Thanks in advance!
> >> 
> >>Andrew
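
A hedged sketch of the contrib/dblink approach mentioned in this thread (the connection string, table, and column names are invented, and the call signature follows later contrib releases, so check the README that ships with your version):

```sql
-- Query table "a" in database dbA from another database,
-- describing the result row type inline:
SELECT *
FROM dblink('dbname=dbA host=db-server',
            'SELECT id, name FROM a')
     AS a(id integer, name text);
```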


[ADMIN] Some basic info regarding postgresql?

2003-09-11 Thread Duffey, Kevin

Hi all, new to postgresql and WOW! Is it a great DB! I am blown away
with it; it does far more than I thought it could.

 

I am interested in learning a bit more about a few areas of PostgreSQL.
I would much appreciate either replies or links to more detailed
information.

 

I am looking to see how replication is done. I read that eRServer, which
was once commercial, was recently open-sourced. Is this part of the 7.3.x
or 7.4.x branch now, or is it an add-on product? And how well does it
handle the various modes of replication? In particular, we would need data
replication for a fail-over setup, running every 10 minutes or so, to keep
data loss to a minimum. What are some ways others are replicating data?

 

Clustering is of course another area we are concerned with. How well does
PostgreSQL scale, and how well does it handle heavy client load? We would
probably use either a dual Xeon 2.8GHz system with 4GB RAM or an Itanium 2
system with 8GB RAM as the DB server. We are looking to handle a few
hundred transactions per second in an "almost" 24/7 cycle: most of our
clients don't work between 10pm and 5am, but sometimes they do. How does
it handle adding another DB to the cluster? What are some ways you are
clustering and scaling with PostgreSQL?

 

Security is another biggie. I see that you can limit who can access it
based on IP, user name, etc. What other levels of security are there? Can
table permissions be set on an individual user, or on a group of users?
What about row or column permissions? What other aspects of security
should we be concerned with in regards to the DB itself (deployed on
Linux, of course)?

 

Lastly, performance. How well does it stack up for small and large/complex
queries compared to Oracle, DB2 and MSSQL 2000? We are actually using all
3 of those DBs at the request of some of our clients. How fast is it
compared to MySQL these days? I read something from a year ago saying the
performance gap between pgsql and mysql is narrowing; can anyone elaborate
a bit? We are also looking at data warehousing possibilities. Is pgsql a
good DB for these types of applications? Does it have any built-in support
for it?

 

Would anyone be willing to provide a reference whom our CIO/IT Manager
could call, to support the use of pgsql over the big boys?

 

Thanks.

 



*"The information contained in this e-mail message  may be confidential and protected from disclosure.  If you are not the intended recipient, any dissemination, distribution or copying is strictly prohibited.  If you think that you have received this e-mail message in error, please e-mail the sender at [EMAIL PROTECTED]."*



Re: [ADMIN] Some basic info regarding postgresql?

2003-09-11 Thread Bruno Wolff III
On Thu, Sep 11, 2003 at 16:18:04 -0700,
  "Duffey, Kevin" <[EMAIL PROTECTED]> wrote:
>  
> Security is another biggie. I see that you can limit who can access it based on ip, 
> user name, etc. What other levels of security are there? Can table permissions be 
> set on an individual user, or a group of users? What about row permissions or column 
> permissions? What other ways of security should we be concerned with in regards to 
> the DB itself (deployed on linux of course)?

You can limit access to individual databases in a cluster by IP address
or username (both the postgres username and the host username). This is
done in pg_hba.conf.
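
For illustration, a pg_hba.conf fragment in the 7.3-era column layout (the database name and addresses are invented; the exact columns vary between releases, so check the comments at the top of your own pg_hba.conf):

```
# TYPE  DATABASE   IP-ADDRESS    IP-MASK          AUTH-METHOD
host    salesdb    192.168.1.0   255.255.255.0    md5
host    all        0.0.0.0       0.0.0.0          reject
```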

You can also limit access to objects by postgres username and group.
This includes databases, schemas, tables, views, functions, sequences and
probably a few other things. You can't directly restrict access to rows or
columns of a table, but you can create views that restrict access to
particular rows and/or columns, and grant those views only to certain
users or groups.
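
A hedged SQL sketch of that view trick (table, column, and group names are made up):

```sql
-- Base table: revoke direct read access from everyone else.
REVOKE ALL ON payroll FROM PUBLIC;

-- A view exposing only a subset of rows and columns.
CREATE VIEW payroll_public AS
    SELECT emp_name, department        -- hides the salary column
    FROM payroll
    WHERE department <> 'executive';   -- hides some rows

-- Only members of group "staff" may read through the view.
GRANT SELECT ON payroll_public TO GROUP staff;
```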



[ADMIN] pg 7.3.4 and linux box crash

2003-09-11 Thread pginfo
Hi,
I am running Linux Red Hat 7.3 (standard install)
on a dual Athlon box, 1 GB RAM, and pg 7.3.4.

If I try to access one of my tables (it contains ~ 1 M records) with
pgAdmin, the linux box crashes.
In my pg log I can find:
ERROR:  Invalid page header in block 5604 of a_acc

Before this error I also see:
FATAL:  Database "template0" is not currently accepting connections
ERROR:  Relation "pg_relcheck" does not exist

and

ERROR:  'ksqo' is not a valid option name


I am using ReiserFS on this box and have not found any other problems
with it.

Can I solve this problem, or do I need to recreate the database? (The
table and the database were created two days ago.)

regards,
ivan.




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html