Re: [GENERAL] remote duplicate rows

2006-09-13 Thread A. Kretschmer
On Wed, 13 Sep 2006 at 15:46:58 -0700, Junkone wrote the following:
> Hi,
> I have a bad situation in that I did not have a primary key, so I have
> a table like this:
> colname1    colname2
> 1 apple
> 1 apple
> 2  orange
> 2   orange
> 
> It is a very large table. How do I remove the duplicates quickly and
> without much change?

begin;
alter table foo rename to tmp;
create table foo as select distinct * from tmp;
commit;

You should create a primary key now to avoid duplicated entries...
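Once the duplicates are gone, the primary key can be added in place. A sketch, assuming colname1 now uniquely identifies each row:

```sql
ALTER TABLE foo ADD PRIMARY KEY (colname1);
```

Note that CREATE TABLE ... AS does not carry over indexes, constraints, or column defaults from the old table, so any of those have to be recreated as well.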


HTH, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47215,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] select unique items in db

2006-09-13 Thread a
This doesn't work:

SELECT DISTINCT on link *, rank(ts_vec, to_tsquery('default', $qtxt))
FROM feed_entry WHERE ts_vec @@ to_tsquery('default', $qtxt) ORDER
BY rank(ts_vec, to_tsquery('default', $qtxt)) DESC LIMIT 5 OFFSET 0

Can you tell me how to get the DISTINCT elements in LINK?
Thanks.
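For reference, PostgreSQL's DISTINCT ON needs a parenthesized expression list, and those expressions must match the leftmost ORDER BY expressions; ranking the de-duplicated rows then takes a subquery. A sketch of the query above (keeping the poster's $qtxt placeholder and column names):

```sql
SELECT *
FROM (
    SELECT DISTINCT ON (link)
           *, rank(ts_vec, to_tsquery('default', $qtxt)) AS r
    FROM feed_entry
    WHERE ts_vec @@ to_tsquery('default', $qtxt)
    ORDER BY link, rank(ts_vec, to_tsquery('default', $qtxt)) DESC
) AS ranked
ORDER BY r DESC
LIMIT 5 OFFSET 0;
```

This keeps one row per link (the highest-ranked one, thanks to the secondary DESC sort) and then orders the survivors by rank.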

Stijn Vanroye wrote:
> a schreef:
> >  "select unique id" - i found this line in google search
> > but i want to make
> > select * as unique
> >
> > select unique * is
> > or select distinct
> >
> > pls point out how to select unique items from a list of million items
> >
> Maybe
> select distinct  from ...
> or
> select distinct on (field1,field2)  from ... ?
> 
> Regards,
> 
> Stijn.




[GENERAL] oracle listener intercept

2006-09-13 Thread David Link
Hi,

I'm looking at replacing an Oracle database that only has a few tables,
but that is accessed by many programs in the company via Oracle JDBC
drivers, Oracle ODBC drivers, and the Python database interface.  Is
there a way to intercept the calls to the Oracle listener and redirect
them to postgres without changing the calling applications?
Thanks, David


[GENERAL] remote duplicate rows

2006-09-13 Thread Junkone
Hi,
I have a bad situation in that I did not have a primary key, so I have
a table like this:
colname1    colname2
1 apple
1 apple
2  orange
2   orange

It is a very large table. How do I remove the duplicates quickly and
without much change?

Regards

Seede




Re: [GENERAL] Issue with order by for type varchar

2006-09-13 Thread Simon_Kelly
Thanks Martijn.


Simon Kelly
Java Developer
Information Systems Development
Information Technology Shared Services
Ministry of Health
DDI: 
Mobile: 

http://www.moh.govt.nz
mailto:[EMAIL PROTECTED]



[EMAIL PROTECTED] wrote on 13/09/2006 01:33:15 a.m.:

> On Tue, Sep 12, 2006 at 09:51:20AM +1200, [EMAIL PROTECTED] wrote:
> > Hi All,
> > 
> > I am trying to order a select by the primary key, which is a varchar
> > field.
> > 
> > 1. the varchar always contains 28 characters. (And, yes, I know the
> > irony of using a variable-length type on a known fixed-length field.)
> 
> It doesn't make any difference in space usage.
> 
> > However, when I do an order by I get misplacing of characters, so
> > character sequences like "000+..." are coming after "".
> 
> Check your LC_COLLATE setting ("show all" should tell you).
> 
> > Is there any way of adding a function that would make the order by do 
> > ordering on this field in the same way as the Java Collections.sort() 
> > method?
> 
> PostgreSQL uses the collation supported by your OS (you didn't say
> what OS you're running). I believe Java has its own collation system?
> Getting them to equal in general would be difficult, but in your case
> it should work.
> 
> Which collation are you using in Java and which in postgres?
> 
> Have a nice day,
> -- 
> Martijn van Oosterhout  http://svana.org/kleptog/
> > From each according to his ability. To each according to his 
> ability to litigate.
> [attachment "signature.asc" deleted by Simon Kelly/MOH] 
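A side note on the collation point: in 8.x-era PostgreSQL the collation is fixed for the whole cluster at initdb time, but byte-wise ordering can be forced per query with the text pattern operators. A sketch (the `code` column and `items` table are hypothetical stand-ins for the varchar primary key):

```sql
-- The cluster-wide collation, fixed at initdb time:
SHOW lc_collate;

-- Byte-wise ("C"-locale-like) ordering regardless of LC_COLLATE:
SELECT code FROM items ORDER BY code USING ~<~;
```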



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] remote duplicate rows

2006-09-13 Thread Ron Johnson

On 09/13/06 19:36, ljb wrote:
> [EMAIL PROTECTED] wrote:
>> Hi,
>> I have a bad situation in that I did not have a primary key, so I
>> have a table like this:
>> colname1    colname2
>> 1 apple
>> 1 apple
>> 2  orange
>> 2   orange
>>
>> It is a very large table. How do I remove the duplicates quickly and
>> without much change?

Since the 2 colname1 == 2 records are different (extra spaces in
colname2), how do you determine which is the correct record?  (Or is
the extra space just an artifact?)

> Make a new table (with a primary key) and the same columns in order,
> and do: INSERT INTO newtable SELECT DISTINCT * FROM oldtable;

That's a single transaction, and since this is a "very large table",
it would be very unpleasant if it rolled back at 95%.

Of course, we don't know what junkone1's definition of "very large"
is and how beefy his hardware is...

If there are OIDs on the table, you could write a script with this
pseudocode, which, because of the "candidate key table", transaction
block and LIMIT TO, allows the script to be restarted.  Niceties like
printing a timestamp and a counter after every commit are always helpful.

CREATE TABLE BIGTABLE_PK (
COLNAME1   INTEGER);

INSERT INTO BIGTABLE_PK
SELECT DISTINCT COLNAME1
FROM BIGTABLE;

CREATE INDEX I_BIGTABLE_PK
ON BIGTABLE_PK (COLNAME1)
TYPE IS SORTED;

DECLARE LOOP_FLAG INTEGER = 1;
WHILE LOOP_FLAG DO
BEGIN TRANSACTION
FOR :X AS EACH ROW OF
SELECT COLNAME1
FROM BIGTABLE_PK
ORDER BY COLNAME1
LIMIT TO 2000 ROWS
DO
DELETE FROM BIGTABLE
WHERE OID IN (SELECT OID
  FROM BIGTABLE_PK
  WHERE COLNAME1 = :X.COLNAME1
  LIMIT TO 1 ROWS);
DELETE FROM BIGTABLE_PK
WHERE COLNAME1 = :X.COLNAME1;
END FOR;
IF (SELECT COUNT(*) FROM BIGTABLE_PK) == 0 THEN
SET LOOP_FLAG = 0;
END IF;
COMMIT;
END ;
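For comparison, the one-survivor-per-group deletion can also be written in plain SQL with the oid system column, assuming the table was created WITH OIDS:

```sql
-- Keep the row with the smallest oid in each duplicate group and
-- delete the rest:
DELETE FROM bigtable
WHERE oid NOT IN (SELECT min(oid)
                  FROM bigtable
                  GROUP BY colname1, colname2);
```

Unlike the batched pseudocode, this runs as a single statement, so it trades away the restartability of the LIMIT-based loop.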

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.



Re: [GENERAL] berkley sockets

2006-09-13 Thread J S B
Thanks a lot, Tony.
Just wondering if the same can be done with C.
~Jas

On 9/13/06, Tony Caduto <[EMAIL PROTECTED]> wrote:
> [quoted text of Tony Caduto's reply, reproduced in full later in this
> digest]


Re: [GENERAL] Kill specific connection

2006-09-13 Thread Steve Crawford
snacktime wrote:
> What's a safe way to kill a specific connection to the database?  I'm
> testing some code that reconnects if a connection has timed out or
> gone bad and I need to simulate a connection that has gone away.

There are various ways. The two easiest are to use iptables to block
network access or to kill the backend process.

If you only have one connection from the client machine then identifying
the backend is easy. On recent versions of PG you can run "select * from
pg_stat_activity" to find the PID of the backend associated with any
given TCP/IP address/port pair. If you are sorting out multiple
connections or running an older version of PG just use "lsof -P -i
:5432" on the client and server machines to figure out which backend to
kill.
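On 8.1, that lookup might look like this (a sketch; the client address is hypothetical):

```sql
SELECT procpid, client_addr, client_port
FROM pg_stat_activity
WHERE client_addr = '192.168.0.42';
```

The reported procpid can then be killed from a shell on the server.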

But one thing that is slightly harder to test but probably more
important is when the backend isn't responding but isn't gone and hasn't
closed the connection. I've done partial simulations of this using
"netcat -l -p 5432". The network connection will open successfully but
the "server" will just sit and send no data.

We've found that for our purposes (where a failure to respond within 5
seconds requires the client to continue processing and store the data
locally for recovery later) the combination of using a timeout parameter
in the connection string and "set statement_timeout to 5" works very
reliably.
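One detail worth flagging: statement_timeout is measured in milliseconds, so if the intent is a five-second cap, the setting would be:

```sql
SET statement_timeout TO 5000;  -- 5 seconds; 0 disables the timeout
```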

Cheers,
Steve



Re: [GENERAL] berkley sockets

2006-09-13 Thread Tony Caduto

J S B wrote:

I don't want to connect to the postgres database.
 
The scenario is something like this.

Postgres database has to initiate some daemon process running on
another server.
The only way I could think of doing this was opening a socket
connection between the postgres database and
the daemon process through a shared object dynamically loaded in
postgres.

Berkeley sockets is the socket API in unix that uses


 
Don't know if there's a better way to do it.
 
~Jas


I have done this using PLperl  (untrusted) and it works just fine. 
Here is a simple example that just sends a command to a popup 
notification daemon I use.


Win32 clients connect to the daemon and when I need to notify them of 
incoming files from a ProFTP server I use this function.

Works great and have never had a problem.

CREATE or REPLACE FUNCTION public.psendpopup(
text,
text)
RETURNS pg_catalog.varchar AS
$BODY$
use IO::Socket;
$sock = new IO::Socket::INET (
 PeerAddr => 'localhost',
 PeerPort => '13000',
 Proto => 'tcp',
 );
die "Could not create socket: $!\n" unless $sock;
print $sock "null\r\n";
print $sock "send_broadcast\r\n";
print $sock $_[0]."\r\n";
print $sock $_[1]."\r\n";

close($sock);
$BODY$
LANGUAGE 'plperlu' VOLATILE;



--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 





Re: [GENERAL] Kill specific connection

2006-09-13 Thread Tom Lane
snacktime <[EMAIL PROTECTED]> writes:
> What's a safe way to kill a specific connection to the database?  I'm
> testing some code that reconnects if a connection has timed out or
> gone bad and I need to simulate a connection that has gone away.

Disconnecting a network cable might be the easiest test.  Anything
involving a process kill is not a realistic test, because the kernel
will report connection closure to the other end, which is a luxury
you don't get in the real-world cases where this is an issue.

regards, tom lane



[GENERAL] Kill specific connection

2006-09-13 Thread snacktime

What's a safe way to kill a specific connection to the database?  I'm
testing some code that reconnects if a connection has timed out or
gone bad and I need to simulate a connection that has gone away.

Chris



Re: [GENERAL] [NOVICE] Question About Aggregate Functions

2006-09-13 Thread Brandon Aiken








I think I mistakenly sent this to General instead of Novice.  Oops.

Yeah, I either skipped over or forgot the bit in the OP about bools.
Mea culpa.

You should be able to use OR instead of AND in any logical expression.

> Well, this sounds more like what I want.  Given t2.fielda, t2.fieldb,
> t2.fieldc, any one (or all three) could be true, but frequently at
> least one of the fields is false.  Initially, all of the fields might
> be unknown (thus NULL) for a given item until I am able to
> investigate the items to determine TRUE/FALSE.  I frequently have
> items that are inactive, and am thus unable to determine any of the
> attributes in t2.
>
> My end result needs to be a count of all the values in each field
> where the value is TRUE, as opposed to FALSE or NULL.

Yeah, I would probably run 4 separate, simple queries.  That will get
you the best performance, since you're doing no JOINs and no composite
queries.

If you need to enter the results into another table, try INSERT ... to
insert the defaults and any primary key you have (like a timestamp),
then four UPDATE ... SELECT statements.

The real problem with NULLs is some of the (in my mind) nonsensical
results you get, especially with logical operators:

NULL AND TRUE  => NULL
NULL OR TRUE   => TRUE
NULL AND FALSE => FALSE
NULL OR FALSE  => NULL

Plus you have to use IS instead of = since any NULL in an = expression
makes the result NULL (yes, this is an error in my previous queries).
NULL just has all these special cases.  I find it much nicer to avoid
it wherever possible, since it has somewhat unpredictable results.

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Don Parris
Sent: Wednesday, September 13, 2006 12:50 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] [NOVICE] Question About Aggregate Functions

> [quoted text of Don Parris's message, reproduced in full later in
> this digest]

Re: [GENERAL] Load "myLib.so" error cannot load obj file...

2006-09-13 Thread Tom Lane
didier tanti <[EMAIL PROTECTED]> writes:
> ERROR: could not load library "/myPath/libxx.so": libpcre.so : cannot open 
> shared object file: No such file or directory.

> the problem is that the libpcre.so does exist and is in my 
> LD_LIBRARY_PATH,

But is it in the postmaster's LD_LIBRARY_PATH?  That's what counts here.

regards, tom lane



[GENERAL] Load "myLib.so" error cannot load obj file...

2006-09-13 Thread didier tanti
Hello,
I am desperate to get this resolved; you are my only hope.  Here is the
issue: I am compiling a .so using:

    gcc -shared libxx.so -lpcre

Then in my DB I am trying:

    LOAD '/myPath/libxx.so';

and it keeps returning:

    ERROR: could not load library "/myPath/libxx.so": libpcre.so :
    cannot open shared object file: No such file or directory.

The problem is that libpcre.so does exist and is in my
LD_LIBRARY_PATH.  I really don't understand anything; I don't even know
where to look.  At least, could you give me some idea how to debug
this?  I am thinking of reinstalling postgres...  I should mention that
the postgres daemon is started at startup.
      
		 


Re: [GENERAL] serial, sequence, and COPY FROM

2006-09-13 Thread rloefgren
Thanks for all the help, it works and I've learned some more about
COPY, which is treated thinly in Douglas's first edition of
"PostgreSQL" (which was my source.)

r
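For readers finding this thread later, the usual COPY-with-serial trick is to list only the data columns so the serial column falls back to its sequence default. A sketch (table and column names hypothetical):

```sql
-- id is a serial column; omitting it from the column list lets
-- nextval() supply it for every copied row:
COPY items (name, qty) FROM '/tmp/items.txt';
```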




Re: [GENERAL] [NOVICE] Question About Aggregate Functions

2006-09-13 Thread Don Parris
On 9/13/06, Brandon Aiken <[EMAIL PROTECTED]> wrote:
> Ah, I did not know what was in your fields, so I did not assume they
> were Boolean values.  It looked to me like you were trying to use IS
> TRUE to substitute for the lack of a GROUP BY, so I didn't know what
> to do.

That was in the first paragraph of my OP: "How do I create a query
that (1) evaluates each boolean field for TRUE/FALSE and (2) counts
the number of rows where each field is TRUE?"  Maybe you just hadn't
had your first cup of coffee? ;-)  Seriously, though, I really do
appreciate your help.

> Yes, count() will include all non-NULL values.  Sorry if I sounded
> unclear there.
>
> If you do typecasting, the value zero is false (and non-zero is
> true).  NULL in an expression always returns NULL, and many programs
> will interpret that result as false.  So I'm not sure of what results
> you might get with a Boolean test against a non-Boolean field,
> especially if it's an integer field.
>
> postgres=# select 0::boolean = FALSE;
>  ?column?
> ----------
>  t
> (1 row)
>
> You should just be able to take the previous query and add in your
> WHERE clauses:
>
> SELECT count(t1.fielda), count(t2.fielda), count(t2.fieldb),
>        AVG(t2.fieldc)
> FROM t1 JOIN t2 ON t1.item_id = t2.item_id
> WHERE t1.fielda = TRUE AND t2.fielda = TRUE AND t2.fieldb = TRUE
> GROUP BY NULL;
>
> Now, the INNER JOIN you're using is only selecting fields where both
> t1.item_id and t2.item_id exist and the respective fields are TRUE.
> That is, it's only going to run the count and average functions
> against the results of this query:
>
> SELECT *
> FROM t1 JOIN t2 ON t1.item_id = t2.item_id
> WHERE t1.fielda = TRUE AND t2.fielda = TRUE AND t2.fieldb = TRUE;
>
> If that's what you want, that's great.

Can I use OR instead of AND here?

> However, you might want a count of each field where that field is
> TRUE.  In that case, I would use either temporary tables, compound
> queries and derived tables, or multiple simple queries.
>
> It's also possible that you might want a count of fields where
> t1.item_id and t2.item_id exist, but where only each respective field
> is TRUE.  That is, you want a count of t1.fielda where it is TRUE no
> matter what t2.fielda and t2.fieldb are, as long as t1.item_id
> matches t2.item_id.  In that case you have to do even more joins, and
> that could take a fair bit of time, especially if you haven't indexed
> your item_id fields.

Well, this sounds more like what I want.  Given t2.fielda, t2.fieldb,
t2.fieldc, any one (or all three) could be true, but frequently at
least one of the fields is false.  Initially, all of the fields might
be unknown (thus NULL) for a given item until I am able to investigate
the items to determine TRUE/FALSE.  I frequently have items that are
inactive, and am thus unable to determine any of the attributes in t2.

My end result needs to be a count of all the values in each field
where the value is TRUE, as opposed to FALSE or NULL.

> You really have to look at your result sets.  Sometimes it is better
> to run multiple simple queries instead of one big complex query, to
> be sure you're getting the data you want and the query executes in a
> reasonable amount of time.
>
> Also, consider that NULL values are generally considered bad to
> purposefully enter.  Logically, it would be better to create one
> table for each field and then create a record for each item_id as you
> need it, so you never have NULLs.  The problem with that is one of
> performance if you end up doing a large number of JOINs.  In that
> case, it might be better to use integers instead of Boolean fields,
> since you have three explicit states of TRUE, FALSE, and NOT YET
> DETERMINED.

Regarding the NULL values: I have been thinking that I might want to
leave my NULLs as they are.  However, I will be concentrating mostly
on the items that are active.  Inactive items are only counted as part
of the total number of items.  Their attributes are a moot point, but
technically FALSE.  (If they are inactive, the attributes are no
longer TRUE in any case.)

I am counting only those fields that I know (through verification) to
be TRUE.  I can use FALSE where the actual value is not known, and
just change the attribute to TRUE when I discover that to be the case.
I just need to be sure in my counts that I note the fact that FALSE
values include the unverified values (meaning some of those might
actually be true).  Does that sound fairly logical to you?

Regarding the table layout: the columns in t2 are ordered essentially
according to the category of attributes involved.  I had thought about
using multiple tables, one for each category of attributes.  However,
I chose a monolithic table for the attributes to represent a single
survey of each item.  Each item might be surveyed again in the future
to determine any changes, which would introduce a new version of the
current table.

I'll tinker around with the queries a bit and see what I come up with.
Thanks for the input.
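A single-pass way to get the per-column TRUE counts described in this thread, excluding both FALSE and NULL, is (a sketch using the thread's column names):

```sql
-- count() ignores NULLs, and the CASE yields NULL unless the field is TRUE:
SELECT count(CASE WHEN fielda THEN 1 END) AS fielda_true,
       count(CASE WHEN fieldb THEN 1 END) AS fieldb_true,
       count(CASE WHEN fieldc THEN 1 END) AS fieldc_true
FROM t2;
```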


[GENERAL] Garbage data sent by Windows 98 client

2006-09-13 Thread Andrus
I use the ODBC driver to access PostgreSQL 8.1 servers from a Windows
98 computer, from a Microsoft Visual FoxPro 9 application.

On a number of clients and servers, it is not possible to send data to
the server.

The PostgreSQL log file shows that garbage characters are received.
The server responds to this packet with the message

  syntax error at or near

The server response is received correctly by the client.

I tried the latest and previous versions of the Unicode ODBC driver
and the problem exists in both driver versions.
I can reproduce this in my office if the application is running from a
network drive on Windows XP.
If I copy the application to the C: drive on the Windows 98 computer,
the application runs OK.

Any idea how to fix this?

Server log file included.

 Andrus. 


[uuencoded attachment "baddata.zip" omitted]

Re: [GENERAL] Template1 oops

2006-09-13 Thread Scott Marlowe
On Wed, 2006-09-13 at 10:05, Alban Hertroys wrote:
> Berend Tober wrote:
> > On Wed, Sep 13, 2006 at 10:56:41AM +0200, Alban Hertroys wrote:
>  >
> > I'm humble (or naive) enough to admit that I've used the approach 
> > outlined there by Josh Berkus, and it worked fine. More than once, even.
> 
> I'm quite certain that approach can be made more bullet-proof by 
> wrapping it inside a transaction. I saw no mention of that on his page.
> 
> It's quite amazing what PostgreSQL can handle inside transactions :)

Every time I write a DDL change request for an Oracle database, I am
reminded of this.  :)



Re: [GENERAL] Template1 oops

2006-09-13 Thread Andrew - Supernews
On 2006-09-13, Alban Hertroys <[EMAIL PROTECTED]> wrote:
> I'm quite certain that approach can be made more bullet-proof by 
> wrapping it inside a transaction. I saw no mention of that on his page.
>
> It's quite amazing what PostgreSQL can handle inside transactions :)

Some of the few things that pg can _not_ do inside a transaction include:

CREATE DATABASE
DROP DATABASE

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services



Re: [GENERAL] Template1 oops

2006-09-13 Thread Alban Hertroys

Berend Tober wrote:
> On Wed, Sep 13, 2006 at 10:56:41AM +0200, Alban Hertroys wrote:
>
> I'm humble (or naive) enough to admit that I've used the approach
> outlined there by Josh Berkus, and it worked fine. More than once,
> even.

I'm quite certain that approach can be made more bullet-proof by
wrapping it inside a transaction. I saw no mention of that on his page.

It's quite amazing what PostgreSQL can handle inside transactions :)

--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Template1 oops

2006-09-13 Thread Berend Tober

On Wed, Sep 13, 2006 at 10:56:41AM +0200, Alban Hertroys wrote:
> I found an oops in one of our template1 databases; tables and stuff were 
> apparently loaded into the wrong database (namely template1). I found 
> this page describing a solution: 
> http://techdocs.postgresql.org/techdocs/pgsqladventuresep1.php
> 
> But, this looks kind of risky to me. I'd prefer not to put our running 
> databases at risk.
> 
> As an alternative approach, wouldn't dropping and recreating the public 
> schema be a nice alternative? And in that case, what would be the right 
> CREATE SCHEMA public command? I don't feel like messing this up ;)

I'm humble (or naive) enough to admit that I've used the approach 
outlined there by Josh Berkus, and it worked fine. More than once, even.


Regards,
BMT


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Problems with date configuration

2006-09-13 Thread Fabi Avilés
Hi, I had an application in which I used postgres 7.4, and the only thing
referring to dates was:

    datestyle = 'ISO,European'

Everything else was commented out. Now I have had to change the OS and
I've installed postgres 8.1.3, but this configuration is different; it's
something like:

    lc_monetary = 'en_US.UTF-8'
    lc_numeric = 'en_US.UTF-8'
    lc_time = 'en_US.UTF-8'

and everything else is commented out, including the line which references
the datestyle. But in the comment it says #datestyle = 'iso, mdy'. Can
someone help me make it work the same way? Thanks.

Fabi
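
For what it's worth, PostgreSQL's 'European' date style is the
day/month/year (dmy) input ordering, so a postgresql.conf fragment that
should reproduce the old 7.4 behaviour — an assumption worth checking
afterwards with SHOW datestyle — would be:

```
# assumed equivalent of the 7.4 setting datestyle = 'ISO,European'
datestyle = 'iso, dmy'
```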



Re: [GENERAL] Template1 oops

2006-09-13 Thread Michael Fuhr
On Wed, Sep 13, 2006 at 10:56:41AM +0200, Alban Hertroys wrote:
> I found an oops in one of our template1 databases; tables and stuff were 
> apparently loaded into the wrong database (namely template1). I found 
> this page describing a solution: 
> http://techdocs.postgresql.org/techdocs/pgsqladventuresep1.php
> 
> But, this looks kind of risky to me. I'd prefer not to put our running 
> databases at risk.
> 
> As an alternative approach, wouldn't dropping and recreating the public 
> schema be a nice alternative? And in that case, what would be the right 
> CREATE SCHEMA public command? I don't feel like messing this up ;)

The following should recreate the public schema:

CREATE SCHEMA public;
GRANT ALL ON SCHEMA public TO public;
COMMENT ON SCHEMA public IS 'Standard public schema';

You could practice by creating a test database from template0, then
use pg_dump to get a "before" dump of the test database, drop and
recreate the public schema, use pg_dump to get an "after" dump,
then compare the dumps with a command like "diff".  The before and
after dumps should be identical.

When you're done messing with template1, you could dump it and
compare that dump to a dump of a database created from template0.
The comparison should show if you missed anything.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] [NOVICE] Question About Aggregate Functions

2006-09-13 Thread Brandon Aiken

Ah, I did not know what was in your fields, so I did not assume they were
Boolean values.  It looked to me like you were trying to use IS TRUE to
substitute for the lack of a GROUP BY, so I didn't know what to do.

Yes, count() will include all non-NULL values.  Sorry if I sounded unclear
there.

If you do typecasting, the value zero is false (and non-zero is true).
NULL in an expression always returns NULL, and many programs will
interpret that result as false.  So I'm not sure what results you might
get with a Boolean test against a non-Boolean field, especially if it's
an integer field.

postgres=# select 0::boolean = FALSE;
 ?column?
----------
 t
(1 row)

You should just be able to take the previous query and add in your WHERE
clauses:

SELECT count(t1.fielda), count(t2.fielda), count(t2.fieldb), AVG(t2.fieldc)
FROM t1 JOIN t2 ON t1.item_id = t2.item_id
WHERE t1.fielda = TRUE AND t2.fielda = TRUE AND t2.fieldb = TRUE
GROUP BY NULL;

Now, the INNER JOIN you're using is only selecting rows where both
t1.item_id and t2.item_id exist and the respective fields are TRUE.  That
is, it's only going to run the count and average functions against the
results of this query:

SELECT *
FROM t1 JOIN t2 ON t1.item_id = t2.item_id
WHERE t1.fielda = TRUE AND t2.fielda = TRUE AND t2.fieldb = TRUE;

If that's what you want, that's great.

However, you might want a count of each field where that field is TRUE.
In that case, I would use either temporary tables, compound queries and
derived tables, or multiple simple queries.

It's also possible that you might want a count of fields where t1.item_id
and t2.item_id exist, but where only each respective field is TRUE.  That
is, you want a count of t1.fielda where it is TRUE no matter what
t2.fielda and t2.fieldb are, as long as t1.item_id matches t2.item_id.
In that case you have to do even more joins, and that could take a fair
bit of time, especially if you haven't indexed your item_id fields.

You really have to look at your result sets.  Sometimes it is better to
run multiple simple queries instead of one big complex query, to be sure
you're getting the data you want and the query executes in a reasonable
amount of time.

Also, consider that NULL values are generally considered bad to
purposefully enter.  Logically, it would be better to create one table
for each field and then create a record for each item_id as you need it,
so you never have NULLs.  The problem with that is one of performance if
you end up doing a large number of JOINs.  In that case, it might be
better to use integers instead of Boolean fields, since you then have
three explicit states: TRUE, FALSE, and NOT YET DETERMINED.

--
Brandon Aiken
CS/IT Systems Engineer

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Don Parris
Sent: Tuesday, September 12, 2006 9:16 PM
To: [EMAIL PROTECTED]
Subject: Re: [NOVICE] Question About Aggregate Functions

On 9/12/06, Brandon Aiken <[EMAIL PROTECTED]> wrote:
> First, aggregate functions always have to have a GROUP BY clause.  If
> you want everything in a table or join, you use GROUP BY NULL.

Thanks.  I did not realize that.

> Next, IS TRUE statements will select anything that is not NULL, 0, or
> FALSE, so I'm not sure what you're trying to get because you're getting
> nearly everything, and count() already ignores NULL values.

I didn't see that in the manual's coverage, but could have overlooked it.
But count() will include the FALSE values along with the TRUE values -
ignoring only those that are NULL.  At least, I think that's the case.
So, for each column I select, I need to be sure I am counting only the
TRUE values.  I do have NULL, FALSE and TRUE values in each column, since
I do not always know for sure whether an attribute is TRUE or FALSE when
I record the item.  That may be determined later, but not in all cases.

> Next, count(x, y, z) isn't a valid function.  Count() only has one
> parameter, so you'll have to call it several times.

I knew my syntax was wrong - but wasn't sure about calling multiple
functions since I hadn't seen any examples of that in my hunting for
info.  I was trying to make a little clearer what I wanted to do.

> Depending on what you were hoping count(x, y, z) was returning, you do
> this:
>
> SELECT count(t1.fielda), count(t2.fielda), count(t2.fieldb),
> AVG(t2.fieldc)
> FROM t1 JOIN t2 ON t1.item_id = t2.item_id
> GROUP BY NULL;

This one looks more like what I am attempting to do.  However, I do need
to be sure my count() functions are counting the values that are TRUE.
Is this a case where I should run a query to select the records where the
values for the desired columns are true, insert that result into a temp
table, and then perform the count() function as above on just those
records?  Sure seems like that would be the simple r
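
To count only the TRUE values per column, as asked in the message above, a
temp table shouldn't be necessary. A sketch using CASE, reusing the
hypothetical t1/t2 names from the thread: count() skips the NULLs that
CASE produces for FALSE and NULL rows, so each count covers only the TRUE
values.

```sql
SELECT count(CASE WHEN t1.fielda THEN 1 END) AS t1_fielda_true,
       count(CASE WHEN t2.fielda THEN 1 END) AS t2_fielda_true,
       count(CASE WHEN t2.fieldb THEN 1 END) AS t2_fieldb_true,
       avg(t2.fieldc) AS avg_fieldc
FROM t1 JOIN t2 ON t1.item_id = t2.item_id;
```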

Re: [GENERAL] [NOVICE] INSERT does not finish except if it is carried out a

2006-09-13 Thread Brandon Aiken
Why drop and recreate the table?  Why not TRUNCATE it?

--
Brandon Aiken
CS/IT Systems Engineer
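
A sketch of the TRUNCATE variant, which keeps the table definition (and
any indexes) intact while removing all rows; maTable is the name used in
the thread, and the repopulating query is elided:

```sql
-- instead of DROP TABLE maTable; CREATE TABLE maTable AS ...
TRUNCATE maTable;
INSERT INTO maTable SELECT ...;  -- repopulate with the same query
```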

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Matthieu Guamis
Sent: Wednesday, September 13, 2006 6:15 AM
To: [EMAIL PROTECTED]
Subject: Re: [NOVICE] INSERT does not finish except if it is carried out a

Hello,

PostgreSQL 8.1 is running on Ubuntu 6.06 server edition.

Please trust me, when I use DELETE/INSERT/INSERT statements the job is
done in a few seconds, whereas with DROP/CREATE AS/SELECT it takes
several minutes (for the SELECT statement to complete). But in this last
case, if I wait a few minutes between the CREATE AS and the SELECT, then
the SELECT is done in a few seconds.

Sorry for previous syntax errors (I did not paste statements but wrote 
them with simplified names for fields and tables... it may explain the 
unmatched open parenthesis).

Could you tell me more about some possible causes of the delay?

Regards


Michael Fuhr wrote:
> [Please don't post HTML.]
>
> On Tue, Sep 12, 2006 at 02:09:40PM +0200, Matthieu Guamis wrote:
>   
>> During the execution of the following requests, INSERT does not finish
>> except if it is carried out a few minutes after the
>> creation of the table. How to explain this latency time?
>> 
> [...]
>   
>> insert into maTable (select * from
>> ((select a.id1 ,b.id2 ,0
>> from maTable a, maTable b
>> group by a.id1,b.id2
>> order by b.id2,a.id1)
>> EXCEPT
>> (select c.id1 ,c.id2 ,0
>> from maTable c
>> ))as tt;
>> 
>
> This statement isn't syntactically correct; it has an unmatched
> open parenthesis.  If I paste the statement into psql it appears
> to hang, presumably because the parser thinks it's incomplete and
> is waiting for more input.  Are you sure you've diagnosed the problem
> correctly?  If so then please post a test case without errors so
> others can attempt to duplicate the problem.
>
> What version of PostgreSQL are you running and on what platform?
> What client interface are you using?
>
>   
>> DROP and CREATE do their job but INSERT does not finish if it is
>> carried out immediately after the CREATE. On the other hand
>> if it is carried out a few minutes (~5min) later then INSERT commits
>> in a few seconds.
>> 
>
> A five-minute delay could hint at some possible causes, but first
> let's find out whether syntax is the problem.
>
>   
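
For reference, a balanced-parenthesis version of what the statement
appears to intend (a sketch only; the inner ORDER BY is dropped, since
ordering the operands of EXCEPT has no effect on the result):

```sql
INSERT INTO maTable
SELECT * FROM (
    (SELECT a.id1, b.id2, 0
     FROM maTable a, maTable b
     GROUP BY a.id1, b.id2)
    EXCEPT
    (SELECT c.id1, c.id2, 0
     FROM maTable c)
) AS tt;
```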

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] berkley sockets

2006-09-13 Thread Chris Mair
On Wed, 2006-09-13 at 01:51 -0400, J S B wrote:

> I don't want to connect to the postgres database.
>  
> The scenario is something like this.
>  
> Postgres database has to initiate some daemon process running on
> another server.
> The only way I could think of doing this was opening a socket
> connection between postgres database and 
> the daemon process through a shared object dynamically loaded in
> postgres.
>  
> Berkley sockets is the socket API in unix that uses
> 
>  
> Don't know if there's a better way to do it.

Is it that you want to have a PG instance running on host A accepting
connections on host B?

Maybe you can use an SSH tunnel?

Bye, Chris.



-- 

Chris Mair
http://www.1006.org


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Template1 oops

2006-09-13 Thread Alban Hertroys
I found an oops in one of our template1 databases; tables and stuff were 
apparently loaded into the wrong database (namely template1). I found 
this page describing a solution: 
http://techdocs.postgresql.org/techdocs/pgsqladventuresep1.php


But, this looks kind of risky to me. I'd prefer not to put our running 
databases at risk.


As an alternative approach, wouldn't dropping and recreating the public 
schema be a nice alternative? And in that case, what would be the right 
CREATE SCHEMA public command? I don't feel like messing this up ;)


Regards,
--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] select unique items in db

2006-09-13 Thread Stijn Vanroye

a wrote:
> "select unique id" - i found this line in google search
> but i want to make
> select * as unique
>
> select unique * is
> or select distinct
>
> pls point out how to select unique items from a list of million items

Maybe
select distinct  from ...
or
select distinct on (field1,field2)  from ... ?

Regards,

Stijn.
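
One catch with DISTINCT ON that the ranked-search query earlier in this
thread runs into: the ORDER BY must start with the DISTINCT ON
expression(s). A sketch of one way around it, keeping the highest-ranked
row per link and ordering by rank in an outer query ($qtxt is the
thread's own placeholder for the search text):

```sql
SELECT * FROM (
    SELECT DISTINCT ON (link)
           *, rank(ts_vec, to_tsquery('default', $qtxt)) AS r
    FROM feed_entry
    WHERE ts_vec @@ to_tsquery('default', $qtxt)
    ORDER BY link, r DESC
) AS best_per_link
ORDER BY r DESC
LIMIT 5 OFFSET 0;
```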

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match