Re: [GENERAL] DeadLocks..., DeadLocks...

2007-06-14 Thread Terry Fielder

My 2 cents:

I used to get a lot of these sharelock problems.
Users using different records, but same tables in different order.
(apparently 7.x was not as good as 8.x at row level locking)

I was advised to upgrade from 7.x to 8.x
I did, and all those sharelock problems went away.

Terry

Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Tom Allison wrote:

Gregory Stark wrote:


I'm still not precisely clear what's going on; it might help if you
posted the actual schema and the deadlock message which lists the
precise locks that deadlocked.

Are any of the DML you mention on other tables on those tables with
foreign key references to this one?

It's impossible for two inserts on the same table to deadlock against
each other, so there must be more going on than what you've described.
It's hard to help much without a complete picture.



I think I found the problem.  And it's not at all where I thought it was.
Process 17583 waits for ShareLock on transaction 306841;
blocked by process 17725.
Process 17725 waits for ShareLock on transaction 306840;
blocked by process 17583.

Where I'm at a loss is that the deadlocks reported are on different tables.
However, getting back to the Foreign Key question:
history_token does have a foreign key constraint on tokens.token_idx
on delete cascade.


So is the INSERT statement on history_token getting deadlocked by the
tokens UPDATE statement?  It looks that way, and the only thing I can
see causing that might be a foreign key issue.


Am I correctly identifying the problem?
Any options?
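For what it's worth, a minimal two-session sketch of how a foreign key can produce exactly this cross-table deadlock (table and column names mirror the ones above, but the interleaving is my assumption, not taken from the log):

```sql
-- Session A: inserting child rows also locks the referenced parent rows,
-- because the FK on history_token(token_idx) must pin tokens rows.
BEGIN;
INSERT INTO history_token (history_idx, token_idx) VALUES (2862, 260);

-- Session B: updates parent rows, touching them in a different order.
BEGIN;
UPDATE tokens SET last_seen = now() WHERE token_idx = 2271;
UPDATE tokens SET last_seen = now() WHERE token_idx = 260;   -- waits on A

-- Session A: now needs the row B already updated...
INSERT INTO history_token (history_idx, token_idx) VALUES (2862, 2271);
-- ...waits on B: deadlock detected, one transaction is rolled back.
```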


2007-06-14 19:58:43 EDT 17725 306927 LOG:  statement: select token_idx 
from tokens where token in 
('ShareLock','hdr:414A79FBC82','ht.history_idx','2271','hdr:
2007-06-14 19:58:31 EDT 17583 306840 LOG:  statement: insert into
history_token(history_idx, token_idx)

select values.history_idx, values.token_idx
from ( values
(2862,260),(2862,31789),(2862,1518),(2862,59),(2862,555),(2862,4),(2862,66447),(2862,8178),(2862,64),(2862,132),(2862,6126),(2862,135),(2862,69),(2862,9166),(2862,629),(2862,73),(2862,74),(2862,2271),(2862,78),(2862,493),(2862,8164),(2862,211),(2862,8166),(2862,84),(2862,60608),(2862,217),(2862,88),(2862,8207),(2862,161),(2862,33518),(2862,220),(2862,222),(2862,446),(2862,2188),(2862,336),(2862,1197),(2862,166),(2862,1537),(2862,28),(2862,168),(2862,2481),(2862,1081),(2862,99),(2862,100),(2862,172),(2862,8209),(2862,231),(2862,1900),(2862,344),(2862,104),(2862,24694),(2862,106),(2862,37),(2862,107),(2862,179),(2862,8203),(2862,99140),(2862,85629),(2862,3671),(2862,8187),(2862,187),(2862,306),(2862,254),(2862,415),(2862,256),(2862,257),(2862,99227),(2862,99228),(2862,99229),(2862,99230)
) as values(history_idx, token_idx)
left outer join history_token ht using (history_idx, token_idx)
where ht.history_idx is null

2007-06-14 19:58:31 EDT 17725 306841 LOG:  statement: update tokens
set last_seen = now() where token_idx in
(260,31789,1518,59,555,4,66447,8178,64,132,6126,135,69,9166,629,73,74,2271,78,493,8164,211,8166,84,99222,60608,217,88,8207,161,33518,220,222,446,2188,336,1197,166,1537,28,168,2481,1081,99,100,172,8209,231,1900,344,104,24694,106,37,107,179,8203,99140,85629,3671,8187,187,306,254,415,256,257,99224,99225,99226)


2007-06-14 19:58:31 EDT 17657 306842 LOG:  duration: 0.033 ms
2007-06-14 19:58:31 EDT 17657 306842 LOG:  execute dbdpg_105: insert 
into user_history(user_idx, history_idx, seen_as) values 
($1,$2,'noscore')
2007-06-14 19:58:31 EDT 17657 306842 DETAIL:  parameters: $1 = '1', $2 
= '2853'

2007-06-14 19:58:31 EDT 17657 306842 LOG:  duration: 0.194 ms
2007-06-14 19:58:32 EDT 17657 306843 LOG:  statement: DEALLOCATE 
dbdpg_105

2007-06-14 19:58:32 EDT 17657 0 LOG:  duration: 0.164 ms
2007-06-14 19:58:32 EDT 17657 306844 LOG:  statement: select h_msgs,
s_msgs from user_token where user_idx = 1 and token_idx in
(260,31789,1518,59,555,4,66447,8178,64,132,6126,135,69,9166,629,73,74,2271,78,493,8164,211,8166,84,60608,217,88,8207,161,33518,220,222,446,2188,336,1197,166,1537,28,168,2481,1081,99,100,172,8209,231,1900,344,104,24694,106,37,107,179,8203,99140,85629,3671,8187,187,306,254,415,256,257,99216,99217,99218,99219)

2007-06-14 19:58:32 EDT 17657 0 LOG:  duration: 1.408 ms
2007-06-14 19:58:32 EDT 17657 306845 LOG:  statement: update tokens
set last_seen = now() where token_idx in
(260,31789,1518,59,555,4,66447,8178,64,132,6126,135,69,9166,629,73,74,2271,78,493,8164,211,8166,84,60608,217,88,8207,161,33518,220,222,446,2188,336,1197,166,1537,28,168,2481,1081,99,100,172,8209,231,1900,344,104,24694,106,37,107,179,8203,99140,85629,3671,8187,187,306,254,415,256,257,99216,99217,99218,99219)


2007-06-14 19:58:33 EDT 17583 306840 ERROR:  deadlock detected
2007-06-14 19:58:3

[GENERAL] Insert speed new post

2007-07-02 Thread Terry Fielder
My first posting stalled because I posted from the wrong email account;
here is the new posting, plus some more info:

I have a log table that records user application use.

Under pg 7.x the system performed fine.

In 8.1.9, the insert statements seem to take a long time sometimes, up
to several seconds or more.


Here is the table:
CREATE TABLE user_app_use_log
(
user_id int4 NOT NULL,
access_stamp timestamptz NOT NULL DEFAULT now(),
app_name char(50) NOT NULL,
url char(150) NOT NULL,
form_params text,
division_id char(3),
url_params text,
ip_address varchar(31)
)
WITHOUT OIDS;

There is no primary key, but the table is never updated, only inserted into.
I removed the only index, with no improvement in performance (in case
the index locking issues resolved in 8.2 were the problem on this 8.1 install).


Should I add a primary key column of serial?  Will that help?

If anyone has any ideas it would be appreciated.

And in addition, I do a regular vacuum analyze, and to be clear this
table has 948,851 records and rising.  I USED to purge the table
regularly, but not since SOX came around.  (But that happened prior to
my upgrade from 7.4 to 8.1.)
The server is a very powerful 8-CPU machine on SCSI RAID.
iostat tells me it's not backlogged on disk I/O:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           6.54    0.00    0.66    1.31    0.00   91.49

Device:            tps   Blk_read/s   Blk_wrtn/s    Blk_read    Blk_wrtn
hda               0.51         2.02        22.36     1292832    14285368
sda               0.00         0.01         0.00        4585        2552
sdb               0.65         4.66         7.39     2975813     4720552
sdc              40.37       384.92      1072.08   245922466   684946704
sdd               0.34         0.00         7.39         248     4720552
sde              40.27       389.03      1066.04   248548400   681086784
sdf              40.21       385.00      1072.58   245976056   685265296
dm-0              1.26         4.66         7.39     2975581     4720552
dm-1              0.00         0.00         0.00         166        2128
dm-2              1.26         4.65         7.39     2973050     4720424
hdd               0.00         0.00         0.00         140           0
md0             230.85       373.72      1783.57   238766922  1139514032

And top tells me minimal cpu load:
top - 16:28:55 up 7 days,  9:30,  2 users,  load average: 2.61, 2.82, 2.86
Tasks: 220 total,   1 running, 219 sleeping,   0 stopped,   0 zombie
Cpu0  :  2.3%us,  2.0%sy,  0.0%ni, 95.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  2.0%us,  3.0%sy,  0.0%ni, 91.0%id,  2.3%wa,  0.7%hi,  1.0%si,  0.0%st
Cpu2  :  0.0%us,  0.3%sy,  0.0%ni, 89.4%id, 10.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  4.3%us,  0.3%sy,  0.0%ni, 95.0%id,  0.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

Mem:  15894024k total, 15527992k used,   366032k free,   323760k buffers
Swap: 17880304k total,     1084k used, 17879220k free, 13912888k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
20914 postgres  15   0  200m  93m  90m S4  0.6   1:14.89 postmaster
20014 postgres  15   0  200m  93m  90m S4  0.6   2:55.08 postmaster
2389 root  10  -5 000 S3  0.0  33:46.72 md0_raid5
15111 postgres  15   0  209m 102m  90m S2  0.7  25:32.37 postmaster
2577 root  10  -5 000 D1  0.0  22:59.43 kjournald
4949 root  15   0 12996 1336  792 S1  0.0  38:54.10 top



--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Insert speed new post

2007-07-02 Thread Terry Fielder

Responses below.

Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Tom Lane wrote:

Terry Fielder <[EMAIL PROTECTED]> writes:

> Under pg 7.x the system performed fine.
> In 8.1.9, the insert statements seem to take a long time sometimes, up to
> several seconds or more.
> There is no primary key, but the table is never updated, only inserted.
> I removed the only index, with no improvement in performance (in case
> the 8.2 "resolves index locking issues" was the concern for an 8.1 install).

Hmm.  With no indexes, inserts ought to be basically a constant-time
operation.  

My sentiments exactly.

I suspect what you are looking at is stalls caused by
checkpoints or other competing disk activity.  
There were warnings in the logs when I first deployed the 8.1 version.
Sorry, I should have mentioned.  Here are the postgresql.conf changes I
made based on what I know I need from my 7.4 install:

max_connections increased to 250

shared_buffers increased to 11000

The below I increased based on HINTs in the log file:
max_fsm_pages increased to 80 (I have max_fsm_relations at 1000)

checkpoint_segments increased to 300

And much playing around with logging settings, done on my own.



I'd suggest watching the
output of "vmstat 1" or local equivalent, and seeing if you can
correlate the slow inserts with bursts of disk activity.
  

I can do that, next peak period (tomorrow).

Have you tuned the 8.1 installation?

I have tweaked the settings based on HINTs as described above.  Is
there a document or something else you are referring to?



  I'm wondering about things like
checkpoint_segments and wal_buffers maybe being set lower than you had
them in 7.4.
  

I left the default in 8.1, which is:
#fsync = on # turns forced synchronization 
on or off

#wal_sync_method = fsync# the default is the first option
#full_page_writes = on  # recover from partial page writes
#wal_buffers = 8# min 4, 8KB each
#commit_delay = 0   # range 0-10, in microseconds
#commit_siblings = 5# range 1-1000

# - Checkpoints -

#checkpoint_segments = 3# in logfile segments, min 1, 
16MB each

#increased by terry 20070402
checkpoint_segments = 300   # in logfile segments, min 1, 
16MB each

#checkpoint_timeout = 300   # range 30-3600, in seconds
#checkpoint_warning = 30# in seconds, 0 is off

Any ideas based on this?

Thanks for your help.

Terry


regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly

  


[GENERAL] Insert speed

2007-07-03 Thread Terry Fielder

I have a user application use log.

Under pg 7.x the system performed fine.

In 8.1, the insert statements seem to take a long time sometimes, up
to several seconds or more.


Here is the table:
CREATE TABLE user_app_use_log
(
 user_id int4 NOT NULL,
 access_stamp timestamptz NOT NULL DEFAULT now(),
 app_name char(50) NOT NULL,
 url char(150) NOT NULL,
 form_params text,
 division_id char(3),
 url_params text,
 ip_address varchar(31)
)
WITHOUT OIDS;

There is no primary key, but the table is never updated, only inserted into.
I removed the only index, with no improvement in performance (in case
the index locking issues resolved in 8.2 were the problem on this 8.1 install).


Should I add a primary key column of serial?  Will that help?

If anyone has any ideas it would be appreciated.

--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] dblink does not connect when activated in a remote server

2007-12-18 Thread Terry Fielder

Does the remote server have the relevant port opened up in its firewall?

Is postgres set to allow TCP/IP connections in the postgresql.conf?
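For reference, the two settings in question look roughly like this (a sketch; listen_addresses is the 8.0+ name, older releases use tcpip_socket, and the pg_hba.conf line is an illustrative example, not taken from the thread):

```conf
# postgresql.conf on the server that dblink connects to
listen_addresses = 'localhost'   # or '*' / a specific interface; pre-8.0: tcpip_socket = true
port = 5432

# pg_hba.conf: a TCP entry must exist for the address dblink uses,
# e.g. for hostaddr=127.0.0.1:
host  all  all  127.0.0.1/32  md5
```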

Terry

Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Abraham, Danny wrote:

dblink problem:

I am connecting to a remote server and running a function that calls
another one using dblink.
It fails with the error message 08001 - could not establish
connection.

When the function is located on a local server, it runs fine.

The connection string is:


'hostaddr=127.0.0.1 .'

The following were tried but did not help:
'host=localhost' and even providing the real target server
('host=tlvl0390').

It happens on Windows as well as Unix.

Can you help? 


Thanks

Danny


---(end of broadcast)---
TIP 6: explain analyze is your friend

  


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Quick Regex Question

2007-12-20 Thread Terry Fielder



Howard Cole wrote:

> Hi Marijn, Andreas,
>
> I think Andreas is right, note the ordering of characters in the above
> example as [ ^] rather than [^ ].
> So if the '^' is taken as literal '^', can I check for the beginning
> of a string in the brackets, or am I forced to use the (^| ) syntax?

Why do you need to?  Check for the beginning of the string BEFORE the
set brackets.  The point of set brackets is "match one character from a
set of chars".  Since "beginning of string" can only match in one place,
it has no meaning as a member of a set.  Or in other words, if it had
meaning, it would need to be matched FIRST out of the set, and therefore
you can just remove it from the set and put it before the set brackets.

> Is it just me or are regular expressions crazy?

Complicated, not crazy.

Terry
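A sketch in PostgreSQL's own regex operator illustrating the point (the sample strings are mine):

```sql
-- '^' anchors only outside a bracket set; inside [ ] it is either a
-- literal (when not first) or negation (when first).
SELECT 'foo bar' ~ '(^|[[:space:]])bar';  -- true: "bar" at start or after a space
SELECT 'foobar'  ~ '(^|[[:space:]])bar';  -- false: "bar" is mid-word
SELECT 'bar baz' ~ '^bar';                -- true: anchor placed before the set, not inside it
```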



Howard

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Loging of postgres requests

2008-03-15 Thread Terry Fielder
You can turn up the verbosity of the Postgres logger to log all SQL
statements.  Look in postgresql.conf.


In particular, you can set Postgres to log statements that take over x
milliseconds to execute.  If you set log_min_duration_statement to 0,
it will log ALL statements, which would also give you what you want
if you want to see every SQL statement.
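Roughly, the postgresql.conf knobs involved (names as of the 8.x releases; a sketch, so check the docs for your version):

```conf
log_statement = 'all'               # log every statement ('none'/'ddl'/'mod'/'all')
log_min_duration_statement = 0      # 0 = log all statements with their duration;
                                    # e.g. 250 would log only those taking >= 250 ms
log_min_error_statement = error     # also log the statement that raised an error
```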


Terry

Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Dave Potts wrote:



I am using a 3rd-party front end to generate Postgres requests; it is
reporting an error with the database.


Is there any way of logging which SQL requests the application is
actually sending to Postgres?  I need to know if the error is being
caused by the application generating invalid SQL or by a problem with
the design of the database tables.


Dave.



  


Re: [GENERAL] How Big is Too Big for Tables?

2010-07-28 Thread Terry Fielder
If all the table files are the same structure, it's really not hard, just
a UNION clause.


Indeed, one can even create a VIEW that leverages that union clause to
simplify the code that needs to grab from the multiple tables.


As far as indexes go, a single table COULD be OK if you throw enough
hardware at it.  But if the data changes a lot and vacuuming/index
rebuilding is not keeping up, you could see degraded performance
even with high-end hardware.


Look at your indexes: are they 3-4 columns or fewer?  Then you will
likely be OK.  If there are several or more columns, your indexes will
be massive, and performance drops off with increased paging on even
just index usage.


NOTE:
If you compile the data into a SINGLE table, you could always break up 
your table into smaller tables using SELECT INTO statements that grab by 
state.  Then your queries that assume a single table for all states need 
to be tweaked to use union or (even better) tweaked to use a VIEW that 
already implements a union.


If a lot of querying would use the UNION'd view, you probably want to
avoid that.  If it's not very often, or "OK to wait a little bit longer",
the union will let you break up the data with probably only minor
impact when you need multiple states reported together.


You likely probably might almost sort of maybe be best to do a test case
on your hardware first; even if it's dummy meaningless data populated by
a script, it will give you a measurement of your expected performance
that is much more meaningful than my ramble above.  :)
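To make the suggestion concrete, here is a sketch (all table and column names are invented for illustration):

```sql
-- Split the big table by state with SELECT INTO...
SELECT * INTO parcels_co FROM parcels WHERE state = 'CO';
SELECT * INTO parcels_wy FROM parcels WHERE state = 'WY';

-- ...then hide the split behind a view for multi-state reporting.
CREATE VIEW parcels_all AS
    SELECT * FROM parcels_co
    UNION ALL
    SELECT * FROM parcels_wy;

-- Single-state queries hit the small per-state table directly;
-- regional queries go through the view.
SELECT count(*) FROM parcels_all WHERE state IN ('CO', 'WY');
```

UNION ALL rather than plain UNION avoids a needless duplicate-elimination step, since rows split by state cannot overlap.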


Terry


Terry Fielder
te...@greatgulfhomes.com
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Bill Thoen wrote:
I'm building a national database of agricultural information, and one
of the layers is a bit more than a gigabyte per state. That's 1-2
million records per state, with a multipolygon geometry, and I've got
about 40 states' worth of data. I'm trying to store everything in a
single PG table. What I'm concerned about is, if I combine every state
into one big table, will performance be terrible, even with indexes?
On the other hand, if I store the data in several smaller files, then
if a user zooms in on a multi-state region, I've got to build or find
a much more complicated way to query multiple files.


So I'm wondering, should I be concerned with building a single
national-size table (possibly 80-100 GB) for all these records, or
should I keep the files smaller and hope there's something like
ogrtindex out there for PG tables? What do you all recommend in this
case? I just moved over to Postgres to handle big files, but I don't
know its limits. With a background working with MS Access and bitter
memories of what happens when you get near Access's two-gigabyte
database size limit, I'm a little nervous about these much bigger files.
So I'd appreciate anyone's advice here.


TIA,
- Bill Thoen



Re: [GENERAL] Pet Peeves?

2009-01-29 Thread Terry Fielder

I have 2, closely related:

1) if I have multiple pids running queries, say all selects, I have no 
idea which pid is running which query


and that ties to:
2) If I try to kill 1 postgres pid (e.g. to abort a bad query), the 
whole backend shuts down and rolls back.

Can we get a way to look at and then kill a specific bad query?

Maybe this is not missing functionality, it can do it and I just don't 
know how.  Anybody want to wisen me up?  :)
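It can do both; a sketch (column names as in the 8.x catalogs, where the backend PID column is procpid and the query text column is current_query; superuser privileges assumed):

```sql
-- 1) See which backend PID is running which query:
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>';

-- 2) Cancel just that one query, without touching the rest of the server:
SELECT pg_cancel_backend(17583);   -- argument: a procpid from the query above
```

It is kill -9 on a backend that forces the whole server into recovery; pg_cancel_backend (or a plain SIGINT to the backend process) only cancels that backend's current query.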


Terry

Terry Fielder
te...@greatgulfhomes.com
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Gregory Stark wrote:

I'm putting together a talk on "PostgreSQL Pet Peeves" for discussion at
FOSDEM 2009 this year. I have a pretty good idea what some them are of course,
but I would be interested to hear if people have any complaints from personal
experience. What would be most interesting is if you can explain an example of
when the problem caused real inconvenience to you, since sometimes it's hard
to see from a theoretical description where the real harm lies.

So, what do people say? Is Postgres perfect in your world or does it do some
things which rub you the wrong way?

Feel free to respond on-list or if you prefer in personal emails. I do intend
to use the ideas you give in my presentation so mark anything you wouldn't be
happy to see in a slide at a conference some day.

  


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help - Urgent

2005-05-17 Thread Terry Fielder
Usually 5432.
It's listed in postgresql.conf, which could be in
/var/lib/pgsql/data/
or
/usr/local/pgsql/data
Terry
ElayaRaja S wrote:
I am facing a TCP/IP "connection refused" error.
I have enabled TCP/IP as true, and I would like to know the port
number, because I need to open that port in my firewall. Please let me
know.
Note: I am able to connect with the local IP address. The issue occurs
when I access through the public IP address.
--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Limitations of PostgreSQL

2005-10-12 Thread Terry Fielder
Wow, I must be missing something, because I cannot even dream up a case
where I think I might find a use for the MySQL INTERVAL() function, much
less actually NEED it.  :)


Terry

Michael Fuhr wrote:

On Wed, Oct 12, 2005 at 04:49:59PM -0500, Scott Marlowe wrote:


On Wed, 2005-10-12 at 16:16, Chris Travers wrote:

Compared to MySQL, I can't think of any downsides.  All relevant 
usability issues have been solved, though there are some functions like 
INTERVAL that are not supported (see my migration guide at 
http://www.metatrontech.com/wpapers/)


What, exactly, is the interval function in MySQL?  IS that one that
creates a sequence of numbers or whatnot?  If so, there is an equivalent
in 8.0 now.  By the way, interval is a SQL reserved keyword, so it's
surprising MySQL would choose to name a function after it.



Surprising?  C'mon now, this is MySQL :->

Here's an excerpt from the MySQL documentation:

INTERVAL(N,N1,N2,N3,...)
Returns 0 if N < N1, 1 if N < N2 and so on or -1 if N is
NULL.  All arguments are treated as integers.  It is required
that N1 < N2 < N3 < ... < Nn for this function to work
correctly.  This is because a binary search is used (very fast).

mysql> SELECT INTERVAL(23, 1, 15, 17, 30, 44, 200);
-> 3
mysql> SELECT INTERVAL(10, 1, 10, 100, 1000);
-> 2
mysql> SELECT INTERVAL(22, 23, 30, 44, 200);
-> 0
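(Aside: much later PostgreSQL releases, 9.5 and up, did grow a near-equivalent: width_bucket taking an array of sorted thresholds. A sketch:)

```sql
-- Returns the bucket number, i.e. how many sorted thresholds the operand
-- reaches, much like MySQL's INTERVAL().
SELECT width_bucket(23, array[1, 15, 17, 30, 44, 200]);  -- 3, matching the MySQL example
SELECT width_bucket(10, array[1, 10, 100, 1000]);        -- 2
```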



--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

2005-10-19 Thread Terry Fielder



Tom Lane wrote:

Christopher Kings-Lynne <[EMAIL PROTECTED]> writes:

Strangely a pgsql to oracle exporter is a good thing.  It'd be a great 
feature of PostgreSQL.  Imagine how many people would start on 
PostgreSQL if they KNEW that one day they could easily move to Oracle if 
they needed to.  Risk management.



Problem is: to offer such a thing with a straight face, we'd have to
confine ourselves to an Oracle-subset version of SQL.  For instance,
lose the ability to distinguish empty-string from NULL.


Oh please PLEASE *PLEASE* don't bend that way.  Oracle has some
SQL-noncompliant flaws, and at least one is serious: the inability to
distinguish between the absence of a value and an explicitly empty
string is just ONE piece of Oracle's ridiculous fubarness.  People who
know what a NULL really is and use it properly have to program around
Oracle's stupidity to "dumb it down" for the weak application developer;
let's not do that.


Terry



regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

2005-10-19 Thread Terry Fielder
OK, I am not an expert on the SQL standard, but I thought the definition
varied by data type, e.g. varchar <> bpchar.


Terry

Marc G. Fournier wrote:

On Wed, 19 Oct 2005, [EMAIL PROTECTED] wrote:

I was referring to trailing blanks, but did not explicitly say it, 
though showed it in the examples.  I am pretty sure that the SQL 
standard says that trailing whitespace is insignificant in string 
comparison.



Then we are broken too :)

# select 'a ' = 'a  ';
 ?column?
--
 f
(1 row)
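The type dependence can be made explicit with casts; a small sketch (PostgreSQL semantics: character(n) comparison disregards trailing blanks, while text/varchar comparison does not):

```sql
SELECT 'a '::text = 'a  '::text;         -- false: trailing blanks are significant in text
SELECT 'a '::char(3) = 'a  '::char(3);   -- true: bpchar comparison ignores trailing blanks
```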


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org



--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

2005-10-19 Thread Terry Fielder

I agree with you, but...

Actually that's not how the compare works usually.

Generally one of the operands is converted to the same datatype as the 
other, and THEN the compare is performed.


I expect MS SQL is converting a 'sdas' typeless string to be assumed 
CHAR and Postgresql is converting a 'sdas' typeless string to be assumed 
VARCHAR.


Hence, the different behaviour.

Terry

Dann Corbit wrote:

Would you want varchar(30) 'Dann Corbit' to compare equal to bpchar(30)
'Dann Corbit'?

I would.  


If both are considered character types by the language, then they must
compare that way.

Perhaps there are some nuances that I am not aware of.  But that is how
things ought to behave, if I were king of the forest.





--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

2005-10-19 Thread Terry Fielder

Hi Dann

Without looking at the internals to see whether one column or the other
is being converted to the other column's type before the compare, it
really demonstrates nothing.


It could perhaps be used to help demonstrate that when comparing a 
datatype of CHAR to VARCHAR that

MS-SQL converts the VARCHAR to CHAR and then does the compare
Postgres converts the CHAR to VARCHAR and then does the compare

But there isn't even enough evidence here to support that.

Terry

Dann Corbit wrote:

create table foo (col1 varchar(30))
go
create table bar (col1 char(30))
go
insert into foo values ('Danniel ')
go
insert into bar values ('Danniel   ')
go
select * from foo,bar where foo.col1=bar.col1
go

Result set:
Danniel 	Danniel   









--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: 'a' == 'a ' (Was: RE: [pgsql-advocacy] [GENERAL] Oracle buysInnobase)

2005-10-19 Thread Terry Fielder



Dann Corbit wrote:

Try this query in Oracle, SQL*Server, DB/2, Informix, etc.:

connxdatasync=# select 1 where cast('a' as varchar(30)) = cast('a ' as
varchar(30));
 ?column?
--
(0 rows)

I see how you can interpret the SQL Standard to make the above response
a correct one.  But is it the response that you would like?


When the compared datatypes are VARCHAR: YES
When the compared datatypes are fixed length like CHAR: NO



Suppose, for instance, that you have an employee table in your database.
You have another table called benefits.  Perhaps it is even in another
database.  Conceivably even in a database other than PostgreSQL.

Anyway, you want to match information between the two systems so you
join on some redundant columns like a.First_Name = b.fname AND
a.Last_name = b.lname AND a.Middle_Initial = b.mi AND a.City = b.city
AND a.Street_name = b.street

If the columns are not the same length (or one is fixed length and the
other variable), then 'Danniel' won't match 'Danniel' unless you trim
it.  If you trim it, then the indexes go out the window.  If the indexes
go out the window, then we table scan.


If you want to compare datatypes that are different, and you don't have 
an index that accommodates, that's the price you pay for comparing 
different data types.




I don't like that behavior.  Perhaps others who know more than I do can
explain why not blank-padding comparisons is a good idea.

Clearly, one could argue that having redundant data is bad and that
every attribute in a database intended to match should be exactly the
same type.  But there are lots of badly designed database systems.  And
even among well designed systems, it is not uncommon to have more than
one database in your organization, and a corresponding need to perform
federated joins.



-Original Message-
From: Tino Wildenhain [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 19, 2005 1:05 PM
To: Marc G. Fournier
Cc: Dann Corbit; [EMAIL PROTECTED]; pgsql-
[EMAIL PROTECTED]; pgsql-general@postgresql.org
Subject: Re: 'a' == 'a ' (Was: RE: [pgsql-advocacy] [GENERAL] Oracle
buysInnobase)

Am Mittwoch, den 19.10.2005, 16:29 -0300 schrieb Marc G. Fournier:


I'm CC'ing this over to -hackers ... Tom?  Comments?

On Wed, 19 Oct 2005, Dann Corbit wrote:



Yes, clearly that is the wrong result according to the SQL standard.


Here is a SQL*Server query:
select 1 where 'a' = 'a ' AND 'a' = 'a  ' AND 'a ' = 'a '

It returns (correctly): 1



-Original Message-
From: [EMAIL PROTECTED] [mailto:pgsql-general-
[EMAIL PROTECTED] On Behalf Of Marc G. Fournier
Sent: Wednesday, October 19, 2005 11:41 AM
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Subject: Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

On Wed, 19 Oct 2005, [EMAIL PROTECTED] wrote:



I was referring to trailing blanks, but did not explicitly say it,
though showed it in the examples.  I am pretty sure that the SQL
standard says that trailing whitespace is insignificant in string
comparison.


Then we are broken too :)

# select 'a ' = 'a  ';
 ?column?
--
 f
(1 row)



experiment=# SELECT 'a '::char = 'a  '::char;
?column?
--
t
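The type-by-type behaviour can be sketched directly in psql; a minimal illustration of the PAD vs NO PAD split on 8.x semantics:

```sql
-- char(n) is blank-padded (PAD SPACE semantics): trailing blanks are
-- insignificant when two char values are compared
SELECT 'a '::char(3) = 'a  '::char(3);      -- t

-- varchar and text behave as NO PAD: trailing blanks are significant
SELECT 'a '::varchar = 'a  '::varchar;      -- f
```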





---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match



--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

2005-10-19 Thread Terry Fielder
Converting '   ' to '', well, that depends on the definition of the 
datatype, PAD/NO PAD, ad nauseam.


Converting '' to NULL, that's just wrong, and here's some examples of why:

In Oracle, there is no easy way to determine the difference between 
"there is no value" and "the value the user entered was the empty 
string".  It's as simple as that.


In virtually any other database, NULL means "not defined", aka "absence of 
value", whereas '' means "a string of zero characters".


Bah humbug, you may say.  But consider, should:
rtrim('   ') = ''

The answer is simple: YES, the 2 are equal.

Oracle has always had the '' ==> NULL flaw.
And it may have been to compensate for that flaw that they added the
'   ' ==> NULL flaw.
Although it may be that in the background what is really happening is:
'   ' ==> '' ==> NULL

Guess mommy Oracle forgot to mention that 2 wrongs don't make a right.  :)


Terry

[EMAIL PROTECTED] wrote:

Okay, since the standard explicitly says that whether 'a' = 'a ' is a
well-defined characteristic of a character datatype (NO PAD) I'm happy with
both Oracle and PostgreSQL.  If you want a certain behavior, choose your
datatypes wisely.  Cool.  I didn't in a recent port.  Uncool.  I went from
CHAR() in Interbase to VARCHAR2() in Oracle.  I shot myself in the foot,
and then complained about it before understanding the standard.  I'm now
better educated, thanks to all.

But, I still need to research the conditions under which Oracle converts ''
(zero length string) and '  ' (all blank string) to NULL.  Then, before
complaining about it, I'll read the standard again.  Since everybody
complains about it, I can't believe it is standard, but I have (very
recently) been wrong before.

Cheers,

Rick

Richard D Levine/US/Raytheon wrote on 10/19/2005 04:07:03 PM:



This is the salient sentence from the standard (that I've never
personally thought much about before now).

"If CS has the NO PAD characteristic, then the pad character is an
implementation-dependent character different from any character in the
character set of X and Y that collates less than any string under CS."

It boils down to saying "NO PAD strings of different length are
never equal".  So the correctness of any DB depends on whether the
type in question has the NO PAD characteristic.  So,  is varchar NO
PAD?  That's the real question.

Rick

"Dann Corbit" <[EMAIL PROTECTED]> wrote on 10/19/2005 03:57:26 PM:



create table fooa (col1 varchar(30))
go
create table bara (col1 varchar(300))
go
insert into fooa values ('Danniel ')
go
insert into bara values ('Danniel   ')
go
select * from fooa,bara where fooa.col1=bara.col1
go

Returns:
Danniel    Danniel

I think that the issue is:
Does PostgreSQL use something other than space as the pad character?
If so, what character is that (and why choose it over space)?
If not, then PostgreSQL is clearly returning the wrong results.



-Original Message-
From: Terry Fielder [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 19, 2005 2:02 PM
To: Dann Corbit
Cc: Marc G. Fournier; [EMAIL PROTECTED]; pgsql-
[EMAIL PROTECTED]
Subject: Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

Hi Dann

Without looking at the internals to see if the one column or the other is
being converted to the other column's type before the compare, it really
demonstrates nothing.

It could perhaps be used to help demonstrate that when comparing a
datatype of CHAR to VARCHAR that
MS-SQL converts the VARCHAR to CHAR and then does the compare
Postgres converts the CHAR to VARCHAR and then does the compare

But there isn't even enough evidence here to support that.

Terry

Dann Corbit wrote:


create table foo (col1 varchar(30))
go
create table bar (col1 char(30))
go
insert into foo values ('Danniel ')
go
insert into bar values ('Danniel   ')
go
select * from foo,bar where foo.col1=bar.col1
go

Result set:
Danniel    Danniel




-Original Message-
From: Terry Fielder [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 19, 2005 1:39 PM
To: Dann Corbit
Cc: Marc G. Fournier; [EMAIL PROTECTED]; pgsql-
[EMAIL PROTECTED]
Subject: Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

I agree with you, but...

Actually that's not how the compare works usually.

Generally one of the operands is converted to the same datatype as the
other, and THEN the compare is performed.

I expect MS SQL is converting a 'sdas' typeless string to be assumed
CHAR and Postgresql is converting a 'sdas' typeless string to be assumed
VARCHAR.

Hence, the different behaviour.

Terry

Dann Corbit wrote:



Would you want varchar(30) 'Dann Corbit' to compare equal to


bpch

Re: [GENERAL] Disappearing Records

2005-11-01 Thread Terry Fielder
Do you have any cascading deletes that could be doing this by performing 
a delete on a different table and cascading to the table in question?
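
A minimal sketch of the effect (the table names here are made up for illustration): the only DELETE ever issued names the parent table, yet the child rows vanish too, with no DELETE against the child appearing anywhere:

```sql
CREATE TABLE customer (
    id integer PRIMARY KEY
);
CREATE TABLE orders (
    id          integer PRIMARY KEY,
    customer_id integer REFERENCES customer(id) ON DELETE CASCADE
);

INSERT INTO customer VALUES (1);
INSERT INTO orders   VALUES (10, 1);

-- This single statement also removes orders row 10 via the cascade,
-- even though no DELETE was ever issued against orders directly.
DELETE FROM customer WHERE id = 1;

SELECT count(*) FROM orders;   -- 0
```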


Terry

Rory Browne wrote:

Hi all

What is the first thing you would do, when you find that your system
has been losing information? Information is there at one stage, and
later it's not.

I tried checking the logs, but there isn't a delete or drop there
anywhere, nor anything else that seems relevent. I tried googling for
various rewordings of "PostgreSQL disappearing records", but didn't
get anything useful.

Since this isn't something I can recreate, I'm not sure what to do about it.

The fact that I never really used PostgreSQL that much before may be a
hindering factor, but I can't think why information would just
mysteriously disappear.

Regards
Rory

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org



--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] PG_DUMP without asking password

2006-09-22 Thread Terry Fielder
If you run it on the same server, with a user who has permission to 
access the db using ident (e.g. user "postgres"),

i.e. you have a pg_hba.conf entry like:
local   all   postgres   ident sameuser

Then pg_dump and pg_dumpall will not require a password.

Alternatively, you can create a .pgpass file in the user's home directory 
(if ident is not available; but you *need* to think about the permissions 
of this file and the ramifications)
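
A sketch of the .pgpass route (hostname, database, user, and password below are placeholders; libpq also honors the PGPASSFILE environment variable, used here so the sketch does not hard-code the home directory):

```shell
# Choose the password file location; libpq defaults to ~/.pgpass
PGPASSFILE="${PGPASSFILE:-$HOME/.pgpass}"

# One line per credential: hostname:port:database:username:password
cat > "$PGPASSFILE" <<'EOF'
localhost:5432:mydb:backup_user:s3cret
EOF

# libpq silently ignores the file unless it is readable only by its owner
chmod 0600 "$PGPASSFILE"

# pg_dump should now run without prompting, e.g.:
# pg_dump -h localhost -U backup_user mydb > mydb.sql
```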


Terry

Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Alain Roger wrote:

Hi,

is there a way to back up the database via a command script, without 
PostgreSQL requesting the user password?


thanks a lot,

Alain


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] share lock error

2006-10-05 Thread Terry Fielder

I am getting this in my log file:
2006-10-05 16:06:23 [6469] ERROR:  deadlock detected
DETAIL:  Process 6469 waits for ShareLock on transaction 668582701; 
blocked by process 28763.
   Process 28763 waits for ShareLock on transaction 668586325; 
blocked by process 6469.


I believe the scenario is likely like:
user 1 does something which causes a lock on element a
user 2 does something which causes a lock on element b
user 1 tries to do something and blocks waiting for a lock on b
user 2 tries to do something and blocks waiting for a lock on a
deadlock

I believe that the elements "a" and "b" are different tables.

The solution is to have both apps lock "a" first, then "b", hence no 
deadlock could occur.  Problem is, I don't know what the underlying 
entities are.
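
One common fix, sketched here with a hypothetical accounts table, is to make every transaction acquire its row locks in one agreed order (say, ascending primary key) before doing any updates:

```sql
BEGIN;

-- Lock both rows in a fixed order (ascending id) up front.
-- If every code path does this, the circular wait cannot form.
SELECT *
  FROM accounts
 WHERE id IN (7, 42)
 ORDER BY id
   FOR UPDATE;

UPDATE accounts SET balance = balance - 100 WHERE id = 42;
UPDATE accounts SET balance = balance + 100 WHERE id = 7;

COMMIT;
```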


Is there somewhere I can gather more information about which tables or 
entities are behind the transaction number?


Thanks in advance


--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] share lock error

2006-10-05 Thread Terry Fielder




7.4.3

And the records are gone from pg_locks, how much time after the
deadlock do I have before they are purged?



Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085


Tom Lane wrote:

  Terry Fielder <[EMAIL PROTECTED]> writes:
  
  
I am getting this in my log file:
2006-10-05 16:06:23 [6469] ERROR:  deadlock detected
DETAIL:  Process 6469 waits for ShareLock on transaction 668582701; 
blocked by process 28763.
Process 28763 waits for ShareLock on transaction 668586325; 
blocked by process 6469.

  
  
  
  
I believe that the elements "a" and "b" are different tables.

  
  
Actually, what you're looking at there is a conflict on row-level locks
being obtained in opposite orders.  What PG version is this?  If it's
8.1 you can identify the row in question from other entries in pg_locks,
but if it's older then there's no easy way to find out.
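
For the 8.1 case, a sketch of that lookup (column names as in the 8.1 pg_locks view; it has to be run while the waiters are still blocked, since the entries disappear as soon as the deadlock is broken):

```sql
SELECT l.pid,
       l.locktype,
       l.relation::regclass AS relation,   -- table involved, when applicable
       l.page,
       l.tuple,                            -- identifies the exact row
       l.transactionid,
       l.mode,
       l.granted
  FROM pg_locks l
 WHERE NOT l.granted
    OR l.locktype IN ('tuple', 'transactionid')
 ORDER BY l.pid;
```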

			regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings

  





Re: [GENERAL] COPY FROM : out of memory

2006-11-23 Thread Terry Fielder

Tom Lane wrote:

Arnaud Lesauvage <[EMAIL PROTECTED]> writes:
  
When trying to import a 20M rows csv file into PostgreSQL, I 
get :



  

ERROR: out of memory
État SQL :53200
Détail :Failed on request of size 1073741823.
Contexte : COPY tmp, line 1



Can you put together a self-contained example?  The reference to "line
1" suggests that you wouldn't need the whole 20M row file, just the
first few rows ...
  
Unless it's not seeing the end of the first record AS the end of the 
first record, and hence seeing the whole file as 1 record.


Terry


regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/

  


Re: [GENERAL] Thanks to all

2007-03-01 Thread Terry Fielder

You may encounter many things in life which you may regret having done.

Converting from M$ SQL to PostgreSQL is probably *not* one of them.

Terry

Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Paul Lambert wrote:
Thanks to all who have helped me over the last month or so with 
converting my system from M$ SQL server to Postgres.


Unfortunately I've decided to scrap the project and continue working 
with M$ SQL Server... PG just isn't doing what I want.






No... I jest, I've finally got the entire system up and running 
exactly as I want it and I am ready to do my first customer install. 
That's another 40 or 50 PG installs over the next few months :D


A big thank you to everyone who has given me advice up to now... and a 
big thanks in advance to those who will continue to advise me in the 
future ;)


Regards,
Paul.



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: hardware checks (was Re: [GENERAL] invalid memory alloc request

2006-01-23 Thread Terry Fielder

I second Tom:

badblocks and memtest86 are what I use, and they work great on all kinds of 
hardware.  You don't even need a specific OS for memtest86, because you 
can make a bootable floppy and test any old piece of hardware it recognizes.


Terry


--
Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Unnecessary function calls

2006-05-02 Thread Terry Fielder

SELECT id, get_category_text_path(id)
FROM category
WHERE id IN (
  SELECT c.id
  FROM category AS c
  ORDER BY c.rank
  LIMIT 5
)

Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Markus Schiltknecht wrote:

Hi,

when using LIMIT, how do I tell the planner to only call a function for
rows it returns?

An example: I want to fetch the top five categories. A function
get_category_text_path(cat_id int) returns the textual representation of
the category. For that I do something like:

SELECT id, get_category_text_path(id)
   FROM category
   ORDER BY rank
   LIMIT 5

Unfortunately this takes very long because it calls
get_category_text_path() for all of the 450'000 categories in the table.
But I only need the full text path of the top five rows.

It does not matter if I declare the function to be IMMUTABLE, STABLE or
VOLATILE - it gets called for every row in category (which normally is
what you want I guess).

How can I rewrite the query to call get_category_text_path() only for
the top five rows?

Thanks for hints.

Markus



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match

  


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Unnecessary function calls

2006-05-02 Thread Terry Fielder




1) If the join to rank_lookup is done across Pkey and/or unique
indexed fields and a foreign key is defined for said join, I don't know
how the subquery could possibly be more optimized than it is (the
reasoning being beyond the scope of this discussion and possibly even
beyond the scope of the area in which I can safely comment :)

2) It is my understanding and experience (I could be unaware of a trick
or special case, however) that using an IN clause is LESS
efficient than joining to the table.  The only reason I used the IN
clause is because, as you indicated, you were only asking for the top
five, which is a very small set (you probably would not want to do that
if the set was large).
Indeed, the IN clause is a de-optimization; it only HAPPENS to make the
query run faster because it allows you to avoid calling the function in
the SELECT list for all but the selected 5 rows (which was the goal you
requested).

3) In SQL there is almost always more than one way of doing something;
you have now seen 2.  There may be more, possibly even better ways.
Experts care to comment?  :)


Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085


Markus Schiltknecht wrote:

  Hello Terry,

Thanks a lot. That's so simple I didn't see it. (The original query is
much more complex.)

The only problem is, rank is not a column of category itself, but a
joined row. With this solution, the join will have to be performed
twice. But since this doesn't cost that much and because the second join
is only done for 5 rows at the max this does not hurt.

The more complete query now looks a little ugly:

SELECT id, get_category_text_path(id), r.rank
FROM category
JOIN rank_lookup AS r ON cat_id = id
WHERE id IN (
SELECT c.id
FROM category AS c
JOIN rank_lookup AS rr ON rr.cat_id = c.id
ORDER BY rr.rank
LIMIT 5
)

It's not possible to optimize out that second join, is it?

Regards

Markus

On Tue, 2006-05-02 at 07:39 -0400, Terry Fielder wrote:
  
  
SELECT id, get_category_text_path(id)
FROM category
WHERE id IN (
   SELECT c.id
   FROM category AS c
   ORDER BY c.rank
   LIMIT 5
)

  
  


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match

  





Re: [GENERAL] EnterpriseDB

2006-05-05 Thread Terry Fielder

By "regular PostgreSQL" do you mean a PostgreSQL 7.x install?

Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Matthew Hixson wrote:

http://www.enterprisedb.com/news_events/press_releases/04_24_06.do

Was wondering if anyone had used that to import an Oracle .dmp file 
and then export a Postgres 8.1 dump file for import into a regular 
PostgreSQL database.

  -M@

---(end of broadcast)---
TIP 6: explain analyze is your friend



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Can't Figure Out Where Rows Are Going

2006-05-06 Thread Terry Fielder
Is there any chance the rows are being inserted in a transaction that 
fails and rolls back?  Maybe look at all the lines that were inserted 
with that order, and try them manually in psql, character for character, 
and see if an error pops up for any of the lines?
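
A small psql illustration of that failure mode (hypothetical table and constraint): every INSERT shows up in statement logging as executed, yet nothing survives once a later statement aborts the transaction:

```sql
CREATE TABLE order_lines_demo (
    order_id integer NOT NULL,
    qty      integer NOT NULL CHECK (qty > 0)
);

BEGIN;
INSERT INTO order_lines_demo VALUES (16899, 1);   -- logged, looks fine
INSERT INTO order_lines_demo VALUES (16899, 0);   -- violates the CHECK
COMMIT;   -- transaction is already aborted, so this rolls back instead

SELECT count(*) FROM order_lines_demo WHERE order_id = 16899;   -- 0
```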


Terry Fielder
[EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085



Rodrigo Gonzalez wrote:

Just to be sure

Any error msg in log?

Maybe you can run this query in psql and see if it return any error msg

Best regards

Rodrigo

HH wrote:

Hello,

I am running 8.1.3, built from source, on RHEL4/x64.

I have a Web application that uses this database to sell products. We 
have

an order table and an order lines table with a one to many relationship
between them.

For the past few months I have had a strange problem where sometimes 
(about
2% of orders), the line item rows disappear. By disappear I mean that 
they
are committed to the database and then when I go to look at the order 
later,
there are no line items. The row in the 'order' table still exists, 
it is

just the line(s) that vanish.

As I started looking into this problem I assumed that it was a bug in 
my Web
application. I did some extensive testing and logging to no avail. I 
turned
up the logging on my Web app and I can see the INSERTs but I never 
saw any

DELETE statements though I can't find the child row(s) in the DB.

I've been perplexed for quite some time so a few days ago I turned on 
the

following PG logging:

log_statement = 'mod'

Today, I found an order that has this problem. Grepping my serverlog, 
I see

the following:

The line item is inserted:

serverlog:LOG:  statement: INSERT INTO order_lines ("order_description",
"order_id", "updated_at", "band_name", "order_item", "product_id",
"line_source", "order_quantity", "extended_price", "unit_price",
"catalog_number", "created_at") VALUES('FW "Not My Pres" Trucker Hat',
16899, '2006-05-06 14:43:38', NULL, 'Baseball Cap ', 165, 'Merch', 1, 
NULL,

7.0, 94, '2006-05-06 14:43:38')

Then, I do a SELECT:

fw_production=# SELECT COUNT(*) FROM order_lines WHERE order_id = 16899;
 count
-------
     0

There was about 3 hours between when the line was inserted and when I 
went

to look for it. There is no DELETE in the serverlog for this row.

I can't figure out where this row went and why it disappeared! Is it 
getting

deleted some other way that wouldn't be logged? Am I missing something
rudimentary?

Any help is appreciated.

Hunter



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match

  


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings





[GENERAL] Default value for bit datatype

2001-06-12 Thread Terry Fielder

When I try to do a default value for the bit datatype, eg:
CREATE TABLE my_calendar (
 ID serial NOT NULL ,
 Global_field bit DEFAULT 0 NOT NULL
)

I get the message:
ERROR:  Attribute 'global_field' is of type 'bit' but default expression
is of type 'int4'
You will need to rewrite or cast the expression

I have tried (bit) 0, bit(0), bit 0, etc but have not found a way to do
casting.  Can anyone help me to set a default value for a bit datatype
field?

Thanks

Terry Fielder
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]